
Friday, May 10, 2013

Risk Planning - Look at bug curves for previous projects and identify patterns - Part 4

This is a series of posts that I have been writing about the process of risk planning for a project. For a software development project, one of the biggest risks relates to the defects that are present during the software development cycle. These risks could be because the number of defects is higher than expected, because the defects are of a much higher severity than expected, or because some of the defects are getting rejected, with the fixes being either inadequate or totally problematic. Typically, for software or projects that have gone through several versions, there is a lot of information available in the data from the previous versions, and this information is very useful for making predictions about the defect cycle during the current version. However, at the same time, my experience has taught me that not everybody even considers looking at previous cycles to figure out the problematic areas that may turn up, and how to handle them. Being able to predict such problem points and then figuring out how to resolve them is an integral part of risk planning.
In the previous post (Looking at bug curves for previous projects and identifying patterns - Part 3), I talked about how the bug graph analysis from previous projects indicated that there were a large number of open defects soon after the point when the development team had completed the handover of all the features to testing. This was essentially because a number of the big ticket features came in near this date, and this was the first time that the testing team was able to test them. This overload of defects was putting a lot of pressure on the team, and it was unhealthy for the project. We had to resolve this, but we would never have realized the problem if we had not mined the data from previous projects.
In this post, I will take another example of what we did by mining the defect data from previous cycles and then doing some analysis on it. There can be many such cases, and I will take up some of the cases that were important to us in these posts, but you can extrapolate this to as many cases as you like, gather the data, and do the analysis.
One of the quests we had for our team was to figure out how to decrease the time period for which a defect was open with a developer - or more accurately, the time it took before a developer first looked at a defect. One of the biggest problems was that there would be defects that were not looked at by a developer for many weeks because of the load the developers had, and by the time they got around to looking at many of these defects, the tester did not accurately remember the conditions, or the software application had changed many times since then because of all the developers checking in their changes, and it was not easy to resolve such defects. In short, the quest was to reduce defect ageing from the many weeks it currently was, and everybody agreed that we had to do something. However, before we proceeded, we needed to know the extent of the problem, especially at different parts of the development cycle.
For this purpose, we again needed to refer to the defect database for data from previous versions, set up the query for this, and then get the data. To some extent, we also compared the data across multiple previous versions to see whether there were patterns in the data, and we did find some. The analysis of how to reduce this ageing, and what it involved in terms of changes to defect management, took effort, but there were rewards for it. The most important part remained that we could only do all this analysis once we had the data for the previous cycles, and this was recognized as an important part of risk management.
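As a rough illustration of the kind of query and analysis described here, the following is a minimal sketch in Python. It assumes a hypothetical export of the defect data with a creation date, the date a developer first acted on the defect, and a project phase; the file name and column names are illustrative, not those of any specific defect tracker.

# Sketch: compute defect ageing (days until a developer first acted on a defect)
# per project phase, from a hypothetical CSV export of the defect database.
import csv
from datetime import datetime
from statistics import mean

def days_between(start, end):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

ages_by_phase = {}
with open("previous_cycle_defects.csv") as f:
    for row in csv.DictReader(f):
        # 'created', 'first_dev_action' and 'phase' are assumed column names
        age = days_between(row["created"], row["first_dev_action"])
        ages_by_phase.setdefault(row["phase"], []).append(age)

for phase, ages in sorted(ages_by_phase.items()):
    print(f"{phase}: average ageing {mean(ages):.1f} days over {len(ages)} defects")
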

Read more about this in the next post (Risk Planning - Look at bug curves for previous projects and identify patterns - Part 5)


Thursday, May 9, 2013

Risk planning - Look at bug curves for previous projects and identify patterns - Part 3

This has been a series of posts on doing risk planning by focusing on the defect management side. There is a lot that can be done to improve defect management in the current cycle by looking at the defect trends from previous cycles (in most cases, there is a lot that can be learnt from previous cycles; it is only in totally new projects that nothing can be learnt). In the previous post (Looking at bug curves and identify patterns - Part 2), I took more of a look at ensuring that the defect database is properly set up for capturing the information needed to generate this data. Unless this sort of data can be generated for the current cycle, it cannot be used in the next cycles. Hence, there is a need to make sure that the required infrastructure is in place.
In this post, I will assume that the data from previous cycles is there to interpret and hopefully generate some actionable items from. In the first post in this series, I had already talked about some examples, such as identifying a phase of the project where more defects have been rejected in the past and then trying to change that. There are clear benefits to doing this kind of work, and the project manager should be able to get some clear improvements in the project.
The way to use this kind of defect data from previous cycles is to be systematic in analysing it, and then to identify some clear patterns from this analysis that point to improvements. One clear way of starting along this line is to take the entire defect curve from the previous cycle (charting the number of open bugs against the time axis), and to do the same for other bug statistics as well (consider, for example, the rate of defect closure, the number of defects that have been closed, the number of defects that were closed by a means other than fixing, and so on).
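To give a concrete feel for such a chart, here is a minimal Python sketch that plots the open-defect curve for a previous cycle. It assumes a hypothetical CSV export with one row per defect holding its open and close dates, and uses matplotlib; the file and column names are assumptions, not a real schema.

# Sketch: chart the number of open defects against time for a previous cycle.
# The CSV export and its column names ('opened', 'closed') are hypothetical.
import csv
from datetime import date, timedelta
import matplotlib.pyplot as plt

def parse(d):
    return date.fromisoformat(d) if d else None

defects = []
with open("previous_cycle_defects.csv") as f:
    for row in csv.DictReader(f):
        defects.append((parse(row["opened"]), parse(row["closed"])))

start = min(opened for opened, _ in defects)
end = max((closed or date.today()) for _, closed in defects)

days, open_counts = [], []
current = start
while current <= end:
    # a defect counts as open on a day if it was opened on or before that day
    # and not yet closed
    count = sum(1 for opened, closed in defects
                if opened <= current and (closed is None or closed > current))
    days.append(current)
    open_counts.append(count)
    current += timedelta(days=1)

plt.plot(days, open_counts)
plt.xlabel("Date")
plt.ylabel("Open defects")
plt.title("Open defect curve - previous cycle")
plt.show()
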
Based on these various defect statistics, a lot of information can be generated for analysis. For example, we used to find that the number of open defects would shoot up to its highest figure soon after the development team had completed their work on all the features; this was basically because a number of the features would come to the testing team near the final deadline for finishing development, and this was when the testing team was able to put in their full effort. They were able to find a large number of defects in this period, and as a result, the peak in open defects would happen around here. This was a critical point in the project, since the number of defects with each developer would reach its maximum and cause huge pressure.
Based on the identification of such a pattern, we put in a lot of effort to see how we could avoid a number of features coming in near the end. We had always talked about trying to improve our development cycle in this regard, but the analysis of the defects made it more important for us to actually make these changes. We prioritized the features in a different way, such that the features where we expected more integration issues were planned to be complete much earlier in the cycle. None of this was easy, but the analysis we had done showed how not changing anything would lead to the same situation of a large number of open defects, as well as increased pressure on the development and testing teams. Such pressure also led to mistakes, which caused more pressure, and so on.


Read more about this effort in the next post (Look at bug curves for previous projects and identify patterns - Part 4)


Wednesday, May 8, 2013

Risk planning - Look at bug curves for previous projects and identify patterns - Part 2

In the previous post (Looking at bug curves for identifying patterns - Part 1), I talked about how the defect curves for previous versions of the software product should be reviewed to look for patterns that will help in predicting defect patterns in the current release. As an example, if there was a phase in previous cycles where pressure caused a greater number of defects to be either partially or completely rejected, then that was worrisome. Such a problem area is something that should be focused on to ensure that these kinds of issues are reduced or removed altogether, and the benefits can be considerable.
In this post, I will continue on this subject. One of the biggest problems teams face is that they are so caught up in what is happening in the current release that they only use previous versions for help with estimation, and do not really use defect management and analysis to the level they should. One of the critical focus items in project management and risk planning is understanding that one looks not only at the current version of the software development cycle, but also at future versions. There is a lot of learning to be had from the current release, and one should ensure that we are able to gain from this learning in the next release.
The defect database should be set up in such a manner that, while it provides you the functionality to do your defect management for the current release, it also has the ability to store this information for later analysis. If, for example, your defect database is not able to record that defects have been rejected, or capture the fact that defects went back and forth multiple times between the developer and the tester, you are losing out on data that is pretty important. We actually ran into such a situation, where we wanted to determine which defects were going back and forth between developers and testers, or even between multiple people on the team. This was a way to figure out which defects were taking more time, and it seemed like a good place to start. The concept was that if we could figure these out from the previous cycle, and get a figure on which sets of people do not work well together, we could address that in the current cycle (too much back and forth between people over a defect is certainly not useful; you would expect people to collaborate and resolve issues rather than holding a discussion inside a defect).
However, things did not work as well as we expected. The defect management system did not have such a query or anything even similar to it. It was possible to get this information once we got access to the tables in the database, but this is not something that is easy or quick to do. We needed to get hold of people who had some expertise in how the database was structured, and also needed access to people who knew how to manipulate the database and get us the report we wanted. It took a fair amount of time, but in the end we got what we wanted. The results were interesting. They showed that there was a person on the team who wanted every bit of information to be in the database, even questions that could clearly just have been asked in person and did not add any value to the defect information. Instead, the defect was passed back to the other person, and progress then depended on the time and defect load of that other person. However, since the first person no longer had the defect assigned to him, normal statistics would not show any problem.
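For illustration only, the following Python sketch shows the kind of report we were after, computed from a hypothetical export of the defect history table (one row per assignment change, ordered by change date). The file and column names are invented for the example and do not correspond to any particular defect management system.

# Sketch: count how often each defect bounced between people, and which pairs
# of people bounce defects between them the most.
import csv
from collections import defaultdict

bounces = defaultdict(int)          # defect id -> number of reassignments
pairs = defaultdict(int)            # (person A, person B) -> bounce count

with open("defect_history.csv") as f:   # assumed export; columns are illustrative
    last_owner = {}
    for row in csv.DictReader(f):       # rows assumed ordered by change date
        defect, owner = row["defect_id"], row["assigned_to"]
        if defect in last_owner and last_owner[defect] != owner:
            bounces[defect] += 1
            pairs[tuple(sorted((last_owner[defect], owner)))] += 1
        last_owner[defect] = owner

print("Defects with the most back and forth:")
for defect, count in sorted(bounces.items(), key=lambda x: -x[1])[:10]:
    print(f"  {defect}: {count} reassignments")

print("Pairs of people with the most back and forth:")
for (a, b), count in sorted(pairs.items(), key=lambda x: -x[1])[:10]:
    print(f"  {a} <-> {b}: {count}")
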
We took some action on this in the form of counseling for the person from the manager, done in a non-threatening way, and this resulted in an improvement in defect handling. It was difficult to quantify the time saved, but we were satisfied with the results we could see, and felt that the improvements we had made in defect management would help reduce the load due to defects.

Read more about this effort in the next post (Look at bug curves for previous projects and identify patterns - Part 3)


Tuesday, May 7, 2013

Risk planning - Look at bug curves for previous projects and identify patterns - Part 1

As a part of risk planning for projects, defect management is one of the key areas to handle, and handle well. A large number of defects may not be a problem if they are expected, but if unexpected, they can cause huge problems for the schedule of a development cycle. As an example, if the number of defects is much higher than expected, several problems occur:
- The amount of time required to resolve these defects and test the changes will cause a strain on the schedule.
- It is not just the change in effort; a sudden high number of unanticipated defects will cause the team to wonder about the quality of the work done.
- Most importantly, it will cause a lot of uncertainty about whether all the defects have been found, or whether there are many more still to be found.
- When more defects are found, the overall cycle of analyzing the defect, figuring out the problem, doing an impact analysis of the change, making the change, getting somebody to review the change, and then testing the change will cause a lot of strain.
- Some of these changes can be big and will require more effort to validate the fix, and in some of these cases the team management will decide that they do not want to take the risk and would rather that the defect gets passed on to customers.
The above was just an example of what happens when a large number of defects suddenly crop up. However, it would be criminal on the part of team management and the project manager if they had not already done a study of the kinds of defects that come up at different stages of the development cycle. The best way to do this, and to do some kind of forecasting, is to look at the bug curves of previous cycles (which obviously cannot be done if the data about defects from previous cycles was not captured at the time and stored for later use).
We had started doing this over the past 2-3 years or so, and it helped us determine which points of the development cycle had the highest number of defects found, as well as closed, and even which were the times when there was the highest chance of defects not being fixed properly (either being partially fixed or being rejected totally by the tester). Now, even though every cycle would be different, there were some patterns that had a high probability of being repeated (and this held true even across different projects, although the details varied from project to project).
Let us take another example. There was a time in the project when defects had a higher chance of being rejected, and a rejected defect can be very expensive in terms of the time of both the developer and the tester. As a result, during the stages where this had happened in previous projects, we would ensure that there was a higher focus on impact analysis and code review, and we had even borrowed some senior developers from another project for around a month just for the additional focus on reviews. This paid off, since the number of defects getting rejected went down considerably; per individual defect this did not amount to much, but our analysis showed that the total saving of effort because of this additional focus was around 25%.
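As a rough sketch of how the rejection pattern itself can be pulled out of previous-cycle data, the Python below computes the fix rejection rate per project phase. The CSV export, the phase labels, and the resolution values are hypothetical placeholders for whatever your defect tracker records.

# Sketch: rejection rate of defect fixes per project phase, from a hypothetical
# CSV export of a previous cycle (column names and values are illustrative).
import csv
from collections import Counter

fixed, rejected = Counter(), Counter()
with open("previous_cycle_defects.csv") as f:
    for row in csv.DictReader(f):
        phase = row["phase"]                       # e.g. "alpha", "feature complete"
        if row["resolution"] == "rejected_by_tester":
            rejected[phase] += 1
        elif row["resolution"] == "fixed":
            fixed[phase] += 1

for phase in sorted(set(fixed) | set(rejected)):
    total = fixed[phase] + rejected[phase]
    rate = 100.0 * rejected[phase] / total if total else 0.0
    print(f"{phase}: {rate:.0f}% of fixes rejected ({rejected[phase]} of {total})")
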

Read more about this effort in the next post (Look at bug curves for previous projects and identify patterns - Part 2)


Wednesday, December 26, 2012

What is IBM Rational Purify?


Rational Purify is a dynamic analysis tool from IBM that is meant for carrying out analysis of software systems and applications and for helping software developers produce more reliable code. 

IBM Rational Purify comes with the following capabilities:
  1. Memory leak detection: This capability is related to the identification of memory blocks to which there are no valid pointers.
  2. Memory debugging: This capability is related to pinpointing memory errors which are quite hard to discover, such as the following:
a)   Points in the code where memory is freed improperly,
b)   Buffer overflows,
c)   Access to uninitialized memory, and so on.
  3. Performance profiling: This capability involves highlighting the bottlenecks in program performance and improving the understanding of the application via graphical representations of the calls to functions.
  4. Code coverage: This capability involves the identification of untested code with line-level precision.
- Platforms such as AIX, Solaris, Windows, Linux and so on are supported by IBM Rational Purify. 
- The code that is developed with the help of IBM Rational Purify is not only more reliable but also faster. 
- This analysis tool is very well supported by the Windows application development environment. 
- It has been observed that Windows applications developed using Rational Purify have proven to be quite reliable throughout. 
- There is no need to provide Rational Purify with direct access to the source code. 
- This makes it capable of being used with libraries belonging to third parties. 
- Languages such as .NET, Visual C++ etc. are supported by Rational Purify. 
- IBM Rational Purify is known to integrate well with Microsoft Visual Studio. 
- Almost all the software systems belonging to the Windows family are supported by IBM Rational Purify. 
- Corruption in memory is identified and debugging time is reduced significantly. 
- The reliability of the software's execution is also improved. 
- Also, the software systems and applications then make better utilization of the memory. 
- IBM Rational Purify comes with binary instrumentation technology, in which the code is instrumented at the object or byte level. 
- Here, re-linking or re-compilation of the software system or application is not required for the analysis of the code. 
- Further, third-party libraries can also be analyzed. 
- With the help of the IDE integration feature, Rational Purify integrates very well with Microsoft Visual Studio, thus cutting down on the need to switch between different tools with different types of user interfaces. 
- It therefore makes for a more productive and cohesive development environment and experience. 
- It helps in the testing as well as the analysis of the code as it is produced by the programmer. 
- Comprehensive support is provided for most of the programming languages. 
- The 'selective instrumentation' feature of Rational Purify enables a user to limit the analysis of the software to a subset of the modules which together comprise the whole application. 
- This helps greatly in the reduction of run-time and instrumentation overhead. 
- The reporting also gets limited to the modules concerned. 
- Rational Purify can also be run from the command line, since it comes with a command line interface. 
- Automated testing frameworks are also supported. 
- In this way, software developers are empowered to deliver a product with the quality that is expected by the users.


Wednesday, November 28, 2012

How to generate reports for analyzing the testing process in Test Director?


Reports and graphs in the Test Director testing process help you assess to what extent your requirements, test plans, test runs, defect tracking, etc. have progressed.

Generating Reports in Test Director

- In Test Director you have the facility of generating reports as well as graphs at any point of time in the testing process, and from each of the Test Director modules. 
- You have the choice of working with the default settings as well as customized ones. 
- While you customize the reports or the graphs, you can apply sort conditions as well as filters. 
- Also, the information can be displayed according to your specifications if you wish. 
- The settings you make can be saved as favorite views and can be reloaded whenever required. 
- A report can be generated from any of the modules of Test Director. 
- Each module of Test Director provides you with a variety of report generation options. 
- Once you have generated the report, you can customize various properties of the report as you wish. 
- The information can be displayed according to your specifications by altering or customizing the various properties of that report. 

In this article we shall provide you with the steps for generating a standard requirements report, customizing it for a specific user name and adding it to the favorites list.

Steps for generating reports in Test Director

Follow the steps mentioned below:
  1. The first step is to log in and open your project. If the project is not open, log on to it.
  2. To view the requirements tree you need to turn to the requirements module. To do so, click on the requirements tab; this will display the requirements module.
  3. The next step is to choose a report. To do so, go to the analysis option, then reports, and then finally click on the standard requirements report. A report will open up containing the default data.
  4. Next, customize the report as per your needs and specifications. Clicking on the configure report and sub-reports button will launch a report customization page with all the default options already selected.
  5. Here you will get various options for the number of items to display per page. Set the option to 'all items in one page' if you want them all to be displayed on one page.
  6. If you want to define a filter to view the requirements that were created by a specific user name, clicking on the set filter/sort button will help. A filter dialog box will open up where you will see a field titled 'author'. Click its filter condition box and then click on the browse button. This will open up the select filter condition dialog box. For the users field, select the Test Director log-in user name and click OK. This will close the select filter condition dialog box. Once again click OK to close the filter dialog box.
  7. Under fields, specify the fields and the order in which you want them to be displayed. Select the custom field layout and then click on the select fields button to open the select fields dialog box. You will observe the following two lists:
a) Available fields: fields that are not currently displayed.
b) Visible fields: fields that are currently displayed.
Select the attachment option in the visible fields box and click the left arrow in order to move it to the available fields. Move the fields you require between the boxes in the same way.
  8. Clear the history.
  9. You can add the report as a favorite view by clicking on the add to favorites button.
  10. Close the report. 


Tuesday, July 10, 2012

What Tools are used for code coverage analysis?


Code coverage analysis is an essential part of a complete and efficient software testing process. 
This analysis consists of the following three basic activities:
  1. Finding the areas of the software system or application that have not been exercised by the set of tests performed so far.
  2. Creating additional test cases so that the code coverage can be increased.
  3. Determining a quantitative measure of the code coverage, which provides an indirect measure of the quality of the software system or application.
Apart from this, there is one more optional aspect of code coverage analysis: it helps in the identification of redundant test cases that add to the size of the test suite but do not increase the coverage measure.
In this article we discuss the tools that make this whole process of code coverage analysis quite easy.

Tools Used for Code Coverage Analysis


- Code coverage analysis is quite an effort- and time-consuming process and is therefore nowadays automated using tools such as code coverage analyzers. 
- But a code coverage analyzer cannot always be used, for example in situations when the tests have to be run against the release candidate.
- For different languages, many different tools are available for code coverage analysis.
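To give a concrete feel for how such an analyzer is used, here is a minimal sketch with coverage.py (one of the Python tools listed below). The module name under test is a placeholder; substitute your own code and test suite.

# Sketch: measuring code coverage with coverage.py.
# 'test_my_module' is a placeholder for your own test module.
import coverage
import unittest

cov = coverage.Coverage()
cov.start()

# Run whatever exercises the code - here, a unittest test module is assumed.
unittest.main(module="test_my_module", exit=False)

cov.stop()
cov.save()
cov.report(show_missing=True)   # lines never executed point to untested areas

# Roughly equivalent from the command line:
#   coverage run -m unittest
#   coverage report -m
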

  1. For C++ and C programming languages:
a)  Tcov
b)  BullseyeCoverage
c)  Gcov
d)  LDRA Testbed
e)  NuMega TrueCoverage
f)   Tessy
g)  Trucov
h)  Froglogic's Squish Coco
i)   Parasoft C++test
j)   Testwell CTC++
k)  McCabe IQ
l)   Insure++
m) Cantata

  2. Tools for C#:
a)  McCabe IQ
b)  JetBrains dotCover
c)  NCover
d)  Visual Studio 2010
e)  Parasoft dotTEST
f)   TestDriven.NET
g)  Kalistick
h)  DevPartner

  3. Tools for Java:
a)  McCabe IQ
b)  Clover
c)  EMMA
d)  Kalistick
e)  JaCoCo
f)   JMockit Coverage
g)  Code Coverage
h)  LDRA Testbed
i)   Jtest
j)   DevPartner
k)  Cobertura

  4. Tools for JavaScript:
a) McCabe IQ
b) JSCoverage
c) Code Coverage
d) ScriptCover
e) Coveraje

  5. Tools for Perl:
a) McCabe IQ
b) Devel::Cover

  6. Tools for Haskell:
a) HPC (Haskell Program Coverage) toolkit

  7. Tools for Python:
a) McCabe IQ
b) Figleaf
c) Pester
d) Coverage.py

  8. Tools for PHP:
a) McCabe IQ
b) PHPUnit

  9. Tools for Ruby:
a) Rcov
b) McCabe IQ
c) SimpleCov
d) CoverMe

  10. Tools for Ada:
a) GNATcoverage
b) McCabe IQ
c) RapiCover

Out of all the above mentioned tools for C and C++, BullseyeCoverage has proven to be the best code coverage analyzer in terms of reliability, usability, platform support, etc. 
This coverage analyzer differs from the other analyzers in the following ways:
  1. Better coverage measurement
  2. Wide platform support
  3. Rigorously tested
  4. Efficient technical support
  5. Quite easy to use.
- Using this tool it can be determined how much of the software system's or application's code was tested, and this information can later be employed to focus your testing efforts on areas that require some improvement.
- With BullseyeCoverage more reliable code can be created and time can be saved. 
- The function coverage provided by BullseyeCoverage gives you very high precision.

You can include or exclude the parts of the code of your choice. What is more, you can even merge the results obtained from distributed testing, and run-time code from custom environments can also be included. 


Saturday, June 30, 2012

Typically, what are the testing activities that are automated?


With the use of test automation on the rise, a lot of processes and testing activities are being automated nowadays as far as possible. Test automation has been widely accepted since it is quite an effective testing methodology that can help the software engineering industry keep pace with this fast paced, technology savvy world.  

Usually the processes that are automated are manual processes that make use of formalized testing processes. But besides these, there are many other software development processes and activities that are automated using test automation, and the best part is that they do not disappoint the testers.

In this article, we take up the discussion of the activities that are quite often subjected to automation. These activities are mostly concerned with test automation. Usually, test cases for automated software testing are written exclusively for it, but the test cases of most manual testing processes that deploy formalized processes are also subjected to test automation in order to save both time and effort. However, before automating any testing activity, it is made certain that the automated tests will be compatible with the system on which they will be run.
Today developers are forced to develop software systems and applications in quite a small time frame, which represents quite a big challenge. There is not only the need to test the software system or application rigorously, but also a need to do it as quickly as possible. 

What is the Automated Testing Life Cycle Methodology?


In order to make the development process systematic, a methodology has been introduced which is commonly known as the "automated testing life cycle methodology", or ATLM in short. 
The ATLM lays down 6 processes or activities in the process of test automation, and many of the sub-activities are automated.

1. Decision to automate test: This includes:
(a)  Overcoming false expectations of automated testing.
(b)  Benefits of automated testing.
(c)  Acquiring management support.

     2. Test tool acquisition: 
    This is the second phase of ATLM and involves activities like tool evaluation and selection process. Here the activity tool evaluation can be automated to some extent. While selecting the testing tool it is required that the tester should keep in mind the system’s engineering environment.
    
     3. Automated testing introduction phase: This phase involves the following steps:
   
    (a)  Test process analysis: This analysis ensures that all the test strategies and processes are in one place. The test strategies, goals and objectives are all defined in this phase and are documented. In this phase the testing techniques are also defined and test plans are assessed.
    
    (b)  Test tool consideration: This step involves the investigation of the incorporated automated test tools and their viewing in the context of the automated project testing requirements. Also the mapping of the potential test utilities and tools to the test requirements is done. The compatibility factor of the testing tools with the software system or application and environment is verified and solutions are further investigated.

     4. Test planning, design and development

     5. Execution and management of tests: By this phase, test design and test development have been addressed by the testing team. The test procedures are now ready to be automated. The resetting of the test environment after every test case execution is also automated in accordance with the guidelines. Now that the test plan is ready and the test environment is set up, the execution of the test cases is started. This whole process is automated in favor of exercising the software system or application under test (a minimal sketch of such an automated test case appears after this list).

     6. Test program review and assessment
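
As an illustration of the kind of automated test case executed and managed in phase 5, here is a minimal sketch using Python's unittest module. The function under test and the temporary-directory environment are placeholders invented for the example, not part of the ATLM itself.

# Sketch: a minimal automated test case with automated environment setup/reset.
import tempfile
import unittest

def save_record(directory, name, value):
    """Placeholder for the application code under test."""
    path = f"{directory}/{name}.txt"
    with open(path, "w") as f:
        f.write(value)
    return path

class SaveRecordTest(unittest.TestCase):
    def setUp(self):
        # The test environment is set up before every test case ...
        self.workdir = tempfile.TemporaryDirectory()

    def tearDown(self):
        # ... and reset (torn down) after every test case, as per the guidelines.
        self.workdir.cleanup()

    def test_record_is_written(self):
        path = save_record(self.workdir.name, "order-1", "42")
        with open(path) as f:
            self.assertEqual(f.read(), "42")

if __name__ == "__main__":
    unittest.main()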


Tuesday, May 8, 2012

Compare Test Driven Development and Traditional Testing?


For many years traditional testing was in use, until the rise of another software development strategy or process called test driven development. These two development processes are in great contrast to each other. 
This article is focused entirely upon the differences between the two types of development, i.e., traditional testing and test driven development. So let us begin by describing test driven development. 

About Test Driven Development


- The test driven development process comprises the repetition of very short development cycles.
- By short here we mean that the development cycles in test driven development are shorter than the usual normal cycles. 
- These development cycles comprise the following steps:
  1. Writing a test: Before any new functionality is added, a test for it is written first, and this test initially fails.
  2. Production of code: This step involves the creation of just enough code for the new test to pass, without incorporating any functionality beyond it.
  3. Execution of the automated tests and observing their success.
  4. Refactoring of code: This step involves cleaning up the code.
  5. Repetition: The whole cycle is repeated for the improvement of the functionalities. 
- For a long time, test driven development was considered to be related to the test-first programming concepts of extreme programming. 
- After that, it came to be recognized as an individual software development process.
- Test driven development has proven to be quite effective for developing and improving legacy code that was developed using older development techniques. 
- So many development styles for the test driven development have been identified like those mentioned below:
  1. Fake it till you make it
  2. Keep it simple stupid or KISS
  3. You ain’t gonna need it or YAGNI
- In test driven development the primary focus is on writing only the code which is necessary to pass the tests, in order to keep the design clean and clear of fuzz.
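
As a small illustration of one such cycle, here is a sketch in Python using unittest; the function, values and module layout are invented for the example, not a prescribed TDD implementation.

# Sketch of one test driven development cycle (names invented for the example).
import unittest

# Step 1: the test is written first. Run on its own, before add_tax exists,
# it fails - and that failing test drives the next step.
class AddTaxTest(unittest.TestCase):
    def test_adds_ten_percent_tax(self):
        self.assertEqual(add_tax(100.0), 110.0)

# Step 2: write only the code necessary to make the test pass.
def add_tax(amount):
    return amount * 1.10

# Steps 3-5: run the tests and see them pass, refactor if needed, and then
# repeat the cycle for the next small piece of functionality.
if __name__ == "__main__":
    unittest.main()
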

About Traditional Testing


Now coming to the traditional development methodologies or approaches: they are commonly called engineering approaches, and they were defined at the very beginning of the software sciences. 

- Traditional development methodologies were developed to control the software development process via a disciplined approach whose builds and stages were predictable.
- In the traditional software development methods, the stages of analysis and design precede the stage at which the software is built. 
- Unlike the test driven development process, these traditional development processes are well documented.
- Their main disadvantage is that they are quite difficult to apply because of their complexity.
- Another disadvantage is that traditional development methodologies are bureaucratic.
- In practice, these traditional development processes often cause a high level of complexity in the software system or application. 
- In traditional approaches there are two main stages namely:
1.      Stage of analysis and
2.      Stage of design
- The foundation of the whole project depends upon these two stages; therefore, it is necessary that adequate focus and effort is put into them. 
- For the project to be successful it is important that the traditional method is applied in the right way.
- Designing is considered to be a highly creative activity. 
- It becomes very difficult to plan and predict these complex methodologies as the level of their creativity increases. 
- The main feature of the traditional development processes is the detailed planning and design phase. 
- Traditional development holds good when it comes to undertaking very large projects involving higher risk. 
- One more commonly observed thing is that projects developed with traditional methodologies last longer. 

