


Friday, May 10, 2013

Risk Planning - Look at bug curves for previous projects and identify patterns - Part 4

This is part of a series of posts I have been writing about the process of risk planning for a project. For a software development project, one of the biggest risks relates to the defects that turn up during the development cycle. The risk could be that the number of defects is higher than expected, that the defects are of a much higher severity than expected, or that some of the fixes are getting rejected because they are inadequate or cause new problems. For software that has been through several versions, there is typically a lot of data available from the previous versions, and this information is very useful for making predictions about the defect cycle in the current version. At the same time, my experience has taught me that not everybody even considers looking at previous cycles to figure out the problematic areas that may turn up and how to handle them. Being able to predict such problem points and then figuring out how to resolve them is an integral part of risk planning.
In the previous post (Looking at bug curves for previous projects and identifying patterns - Part 3), I talked about how the bug graph analysis from previous projects showed a large number of open defects soon after the point when the development team had handed over all the features to the testing team. This was essentially because a number of the big-ticket features came in near this date, and this was the first time the testing team was able to test them. This overload of defects put a lot of pressure on the team and was unhealthy for the project. We had to resolve it, but we would never have even recognized the problem if we had not mined the data from previous projects.
In this post, I will take another example of what we did by mining the defect data from previous cycles and then analysing it. There can be many such cases; I will take up some of the cases that were important to us in these posts, but you can extrapolate to as many cases as you like, gather the data, and do the analysis.
One of the goals we had for our team was to figure out how to reduce the time period for which a defect sat open with a developer - or more accurately, the time before a developer first looked at a defect. One of the biggest problems was that some defects were not looked at by a developer for many weeks because of the load the developers were under, and by the time they got around to many of these defects, the tester no longer accurately remembered the conditions, or the application had changed many times since then because of all the changes the developers had checked in, and such defects were no longer easy to resolve. In short, the goal was to reduce defect ageing from the many weeks it currently stood at, and everybody agreed that we had to do something. However, before we did anything, we needed to know the extent of the problem, especially at different parts of the development cycle.
For this purpose, we again needed to query the defect database for the data from previous versions, set up the query, and pull the data. To some extent, we also compared the data across multiple previous versions to see whether there were patterns, and we did find some. The analysis of how to reduce the ageing, and the changes this required in our defect management, took effort, but there were rewards for it. The most important point remained that we could only do this analysis because we had the data for previous cycles, and this was recognized as an important part of risk management.
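To make the kind of query concrete, here is a minimal sketch of the defect-ageing analysis, assuming the defect tracker can export one row per defect with the dates it was filed and first picked up by a developer. The file and column names (defects.csv, created_on, first_dev_action_on, version) are illustrative, not any particular tracker's schema.

```python
import pandas as pd

# Hypothetical export: one row per defect with filing date and the date a
# developer first acted on it, plus the product version it was filed against.
defects = pd.read_csv("defects.csv", parse_dates=["created_on", "first_dev_action_on"])

# Defect ageing: days between a defect being filed and a developer first looking at it.
defects["ageing_days"] = (defects["first_dev_action_on"] - defects["created_on"]).dt.days

# Bucket by month of the cycle so ageing can be compared at different project stages.
defects["cycle_month"] = defects["created_on"].dt.to_period("M")
summary = (defects.groupby(["version", "cycle_month"])["ageing_days"]
                  .agg(["count", "median", "mean", "max"]))
print(summary)
```

Comparing the same summary across a couple of previous versions is what makes repeating patterns (for example, ageing spiking right after feature handover) visible.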

Read more about this in the next post (Risk Planning - Look at bug curves for previous projects and identify patterns - Part 5)


Thursday, May 9, 2013

Risk planning - Look at bug curves for previous projects and identify patterns - Part 3

This has been a series of posts on doing risk planning by focusing on the defect management side. There is a lot that can be done to improve defect management in the current cycle by looking at the defect trends from previous cycles (in most cases, there is a lot that can be learnt from previous cycles; it is only on totally new projects that nothing can be learnt). In the previous post (Look at bug curves for previous projects and identify patterns - Part 2), I looked at ensuring that the defect database is properly set up to capture the information needed for generating this data. Unless this sort of data is captured during the current cycle, it cannot be used in the next cycles, so there is a need to make sure the required infrastructure is in place.
In this post, I will assume that the data from previous cycles is available to interpret and, hopefully, to generate some actionable items from. In the first post in this series, I had already talked about some examples, such as identifying a phase in the project where more defects had been rejected in the past and then trying to change that. There are clear benefits to this kind of work, and the project manager should be able to get some clear improvements in the project.
The way to use this kind of defect data from previous cycles is to be systematic about analysing it, and then to identify clear patterns from the analysis that point to improvements. One clear way to start is to take the entire defect curve from the previous cycle (charting the number of open bugs against the time axis), and to do the same for other bug statistics as well (for example, the rate of defect closure, the number of defects that have been closed, the number of defects that were closed in some manner other than fixing, and so on).
Based on these various defect statistics, a lot of information can be generated for analysis. For example, we used to find that the number of open defects would shoot up to its highest figure soon after the development team had completed their work on all the features. This was basically because a number of the features would come to the testing team near the final deadline for finishing development, and this was when the testing team was able to put in their full effort. They would find a large number of defects in this period, and as a result the peak in open defects would occur around this point. This was a critical time in the project, since the number of defects with each developer would reach its maximum, causing huge pressure.
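As an illustration of the charting described above, here is a rough sketch that reconstructs the open-defect curve from a previous cycle and marks its peak. It assumes a hypothetical export with opened_on and closed_on dates per defect (closed_on empty for defects still open at the end of the cycle); the file name is illustrative.

```python
import matplotlib.pyplot as plt
import pandas as pd

defects = pd.read_csv("previous_cycle_defects.csv", parse_dates=["opened_on", "closed_on"])

# For each day of the cycle, count how many defects were open on that day.
days = pd.date_range(defects["opened_on"].min(), defects["opened_on"].max(), freq="D")
open_counts = [
    int(((defects["opened_on"] <= day) &
         (defects["closed_on"].isna() | (defects["closed_on"] > day))).sum())
    for day in days
]

# Plot the curve and mark the day the open-defect count peaked.
plt.plot(days, open_counts)
peak_day = days[open_counts.index(max(open_counts))]
plt.axvline(peak_day, linestyle="--", label=f"peak: {max(open_counts)} open defects")
plt.title("Open defects over the previous cycle")
plt.legend()
plt.show()
```

Overlaying the curves from two or three previous versions on the same axes is a quick way to see whether the peak lands at the same point in the cycle every time.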
Based on the identification of such a pattern, we put in a lot of effort to see how we could avoid having a number of features come in near the end. We had always talked about improving our development cycle in this regard, but the analysis of the defects made it that much more important to actually make the changes. We prioritized the features differently, so that the features where we expected more integration issues were planned to be completed much earlier in the cycle. None of this was easy, but the analysis we had done showed how not changing anything would lead to the same situation of a large number of open defects, along with the increased pressure on the development and testing teams. Such pressure also led to mistakes, which caused more pressure, and so on.


Read more about this effort in the next post (Look at bug curves for previous projects and identify patterns - Part 4)


Wednesday, May 8, 2013

Risk planning - Look at bug curves for previous projects and identify patterns - Part 2

In the previous post (Look at bug curves for previous projects and identify patterns - Part 1), I talked about how the defect curves for previous versions of the software product should be reviewed for patterns that help predict defect patterns in the current release. As an example, if there was a phase in previous cycles where pressure caused a greater number of defects to be either partially or completely rejected, then this was worrisome. Such a problem area is something to focus on so that these kinds of issues are reduced or removed altogether, and the benefits can be considerable.
In this post, I will continue on this subject. One of the biggest problems teams face is that they are so caught up in what is happening in the current release that they use previous versions only for help with estimation, and do not really use defect management and analysis to the level they should. One of the critical items in project management and risk planning is to understand that one looks not only at the current version of the software development cycle, but also at future versions. There is a lot of learning to be had from the current release, and one should ensure that this learning can be put to use in the next release.
The defect database should be set up in such a manner that while it provides the functionality to do your defect management for the current release, it also retains the information you will need later. If, for example, your defect database cannot record that defects have been rejected, or cannot capture the fact that a defect did multiple rounds back and forth between the developer and the tester, you are losing data that is pretty important. We actually ran into such a situation, where we wanted to determine which defects were going back and forth between developers and testers, or even between multiple people on the team. This was a way to figure out which defects were taking more time, and it seemed like a good place to start. The idea was that if we could figure out these defects from the previous cycle, we would also get a picture of which sets of people do not work well together (too much back and forth between people over a defect is certainly not useful; you would expect people to collaborate and resolve issues rather than carrying out a discussion inside a defect).
However, things did not work as well as we expected. The defect management system did not have such a query, or anything even similar to it. It was possible to get this information once we had access to the tables in the database, but that is not something that is easy or quick to do. We needed to get hold of people who had some expertise in how the database was structured, and also people who knew how to query the database and get us the report we wanted. It took a fair amount of time, but in the end we got what we wanted. The results were interesting. They showed that there was a person on the team who wanted every bit of information to be in the database, even details that could clearly have been asked in person and added no value to the defect record. Each such question meant the defect was passed back to the other person, and progress then depended on that person's time and defect load. However, since the first person did not have the defect assigned to himself, the normal statistics would not show any problem.
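For readers whose tracker can export the assignment history, here is a rough sketch of the kind of "back and forth" report we were after. It assumes a hypothetical export with one row per reassignment (defect_id, changed_on, from_user, to_user); the names and the threshold of six handoffs are illustrative and would need tuning to your own data.

```python
import pandas as pd

# Hypothetical export: one row per assignment change in the defect history.
history = pd.read_csv("defect_history.csv", parse_dates=["changed_on"])

# Count how many times each defect changed hands; a high count flags ping-pong.
handoffs = history.groupby("defect_id").size().rename("handoffs")
suspects = handoffs[handoffs >= 6].sort_values(ascending=False)
print(suspects.head(20))

# For the flagged defects, see which pairs of people the defect bounced between.
flagged = history[history["defect_id"].isin(suspects.index)].copy()
flagged["pair"] = flagged.apply(
    lambda row: tuple(sorted([row["from_user"], row["to_user"]])), axis=1)
pair_counts = flagged.groupby("pair").size().sort_values(ascending=False)
print(pair_counts.head(10))
```

The point of the second grouping is exactly the pattern described above: a pair that shows up repeatedly is a hint that questions are being asked through the defect rather than in person.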
We took some action on this, in the form of some non-threatening counseling for the person from the manager, and it resulted in an improvement in defect handling. It was difficult to quantify the time saved, but we were satisfied with the results we could see, and felt that the improvements we made in defect management would help reduce the load due to defects.

Read more about this effort in the next post (Look at bug curves for previous projects and identify patterns - Part 3)


Tuesday, May 7, 2013

Risk planning - Look at bug curves for previous projects and identify patterns - Part 1

As a part of risk planning for projects, defect management is one of the key areas to handle, and handle well. A large number of defects may not be a problem if they are expected, but if unexpected, they can cause huge problems for the schedule of a development cycle. As an example, if the number of defects is much higher than expected, several problems occur:
- The amount of time that is required to be devoted to resolving these defects and testing the changes will cause a strain on the schedule.
- It is not just the effort; a sudden high number of unanticipated defects will cause the team to wonder about the quality of the work done.
- Most important, it will cause a lot of uncertainty about whether all the defects have been found, or whether there are many more still to be found.
- When more defects are found, the overall cycle of analyzing the defect, figuring out the problem, doing an impact analysis of the change, making the change, getting somebody to review the change, and then testing the change will cause a lot of strain.
- Some of the fixes can be big and will require more effort to validate; in some of these cases, the team management will decide that they do not want to take the risk and would rather let the defect pass on to customers.
The above is just an example of what happens when a large number of defects suddenly crop up. However, it would be criminal on the part of the team management and the project manager if they had not already done a study of the kinds of defects that come up at different stages of the development cycle. The best way to do this, and to do some kind of forecasting, is to look at the bug curves of previous cycles (which obviously cannot be done if the defect data from previous cycles was not captured at the time and stored for later use).
We had been doing this for the past 2-3 years or so, and it helped us determine which points of the development cycle had the highest number of defects found, as well as closed, and even which were the times with the highest chance of defects not being fixed (either being partially fixed or being rejected outright by the tester). Even though every cycle is different, there were some patterns that had a high probability of being repeated (and this was true even across different projects, although the details varied from project to project).
Let us take another example. There was a time in the project when defects had a higher chance of being rejected, and a rejected defect can be very expensive in terms of the time of both the developer and the tester. As a result, during the stages where this had happened in previous projects, we ensured there was a higher focus on impact analysis and code review, and even borrowed some senior developers from another project for around a month just for the additional focus on reviews. This paid off: the number of defects getting rejected went down considerably, which per single defect did not amount to much, but our analysis showed that the total saving from this additional effort was around 25%.
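As a sketch of how such a rejection-prone phase can be spotted in the data, the following assumes a hypothetical export with a resolved_on date and a resolution column per defect (values such as "Fixed" and "Rejected" are illustrative, not a real tracker's vocabulary). It computes the weekly rejection rate so that the worst weeks of the previous cycle stand out.

```python
import pandas as pd

defects = pd.read_csv("previous_cycle_defects.csv", parse_dates=["resolved_on"])

# Bucket resolutions by week of the cycle and compute the share that were rejected.
defects["week"] = defects["resolved_on"].dt.to_period("W")
weekly = defects.groupby("week").agg(
    resolved=("resolution", "size"),
    rejected=("resolution", lambda s: (s == "Rejected").sum()),
)
weekly["rejection_rate"] = weekly["rejected"] / weekly["resolved"]

# The weeks at the top are the candidates for extra impact analysis and review.
print(weekly.sort_values("rejection_rate", ascending=False).head())
```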

Read more about this effort in the next post (Look at bug curves for previous projects and identify patterns - Part 2)

