This series of posts has focused on risk planning from the defect management side. A lot can be done to improve defect management in the current cycle by looking at defect trends from previous cycles (in most cases there is plenty to learn from earlier cycles; it is only in totally new projects that there is nothing to draw on). In the previous post (Looking at bug curves and identify patterns - Part 2), I looked more at ensuring that the defect database is set up properly to capture the information needed for generating this data. Unless this sort of data can be generated for the current cycle, it cannot be used in the next cycles, so the required infrastructure needs to be in place.
In this post, I will assume that the data from previous cycles is available to interpret, and hopefully to generate some actionable items. In the first post in this series, I had already talked about some examples, such as identifying a phase of the project where more defects were rejected in the past and then trying to change that. There are clear benefits to this kind of work, and the project manager should be able to get some clear improvements in the project.
The way to use this kind of defect data from previous cycles is to analyse it systematically, and then identify clear patterns from that analysis that point to improvements. One clear way to start is to take the entire defect curve from the previous cycle (charting the number of open bugs against the time axis) and do the same for other bug stats as well: the rate of defect closure, the number of defects that have been closed, the number of defects that were closed in a manner other than fixing, and so on.
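As a rough sketch of what generating such a curve involves, suppose the defect database can export one (opened, closed) date pair per defect, with closed left empty for defects still open. The record layout here is an assumption for illustration, not a specific tool's export format:

```python
from datetime import date, timedelta

def open_defect_curve(defects, start, end):
    """For each day from start to end (inclusive), count how many
    defects were open on that day. Each defect is an (opened, closed)
    date pair; closed is None if the defect is still open."""
    curve = []
    day = start
    while day <= end:
        open_count = sum(
            1 for opened, closed in defects
            if opened <= day and (closed is None or closed > day)
        )
        curve.append((day, open_count))
        day += timedelta(days=1)
    return curve

# Hypothetical defect records: (opened, closed)
defects = [
    (date(2024, 1, 1), date(2024, 1, 5)),
    (date(2024, 1, 2), None),               # still open
    (date(2024, 1, 3), date(2024, 1, 4)),
]
curve = open_defect_curve(defects, date(2024, 1, 1), date(2024, 1, 6))
```

The same loop structure works for the other stats mentioned above: counting closures per day gives the closure rate, and filtering on a resolution field (where the database records one) separates fixes from other closures.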
Based on these various defect stats, a lot of information can be generated for analysis. For example, we used to find that the number of open defects would shoot up to its highest figure soon after the development team had completed work on all the features. This was because a number of the features would come to the testing team only near the final deadline for finishing development, and this was when the testing team was able to put in its full effort. They would find a large number of defects in this period, and as a result the peak in open defects would occur around then. This was a critical point in the project, since the number of defects with each developer would reach its maximum and cause huge pressure.
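Once the daily open-defect counts exist, spotting where this peak falls is a one-liner. A minimal sketch, assuming the curve is a list of (day, open count) pairs like the hypothetical data below:

```python
# Hypothetical daily open-defect counts: (day index, open count)
curve = [(1, 4), (2, 7), (3, 12), (4, 9), (5, 5)]

# The peak is simply the point with the largest open count.
peak_day, peak_count = max(curve, key=lambda point: point[1])
```

Comparing where this peak lands against the development deadline in each past cycle is what surfaces the pattern described above.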
Based on the identification of such a pattern, we put in a lot of effort to see how we could avoid so many features coming in near the end. We had always talked about improving our development cycle in this regard, but the defect analysis made it much more pressing to actually make these changes. We prioritized the features differently, so that the features where we expected more integration issues were planned to be complete much earlier in the cycle. None of this was easy, but the analysis we had done showed how changing nothing would lead to the same situation: a large number of open defects, along with increased pressure on the development and testing teams. Such pressure also led to mistakes, which caused more pressure, and so on.
Read more about this effort in the next post (Look at bug curves for previous projects and identify patterns - Part 4).