Tuesday, July 23, 2013

Defect Management: Dividing overall defect reports into separate functional areas - Part 1

Handling defects is one of the major efforts that determines whether a project stays on schedule and succeeds. I have known multiple teams that did not have a good running estimate of their defect count, or of how those defects would be worked down in the time left in the schedule. As a result, when these teams got closer to the final stages of the schedule, they found that they had too many open defects and that the remaining part of the schedule had become very tight. An honest reckoning of their status then left them with two unattractive choices: defer more defects and possibly ship a product of lower quality, or extend the timeline / schedule, which has huge implications for the team and for the many other teams involved in the release schedule of the product.
How do you avoid this? The first paragraph of this post points out a huge problem, and the answer cannot be covered in a single post; it can be summed up in a single cheesy phrase that offers no solution by itself - "You need to do Defect Management". So let us get down to the meat of this post, which takes up one specific aspect of defect management: sending out a split of the defect counts across the different functional areas. This gives the team a better view of the defect picture and helps the overall process of defect management.
We wanted a system whereby we could track the counts for each of the separate functional areas while giving the management team access to this data on an ongoing basis. This also helped the individual functional teams target the defect counts in their respective areas and work towards reducing them. So we took the overall open-defect data for the product and split it into the following buckets (a small sketch of such a split follows the list):
Open defects:
ToFix (these are primarily defects owned by the development team, although some may be carried by other teams - for example, defects in components supplied by external teams)
ToTest (these are primarily defects owned by the testing team, although since anybody on the team can file a defect, the owner waiting to verify a fix may be someone outside the testing team)
ToDefer (the exact terminology differs across organizations, but these are typically defects sitting with a defect review committee for evaluation - either significant defects that need the committee's evaluation before they are fixed, or defects that may not be worth fixing but where the team wants the committee to take the final call, and so on).
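For teams that can export their defect data, a minimal sketch of this kind of split in Python might look like the following. The "status" field and its values here are hypothetical placeholders; they would need to be mapped to whatever states your own defect tracker actually uses.

from collections import Counter

def split_open_defects(defects):
    # Bucket open defects into the ToFix / ToTest / ToDefer counts described above.
    buckets = Counter({"ToFix": 0, "ToTest": 0, "ToDefer": 0})
    for defect in defects:
        status = defect["status"]
        if status == "Open":             # awaiting a fix, normally with development
            buckets["ToFix"] += 1
        elif status == "Fixed":          # fixed, awaiting verification by the filer
            buckets["ToTest"] += 1
        elif status == "UnderReview":    # parked with the defect review committee
            buckets["ToDefer"] += 1
    return dict(buckets)

if __name__ == "__main__":
    sample = [
        {"id": 101, "status": "Open"},
        {"id": 102, "status": "Fixed"},
        {"id": 103, "status": "UnderReview"},
        {"id": 104, "status": "Open"},
    ]
    print(split_open_defects(sample))  # {'ToFix': 2, 'ToTest': 1, 'ToDefer': 1}

Running this weekly (or even daily) gives you the per-bucket counts that feed the regular status mails and graphs discussed next.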
What is the purpose of sending out these separate stats on a regular basis? This data, when plotted on a graph over a period of time, provides a large amount of information. The team and the managers, even though they are in the thick of things, sometimes need to see this kind of aggregate information to make a good decision. For example, if the team is in the second half of the cycle and close to the deadline, and yet the ToFix graph does not show a declining trend, that is something to worry about. When that happens, I have seen the development team manager sit down with the team to work out what is going on and how to bring these counts down. In extreme cases, I have seen the team take a hard look at these defect counts and then recommend extending the schedule (which is not a simple step to take).
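As an illustration of the kind of trend graph described above, weekly snapshots of these counts could be plotted along the following lines. This sketch assumes matplotlib is available, and the week labels and counts are entirely made-up sample data.

import matplotlib.pyplot as plt

# Made-up weekly snapshots of the three buckets; a ToFix line that stays flat
# this late in the cycle is the warning sign discussed above.
weeks = ["W1", "W2", "W3", "W4", "W5", "W6"]
history = {
    "ToFix":   [120, 112, 115, 118, 117, 119],
    "ToTest":  [40, 45, 50, 48, 52, 55],
    "ToDefer": [10, 12, 15, 14, 16, 18],
}

for label, counts in history.items():
    plt.plot(weeks, counts, marker="o", label=label)

plt.xlabel("Week of the cycle")
plt.ylabel("Open defect count")
plt.title("Open defects by bucket over time")
plt.legend()
plt.show()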

