The title of this post represents one of the biggest problems I have faced in my career as a Project / Program Manager. Figuring out when a feature is complete is one of the most contentious items in the software development process (this applies more to the Waterfall methodology than to Scrum, where a definition of "done" is built into the process). Why is this critical? Defining when a feature can be closed has many dependencies on processes down the line:
- When a feature is marked closed, it can be handed over to Beta customers or other customers for evaluation
- Feature closure means that all the members of the team can move on to another feature
- Feature closure is also the trigger for the teams that localize the feature into other languages and the teams that write its documentation. It is costly for them to incorporate defect fixes into their work, so in many cases these teams only take up a feature once it has been marked closed
- Feature closure also means that the Product Manager can demo the feature to the press and other reviewers
Feature closure also means that the final effort spent on the feature can be calculated and reviewed against the estimate, giving the team data it can analyze to find out how accurate its estimates are.
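The estimate-versus-actual review mentioned above can be sketched as a small calculation. This is a minimal illustration, not a prescribed process; the feature names and hour figures are hypothetical.

```python
# Hypothetical sketch: comparing actual effort against the original
# estimate once a feature is closed. Feature names and hour values
# are illustrative only.

def estimate_variance(estimated_hours: float, actual_hours: float) -> float:
    """Return the variance as a percentage of the estimate.

    A positive result means the feature took longer than estimated.
    """
    return (actual_hours - estimated_hours) / estimated_hours * 100.0

# (estimated hours, actual hours) recorded at feature closure
features = {
    "export-to-pdf": (40.0, 52.0),
    "dark-mode": (24.0, 21.0),
}

for name, (est, act) in features.items():
    print(f"{name}: {estimate_variance(est, act):+.1f}% vs. estimate")
```

Run over a release's worth of closed features, a summary like this gives the team concrete data on whether it systematically under- or over-estimates.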
But the problem lies in figuring out the actual feature closure process and what the benchmarks are. One of the biggest difficulties is that testing is never complete: if you take a feature and keep testing, you will keep finding issues (even if, after a certain amount of testing, you only find minor ones). So the team also needs to define a level of testing and defect finding after which it will stop any further testing of the feature. This is one of the most discussed and debated parts of the process. There is only so much testing you can do within your schedule; beyond that, you start eating into the time reserved for other features, so a compromise must be reached.
The definition of when to stop testing a feature depends primarily on two items: first, the testing team will have defined a certain number of test cases to be run; second, they review the number and type of defects they are finding to decide whether more testing is needed. It is the latter where disputes can arise, since the development team may believe that testing is over and the defects found are trivial, while the testing team may want to do some additional testing before marking the feature as closed.
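The two-part stopping rule described above can be expressed as a simple check. This is a hedged sketch under assumed inputs: the function, its parameters, and the defect threshold are all hypothetical choices for illustration, not a standard formula.

```python
# Hypothetical sketch of the two-part stopping rule: (1) all planned
# test cases have been run, and (2) the defect discovery rate over the
# last few test cycles has fallen below an agreed threshold.
# The threshold value is an assumption chosen for illustration.

def can_close_feature(cases_run: int, cases_planned: int,
                      recent_defect_counts: list[int],
                      max_defects_per_cycle: int = 1) -> bool:
    """Decide whether testing can stop and the feature can be closed.

    recent_defect_counts holds the defects found in each of the
    last few test cycles, oldest first.
    """
    all_cases_run = cases_run >= cases_planned
    rate_is_low = all(n <= max_defects_per_cycle
                      for n in recent_defect_counts)
    return all_cases_run and rate_is_low

# Last three cycles found 5, 2, then 0 defects: the rate has not yet
# settled below the threshold, so the feature stays open.
print(can_close_feature(120, 120, [5, 2, 0]))  # False
print(can_close_feature(120, 120, [1, 0, 0]))  # True
```

In practice the threshold itself is exactly the point the development and testing teams negotiate; encoding it explicitly at least makes the disagreement concrete rather than open-ended.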
And of course there is the other problem of marking a feature as closed and then having to reopen it because defects were found. In some cases these defects are reported by people outside the engineering team, such as beta testers (which is quite possible, since beta testers have a wider range of equipment on which to test the product than the engineering team does). In such a case, the team needs to decide how much additional testing is required before the feature can be closed again.