


Sunday, May 12, 2013

Project Management - Ensuring defect criteria are met before features can be closed

The title above describes one of the biggest problems that I have faced in my life as a Project / Program Manager. Figuring out when a feature is complete is one of the most contentious items in the software development process (and this applies more to the Waterfall methodology than to Scrum, where a definition of done is built into the process). Why is this critical? Well, defining when a feature can be closed has many dependencies on processes down the line:
- When a feature is marked closed, it can be handed over to beta customers, or to customers for evaluation
- Feature closure means that all the members of the team can move on to another feature
- Feature closure is also the trigger for the teams that localize the feature into other languages and those who write documentation for it. It is costly for these teams to incorporate defect fixes into their work, so in many cases they only take up a feature once it has been marked closed
- Feature closure also means that the Product Manager can demo the feature in front of the press and other reviewers
Feature closure also means that the final effort spent on the feature can be calculated and compared against the estimate, giving the team data it can analyze to determine how accurate its estimates are.
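As a rough illustration (a sketch with invented feature names and numbers, not data from any real project), the estimate-versus-actual comparison at closure might look like this:

```python
# Hypothetical sketch: reviewing actual effort against estimates once
# features are closed. All names and figures below are made up.

features = [
    # (feature name, estimated person-days, actual person-days)
    ("import-wizard", 10, 14),
    ("pdf-export", 8, 7),
    ("batch-rename", 5, 9),
]

for name, estimated, actual in features:
    # Variance as a percentage of the estimate; positive means overrun
    variance_pct = (actual - estimated) / estimated * 100
    print(f"{name}: estimated {estimated}d, actual {actual}d, "
          f"variance {variance_pct:+.1f}%")
```

Reviewing a table like this across a few cycles shows whether the team consistently underestimates certain kinds of features.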
But the problem lies in figuring out the actual feature closure process, in terms of what the benchmarks are. One of the biggest problems is that testing is never complete, in the sense that if you take a feature and continue testing, you will keep finding issues (even if you only find minor issues after a certain amount of testing). So the team also needs to define a certain amount of testing and defect finding, after which it will stop any further testing of the feature. This is one of the most discussed and debated parts of the process. There is only a certain amount of testing that you can do as per your schedule; beyond that, you start eating into the time reserved for other features, so a compromise has to be reached.
This definition of when to stop testing a feature depends primarily on two items: first, the testing team would have defined a certain number of test cases to be run; second, they would review the number and type of defects being found and decide whether more testing is needed. It is the latter case where disputes can arise, since the development team may believe that testing is over and the defects found are trivial, while the testing team may want to do some additional testing before marking the feature as closed.
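The two criteria above can be sketched as a simple check. This is only an illustrative sketch; the threshold values (how many recent defects count as "trivial stragglers") are assumptions a team would set for itself, not a standard:

```python
# Hypothetical "stop testing" check combining the two criteria discussed
# above: all planned test cases executed, and a low recent defect trend.
# The thresholds here are illustrative assumptions.

def should_stop_testing(cases_run, cases_planned,
                        defects_last_week, critical_defects_open):
    """Return True when the feature can be proposed for closure."""
    all_cases_run = cases_run >= cases_planned
    defect_rate_low = defects_last_week <= 2   # only minor stragglers remain
    no_blockers = critical_defects_open == 0
    return all_cases_run and defect_rate_low and no_blockers

print(should_stop_testing(120, 120, 1, 0))  # True: all criteria met
print(should_stop_testing(120, 120, 5, 0))  # False: defects still arriving
```

Making the criteria explicit like this gives development and testing a shared, concrete basis for the closure debate instead of a judgment call.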
And of course there is the other problem of marking a feature as closed and then having to open it up again because defects were found. In some cases these defects are reported by people outside the engineering team, such as beta testers (which is quite possible, since beta testers typically have a wider range of equipment on which to test the product than the engineering team does). In such a case, the team needs to figure out the level of additional testing required before the feature can be closed again.


Sunday, January 17, 2010

In a product development cycle, how to get engineering and product management on the same page

As mentioned in the previous post, one of the biggest problems in a product development cycle is ensuring that the engineering teams and product management are in agreement about the resource commitment. Product Management talks about getting new features in place, while engineering has to contend with dedicating resources to infrastructural and legacy tasks.
Here are some of the tasks that an engineering team has to do that are not related to new feature work:
1. Test features from earlier versions and fix any issues found in them
2. Incorporate new components - when you are building a product, many features can be implemented using common components (such as disc burning engines, or installer technologies such as MSI, InstallShield, or Visual Studio setup projects); over a period of time, such components need to be upgraded, and each upgrade requires additional testing
3. Spend dedicated time to improve quality. Companies with large products, such as Microsoft, Apple, and Adobe, defer a number of bugs in their products. Some of these bugs are deferred because they are not critical and there is a need to release the product. However, over a period of time, the cumulative impact of these deferred bugs can be high, and the team may need to spend some dedicated time fixing them.

So what needs to be done in such cases, given that the engineering teams and product managers need to reach an agreement?
First, before starting a new cycle, the team needs to spend around a week (with multiple meetings) working through what is planned for the next cycle, including a discussion of what the focus of the release should be. A good way to sort through the legacy features is to emphasize that they are non-negotiable and need to be tested, since they impact customers directly. In a number of cases, without this discussion, the Product Management team has not thought through the need to test legacy features, and once the discussion happens, the issue gets cleared up.
Secondly, when the discussions about the features for the cycle start, the team will need to work through setting aside dedicated time for infrastructural items, and the engineering team will need to push a bit hard on this issue; in most reasonable cases, there will be an agreement between the engineering and product management teams, as long as the time needed for this infrastructural work is not too long.
In all such cases, focusing on a mix of the needed technical architecture work and new features should help to resolve such issues.


Tuesday, June 3, 2008

About usability testing and timing

Suppose you are in a tight development cycle. You have to deliver either a new product or the next version of an existing product. Getting the features of a product right is always a tough task, given that there are a number of competing features that seem important, so prioritizing the features is very important. This decides the priorities that the engineering team (the feature development team) will follow during the development cycle.
How is this priority actually decided? If the company is in the business of defining an absolutely new product that has not been conceptualized yet, then getting feedback from prospective customers is difficult; however, if there are already customers using an existing product (from the same company or a rival company), then it is absolutely essential that these users be polled about the features, so that there is a good idea of which features are most critical (this would also help identify features that customers would be willing to pay a premium for).
Now consider that we are in the development phase of the project lifecycle, where the UI team works along with the engineering team to define the workflow for the feature. There is a lot of discussion around what the feature should be like (with the possibility of the discussion getting heated, as a regular part of feature discussion), and eventually most people agree on what the feature should be like. The UI specs of the feature are drawn up, and the feature implementation is based on the spec. At this point, everything may seem settled, but it is critically important that this final implementation be evaluated for usability issues. The team needs to find a set of people who adequately represent the final set of users, and get them to see the feature working in the actual product. Such usability testing will help determine whether the final feature is actually something that users can accept, or whether there are problems that need to be addressed.
The timing of such user testing is critical. Typically, such workflows reach a final form close to the end of the cycle, and this is the form in which users can actually exercise the workflows. On the other hand, this is also very late in the cycle, and the team will be hesitant to accept significant changes, since the amount of time required to make them may not be easily available.
What is the solution? The one that seems to work is much more active involvement with users: showing them mockups as the workflow gets more concrete, holding active question-and-answer sessions about what they may be looking for, and continuing until they can review the actual product implementation. Further, if a workflow is very new and contentious, it makes sense to try to complete it earlier in the cycle. And finally, time needs to be built into the schedule to accommodate such changes.

