
Monday, February 23, 2015

Tracking platform usage and making decisions based on it

Even in today's data-driven world, an analytics expert can't expect to be universally popular, or to be welcomed with open arms. There are many aspects of a product development cycle that would benefit from integration with a data analytics cycle - generating the questions, collecting the data, and the extremely important task of generating the output information and conclusions (a wrong conclusion at this point can have negative implications for the product, such as putting effort into wrong or unimportant areas). However, consider the case where there is support for using analytics, and resources are dedicated to ensuring that the requisite changes to the product can be made based on the analytics output.
One major area where the team needs to gather information is platform support. Platform could mean the operating system (MacOS, Windows, others), its versions (Windows XP, Windows 7/8, etc), as well as browser support (versions of Chrome, Internet Explorer, Firefox, Opera, etc). These can make a lot of difference in the amount of effort required for product development. For example, different versions of Windows ship different versions of system files, and it is a pain to support all of them. There can be small changes to the functionality of the product depending on these files, and in many cases the product installer would actually detect the Windows version and install the required versions of these system files. If you could find out how many people are using each of these versions, and discover that one of them is used by only a small number of consumers, then the decision could be taken to drop support for that particular version.
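The "drop a version if usage is low" decision above can be sketched as a small aggregation over the collected platform strings. This is a minimal illustration, not a real analytics pipeline; the platform names, the sample counts, and the 5% threshold are all assumptions for the example.

```python
from collections import Counter

# Assumed cutoff: versions under 5% of installs become drop candidates.
DROP_THRESHOLD = 0.05

def drop_candidates(platform_records, threshold=DROP_THRESHOLD):
    """Return (version, share) pairs whose usage share falls below threshold."""
    counts = Counter(platform_records)
    total = sum(counts.values())
    return sorted(
        (version, count / total)
        for version, count in counts.items()
        if count / total < threshold
    )

# Illustrative data: one record per reporting install.
records = (["Windows 7"] * 60 + ["Windows 8"] * 25 +
           ["Windows XP"] * 3 + ["MacOS"] * 12)
print(drop_candidates(records))  # -> [('Windows XP', 0.03)]
```

The threshold itself is exactly the kind of benchmark the team would need to agree on in advance, since it directly drives a product decision.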
So this is one of the areas where analytics can work. The simplest way is to make it part of the functionality of the product: the product, on the user's computer, dials home and provides information such as the platform on which it has been installed. Once this kind of data has been collected and a proper analysis done, the product team can look at this information and factor it into the feature list for the next version of the product. The team will need to decide on the time period over which the data will be captured, as well as the benchmarks that determine whether the analytics output should drive a decision (for example, whether the data differs widely from the public perception of platform usage).
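A "dial home" payload of the kind described could look something like the sketch below. The field names and the product version string are assumptions; a real product would also need user consent, an actual analytics endpoint, and an HTTP POST (e.g. via urllib.request), none of which are shown here.

```python
import json
import platform

def build_usage_payload(product_version="1.0.0"):
    """Collect the platform facts the product would report home with.

    product_version is a placeholder; a real product would embed its own.
    """
    return {
        "product_version": product_version,
        "os": platform.system(),          # e.g. "Windows", "Darwin", "Linux"
        "os_release": platform.release(),  # e.g. "XP", "7", "10"
        "arch": platform.machine(),        # e.g. "x86_64", "AMD64"
    }

# Serialize for transmission; here we only show the payload shape.
print(json.dumps(build_usage_payload()))
```

Keeping the payload small and well-defined like this also makes the later aggregation and analysis step much easier.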
However, all this depends on how much the team can trust the data and the analysis; after all, even a small error during analysis can produce information with significant inaccuracies. But it is worth the team investing in the analytics effort, since the payoff from accurate data analysis and interpretation is very high.

Wednesday, February 11, 2015

Trying to get non-responsive members of the team to be more schedule-sensitive

We know this problem; it happens all the time. You have different members of the team, some more disciplined and some less so. Actually, discipline is the wrong word. When you have creative members on the team, or team members who are attached to multiple projects, there can be problems with the scheduling of their deliverables. Team members such as User Interface Designers, Visual Experts, or Visual Designers typically do not move to the same beat as the rest of the project team, such as Engineers or Testing Engineers.
This can be problematic for the rest of the team, since schedules are interlinked. For example, the User Interface Designer prepares the design specifications that the team members use to discuss and finalize the feature workflow and the technical architecture. These are then developed and passed on to the testing team, which does the testing and then releases the feature. However, if the initial design does not arrive in time, the rest of the schedule gets impacted.
One of the problems that I have experienced with User Interface Designers and similar creative people is that they do not work in pieces; they would like to look at the overall workflow for the product and then release a completed design. But the team does not work like this: it would like workflow designs feature by feature, so that the work can be done feature by feature (which makes logical sense).
Another option that could be postulated is that the Workflow Designer gets a period of 2-3 months before the start of the cycle, so that the Designer has enough time to make the design. This seems logical, but there are problems with it. The Workflow Designer does not work entirely on his / her own, but needs to work with the Product Manager and the team members (the team members are involved so that the team can figure out the technical cost of doing the workflow designs; some workflows may take more time and effort than others, and the contribution of the technical team in figuring this out is critical. This process can be iterative).
So how do you bring such creative members into the process?
- First and foremost, do not assume that these resources understand the critical nature of meeting their scheduled deliveries. You will need to spend much more time with these people and form a detailed plan for deliveries, having this discussion multiple times until an understanding has been formed.
- In my experience, it was also necessary to have 2 dates in the schedule with a few days' gap between them. The push was for the delivery to happen by the first date, but with the understanding that a delivery on the 2nd date would also work without threatening the schedule.
- It was also realized that there was a need for regular reminders, along with checking on the state of progress and updating the rest of the team on it. So the Project Manager set up a weekly meeting with the Workflow Designer to discuss progress and the deliverable, and to figure out alternatives if there was a delay.

Thursday, February 5, 2015

Emergency defect fixing: Giving local fixes for quick verification

During the process of defect fixing and verification, there is a standard process whereby a build is created (typically every day) and defect fixes are checked into it. This ensures that the defects which were fixed the previous day are available for testing in a proper installer, which the testing team can use just like the product that goes to the customer.
This process works pretty well, as long as everybody involved knows the process, and there are people with responsibilities for its different parts (for example, somebody who ensures that the build stability systems are in place, others who do a quick smoke test to ensure that the build is usable, and so on).
However, such a system cannot protect against the case where a defect has not been fixed, either fully or partially. In normal operations, it is common to have defects whose fixes are rejected by the testing team, or where some part of the defect was not fixed well and a new defect is filed for it.
When does this process not work? Consider the case where product development is nearing the end of the schedule. At this point, the defects to be fixed are restricted: only the defects allocated for fixing are passed on to the development team, and the same list is given to the testing team for verification. However, the cost of a failed defect fix can be fairly high.
A defect fix that has failed means that the build for that day is not ready for use, and this can be very expensive.
When this part of the schedule has been reached, there is a need for much closer interaction between the specific developer and the tester(s) for that defect fix. When the developer has made the fix, he/she works with the tester and provides the fix in a local build, made on the developer's machine, which the tester can quickly verify to the satisfaction of both. This goes a long way towards ensuring that the build that comes the next day is usable and that important fixes are not failing.
There are still some problems that can happen in this process. The local build may not incorporate changes made by other developers, and this dependency problem may still cause the defect fix to fail. However, the chance of this happening is low (or can be monitored by the developer to reduce the failure rate), and the practice goes a long way towards reducing risk in the development process near the end of the cycle.
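The dependency risk just described can be monitored with a simple check: compare the set of changes baked into the local build against what has landed on the mainline. This is a minimal sketch, not tied to any particular version-control tool; the change IDs and the helper name are hypothetical, and in a real setup both lists would come from a log query against the repository.

```python
def missing_upstream_changes(local_base_changes, mainline_changes):
    """Return the mainline change IDs absent from the local build's base.

    Both arguments are collections of change/commit identifiers (assumed
    integers here for the example).
    """
    return sorted(set(mainline_changes) - set(local_base_changes))

# Illustrative data: the local build was cut before changes 104 and 105 landed.
local_base = [101, 102, 103]
mainline = [101, 102, 103, 104, 105]
print(missing_upstream_changes(local_base, mainline))  # -> [104, 105]
```

If the result is non-empty, the developer knows the local verification may not match the next day's integrated build and can re-run the check after syncing.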
However, this requires close collaboration between the tester and the developer, and it is not really needed in the regular part of the development cycle, since there is an overhead involved.
