Friday, May 17, 2013

Using automated scripts to test components that need to be integrated into the software product

Most modern software products use components as part of their system. Consider the example we use from time to time: software that generates greeting cards from videos and images that the user provides. In such a product, the software needs to be able to read a large number of different video formats and use them in the greeting card being generated. Developing that functionality in-house would be very inefficient and would take much longer than using external components (free or paid) that already provide it. Most applications use a similar kind of architecture, and hence the software development schedule incorporates the need to take ongoing versions of these components during the product cycle.
However, using such components also adds a large amount of risk to the software development project. External components typically do not follow the same schedule as the software application. In addition, an external component can have a release whose quality is suspect or unknown, and taking such a component is a problem. We were incorporating some open source components for which we knew the latest release could sometimes have quality problems (and hence we would always take the previous stable release), but for a couple of years we ran into problems even in a case where we were paying a vendor for the component delivery.
The component had quality issues which indicated that a comprehensive quality scan was not happening at the vendor's end, and these issues would become clear only when detailed testing happened at our end (in one case, a significant problem surfaced only during testing conducted by a Beta tester). Even while discussions were happening with the vendor to synchronize our testing policies, there was a risk associated with the component that needed to be handled.
In our case, given the number of components that we integrated and, in some cases, our lack of control over the schedule of the external component, we needed to figure out a way to manage these risks. The only way we could do this was to do much more comprehensive testing at our end. However, given that we had an iterative cycle where we would receive a component, report issues, get a new version of the component, and so on, it was very expensive in terms of testing time to keep doing comprehensive testing of these components every time a new version arrived.
We did have a budget for automation of some of our test cases, and we decided to focus it on these components. The execution plan was simple: prioritize the components that were received multiple times, and automate the major test cases for those components. Running these automated cases took significantly less time than manual testing and found defects much earlier. This also gave us a learning that we shared with other teams, since the benefit of test automation in these cases was much higher; after all, the test-defect-fix-test cycle was much more efficient when the code was written by the product's own developers than when it related to a component written by an external team.
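To make this concrete, the kind of automation described above can be sketched as a small smoke-test harness that exercises each format the product must support whenever a new component drop arrives. This is a minimal illustrative sketch, not the actual scripts used: the component interface (a decoder object with a `decode` method) and the `FakeDecoder` stand-in are hypothetical, and a real harness would wrap the vendor's actual API and real sample files.

```python
# Hypothetical smoke-test harness for an external video-decoding component.
# FORMATS lists the formats the product must support; one representative
# sample per format is decoded, and pass/fail results are collected so the
# same checks can be rerun cheaply on every new component version.

FORMATS = ["mp4", "avi", "mov", "wmv"]

class FakeDecoder:
    """Stand-in for the external component; fails on 'wmv' to show a regression."""
    def decode(self, sample_path):
        if sample_path.endswith(".wmv"):
            raise ValueError("unsupported codec")
        return {"frames": 1}  # a real decoder would return decoded frames

def smoke_test(decoder, formats=FORMATS):
    """Attempt one decode per format; record 'pass' or the failure reason."""
    results = {}
    for fmt in formats:
        sample = "sample." + fmt  # hypothetical sample file per format
        try:
            decoder.decode(sample)
            results[fmt] = "pass"
        except Exception as exc:
            results[fmt] = "fail: " + str(exc)
    return results

report = smoke_test(FakeDecoder())
failures = {fmt: msg for fmt, msg in report.items() if msg != "pass"}
print(report)
```

Because the harness is data-driven (a list of formats and sample files), adding coverage for a new format is a one-line change, which is what made rerunning the suite on every component drop cheap compared with manual testing.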
