

Thursday, February 19, 2009

Joke about reviews

If you are in the software business, you must have heard about reviews. Reviews are an integral part of the software development process: they uncover mistakes and problems early and add a lot of value. Reviews can happen for a variety of artifacts (test plans, test cases, high-level design, low-level design, architecture, project plans, project proposals, code, and so on; the list of items that benefit from review is as long as the list of artifacts in a software cycle). In a future post I will cover reviews in more detail, but for now, here is an old joke that describes in very painful terms why reviews are necessary.

Why we need reviews.

In an ancient monastery in a far away place, a new monk arrived to join his brothers in copying books and scrolls in the monastery's scriptorium. He was assigned to work as a rubricator on copies of books that had already been copied by hand.

One day, while working on the monks' Book of Vows, he asked old Father Florian, the Armarius of the Scriptorium, 'Does not the copying by hand of other copies allow for chances of error? How do we know we are not copying the mistakes of someone else? Are they ever checked against the original?'

Fr. Florian was taken aback by the obvious logical observation of this youthful monk. 'A very good point, my son. I will take one of the latest copies of the Book of Vows down to the vault and compare it against the original.' Fr. Florian went down to the secured vault and began his verification.

A day passed and the monks began to worry, so they went down looking for the old priest. They were sure something must have happened. As they approached the vault they heard sobbing and wailing... they opened the door and found Fr. Florian crying over the new copy and the original, ancient Book of Vows, both open before him on the table. It was obvious to all that the poor man had been crying his old heart out for a long time.

'What is the problem, Reverend Father???' asked one of the monks.

'Oh, my Lord,' sobbed the priest. 'The word is CELEBRATE!!!' (not 'celibate')

And this is why we need reviews.


Saturday, February 14, 2009

Testing of World Wide Web sites

An increasing number of transactions happen on the internet. Whether it is shopping, news, social networking, email, or media sites, people have come to depend on them to an increasing degree. To ensure that these sites are dependable, they need to be tested. However, testing of internet sites cannot follow exactly the same process as that of client-server or similar systems. So the important question is: how can World Wide Web sites be tested?

For those familiar with client-server applications, testing of web sites starts with a certain advantage, since in a simplistic view, web sites are essentially client/server applications, with web servers and 'browser' clients. However, this picture has a number of complications built in: the interactions between HTML pages, the complexity of TCP/IP communications, a wide variety of Internet connections, client-side firewalls, applications that run in web pages (such as applets, JavaScript, and plug-in applications), and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.). To increase the fun and complexity, given the number of years the internet has been in existence, there is now a wide variety of servers and browsers (with different users having their own browser preferences), many browser versions in co-existence (and, as a result, small but sometimes significant differences between them), variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. Starting from a simple client-server architecture, the end result is that testing web sites can become a major ongoing effort. And that is not all; other considerations include:
• Estimating the expected loads on the server (e.g., number of hits per unit time) and the kind of performance required under such loads (such as web server response time and database query response times). For this purpose, determine what kinds of tools will be needed for performance testing (web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.); a minimal load-test sketch appears after this list.
• Determining the target audience, and also trying to determine the kind of browsers they will be using. What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
• Performance on the client side is becoming much more important. Some of the ways in which this performance is perceived include: how fast pages appear, and how fast animations, applets, etc. load and run. How well does the site compare with other sites in terms of performance?
• Will downtime for server and content maintenance/upgrades be allowed? How much? In the internet world, downtime is expected to be minimal to non-existent.
• What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested? Testing the security of internet sites is a major effort and needs to be handled professionally and systematically.
• How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.? Content management for a site needs to work correctly and transparently to the user.
• Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often? (A simple link-checker sketch also appears after this list.)
• Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
• How extensive or customized are the server logging and reporting requirements? Are they considered an integral part of the system, and do they require testing? Server logging, and the analysis of those logs, needs to be designed properly; otherwise mistakes in the captured data can lead to misleading reports and wrong decisions.
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
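To make the load-estimation point concrete, here is a minimal load-test sketch in Python, using only the standard library. It is a rough illustration rather than a real performance-testing tool: the URL, request count, and worker count are placeholder assumptions, and a serious effort would use a dedicated web load testing tool and agreed target numbers.

    # Minimal load-test sketch: fire N concurrent GET requests and
    # report response-time statistics. URL and counts are placeholders.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example.com/"   # hypothetical page under test
    REQUESTS = 50                 # simulated number of hits
    WORKERS = 10                  # simulated concurrent users

    def timed_get(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()           # include the full download in the timing
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        timings = list(pool.map(timed_get, range(REQUESTS)))

    print(f"median response: {statistics.median(timings):.3f}s")
    print(f"worst response:  {max(timings):.3f}s")

Even a toy like this makes the stakeholder discussion easier: once you can state 'the median response under 10 concurrent users is X seconds', the question of what load and response times are acceptable becomes a concrete one.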
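Similarly, for the link-validation item above, here is a small sketch of an automated link checker, again standard library only and with a placeholder start URL. A production checker would also need to handle relative URLs, redirects, authentication, and polite rate limiting.

    # Minimal link-checker sketch: fetch one page, collect absolute
    # links, and report the HTTP status of each. START_URL is a placeholder.
    import urllib.request
    from html.parser import HTMLParser

    START_URL = "http://example.com/"  # hypothetical page under test

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value and value.startswith("http"):
                        self.links.append(value)

    page = urllib.request.urlopen(START_URL, timeout=10)
    collector = LinkCollector()
    collector.feed(page.read().decode("utf-8", "replace"))

    for link in collector.links:
        try:
            status = urllib.request.urlopen(link, timeout=10).status
        except Exception as err:
            status = err
        print(link, "->", status)

Run on a schedule, even a simple checker like this effectively answers the 'how often?' question with 'continuously'.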


Sunday, February 8, 2009

What's the role of documentation in QA?

Sometimes, people who are new to the software line (and specifically new to the testing business) question how important documentation could be to regular quality work. After all, if the major work is about testing software and getting the bugs resolved, how important could documentation be? Well, the role of documentation in enabling the success of QA activities is critical. Note that documentation can be in electronic form as well; there is no necessity that it be only paper, and in fact, recent trends are towards electronic documents placed in source control, each in its specific directory location.
QA practices should be documented such that they are repeatable and not dependent on individual people. Software artifacts and processes should all be documented: requirement and design specifications; design documents such as architecture, HLD, and LLD; business rules; inspection reports; configuration control design and operational documents; code changes; software test plans; test cases; bug reports and the decision-making on important bugs; user manuals; and so on. This allows them to be referred to later, and proves very useful when project personnel change or the project is transitioned to other teams.
As a part of documentation, there needs to be a system for easily finding and obtaining documents and for determining which document contains a particular piece of information. Change management should be used for all documentation; otherwise you will find later that it is hard to figure out why something changed and what the reasons behind the change were.
One of the most common reasons for failures, overruns, or delays in a complex software project is poorly documented requirements specifications. Requirements specifications are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their date of birth while creating their profile'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project.
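To illustrate why testability matters, here is a hedged sketch of how such a requirement could map directly to an automated check. The create_profile function here is a hypothetical stand-in for the application code; the point is that 'the user must enter their date of birth' translates straight into assertions, while 'user-friendly' does not.

    # Sketch: a testable requirement becomes a concrete automated check.
    # create_profile() is hypothetical application code, shown inline.
    import pytest

    def create_profile(name, date_of_birth):
        # placeholder implementation of the requirement under test
        if not date_of_birth:
            raise ValueError("date of birth is required")
        return {"name": name, "date_of_birth": date_of_birth}

    def test_profile_requires_date_of_birth():
        with pytest.raises(ValueError):
            create_profile("alice", date_of_birth="")

    def test_profile_accepts_date_of_birth():
        profile = create_profile("alice", date_of_birth="1990-02-08")
        assert profile["date_of_birth"] == "1990-02-08"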
Care should be taken to involve ALL of a project's significant 'customers' and important stakeholders in the requirements process. 'Customers' could be in-house personnel or external, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible. This also helps ensure that later changes are minimized (they can never be eliminated).
Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.
Test plans need to be documented properly, with good change control, since a test plan forms the basis for determining the areas of testing, scope, responsibilities, and so on. The test plan is also the first level at which confidence in the project's test strategy is built.


Monday, February 2, 2009

Changing requirements and implications on testing

An ideal software development cycle involves a process whereby the requirements are frozen early and the entire cycle happens with those frozen requirements. And if requirements do need to change, then a major impact analysis needs to happen, and the change is thoroughly studied before it is accepted. However, in the real world (as is increasingly acknowledged by incremental and Agile software methodologies), requirements can and do change, and it would be better for the software industry to put a lot more effort into figuring out how to accommodate changing requirements. One of the groups most affected by changing requirements is the testing team, and one should evaluate how testers can respond to such changes. Let us start by calling it a common problem and a major headache, and then work out what we can do. Here are some steps:
• Work with the project's stakeholders early on to understand how requirements might change (stakeholders have a much better idea of whether the requirements are fully known and stable) so that alternate test plans and strategies can be worked out in advance, if possible.
• It's helpful if the application is initially designed in a manner that allows for adaptability, so that later changes do not require redoing the application from scratch, or at least so that the effort required for change is minimized.
• Coding practices such as commenting and documenting, if followed religiously, make handling changes easier for the developers.
• Another way to minimize the need for changing requirements is to present a prototype to the stakeholders and end users early enough in the cycle. This helps customers feel sure of their requirements and minimizes changes.
• The project's initial schedule should allow for some extra time commensurate with the possibility of changes. It is better to build such time into the schedule up front.
• If possible, and if there is some amount of flexibility in negotiating relations with the client, try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version. This however does not work if the changes affect the workflows directly.
• Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application. This should be possible if there is a good change control process in the project.
• Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. This is typically done by delineating a proper change control process and explaining this to the stakeholders along with examples if necessary. Only after this, let management or the customers decide if the changes are warranted.
• Changes have a major effect on test automation systems, especially if there is a change in the UI of the application. Hence, be sure that the effort put into setting up automated testing is commensurate with the rework expected if a change forces the test effort to be redone.
• Try to design some flexibility into automated test scripts. This is not easy, but if you have an early idea of what is likely to change, it should be possible (see the page-object sketch after this list).
• Focus initial automated testing on application aspects that are most likely to remain unchanged. This ensures that later test automation effort is done when there is some stability in the requirements.
• Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
• This last point may seem very strange to a test manager: focus less on detailed test plans and test cases and more on ad hoc testing; keep in mind, however, that this entails a certain risk.
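As an illustration of the 'flexibility in automated test scripts' point above, here is a sketch of the common page-object approach using Selenium. The element IDs and URL are hypothetical; the design point is that all locators live in one class, so a UI change means one edit in the page object rather than edits scattered across every test script.

    # Page-object sketch: locators are centralized so UI changes are
    # absorbed in one place. IDs and URL below are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        USERNAME = (By.ID, "username")      # update here if the UI changes
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "login-button")

        def __init__(self, driver):
            self.driver = driver

        def login(self, user, password):
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

    # Tests talk to the page object, never to raw locators.
    driver = webdriver.Chrome()
    driver.get("http://example.com/login")
    LoginPage(driver).login("tester", "secret")
    driver.quit()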
Overall, when requirements are changing, teams also need to be more flexible to respond to such changes.

