Wednesday, August 15, 2007

Software testing best practices

The objective of testing is to ensure a higher-quality software product. The better organized the testing is, the more likely the final product will have fewer defects. But it is not easy to determine how effective your testing actually is. One way is to benchmark it against a set of best practices, such as the ones listed below.
These best practices can be broken into levels: benchmarking starts at the lowest level, and once those practices are met, you move on to the higher ones. If, at the end, you find that you are meeting most of these benchmarks, there is a good chance that you have a very effective testing setup.

Start at the basic level:

  • Start with functional specifications
  • Reviews and inspections
  • Formal entry and exit criteria
  • Functional test variations
  • Multi-platform testing
  • Internal betas
  • Automated test execution
  • Beta programs
  • Daily builds

Functional specification: Enables test planning and test-case writing to start while development is still underway. Working with the functional specification also deepens the testers' understanding of customer workflows. Without a functional specification, the thoroughness required for good test design is hard to achieve.

Reviews: Software inspection is a well-known, efficient technique for finding defects in code.

Formal entry and exit criteria: Defining criteria for every step may seem excessive, but it creates a mindset that has already thought through the beginning and end of each stage and milestone, and it makes declaring a milestone much easier, since the Dev and QE teams have already agreed on what it means.

Functional test - variations: This involves varying the sets of input conditions to achieve different output conditions. If this is done in a comprehensive way, the blackbox testing can be considered close to complete.
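
As an illustration, here is a minimal sketch of input-variation testing in Python with pytest. The apply_discount function and its expected values are hypothetical, invented purely for this example:

    import pytest

    # Hypothetical function under test: applies a percentage discount.
    def apply_discount(price, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # Each tuple is one variation: a distinct set of input conditions
    # mapped to its expected output condition.
    @pytest.mark.parametrize("price, percent, expected", [
        (100.0, 0, 100.0),    # no discount
        (100.0, 50, 50.0),    # typical case
        (100.0, 100, 0.0),    # full discount
        (19.99, 15, 16.99),   # rounding behavior
    ])
    def test_apply_discount_variations(price, percent, expected):
        assert apply_discount(price, percent) == expected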

Multiple platform testing: Nowadays software is shipped on multiple platforms, and porting from one platform to another always introduces some changes. It is necessary to define a strategy for how much testing is required on each supported platform.
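
One possible way to encode such a strategy, sketched below with Python/pytest and hypothetical paths, is to keep platform-independent checks in one suite and mark platform-specific ones so they are skipped, rather than failed, on other ports:

    import sys
    import pytest

    # Hypothetical platform-specific behavior: where the application
    # stores its configuration file on each operating system.
    def config_path():
        if sys.platform.startswith("win"):
            return "C:\\ProgramData\\myapp\\config.ini"
        return "/etc/myapp/config.ini"

    # Runs on every platform: the contract is platform-independent.
    def test_config_path_is_not_empty():
        assert config_path()

    # Runs only on Windows builds and is skipped (not failed) elsewhere,
    # so a single suite can serve every port.
    @pytest.mark.skipif(not sys.platform.startswith("win"),
                        reason="Windows-specific configuration layout")
    def test_windows_config_location():
        assert config_path().startswith("C:\\ProgramData")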

Internal Betas: External betas involve sending software of a certain quality level to testers outside the company. If the development company is large enough, however, it can also distribute the software to its own employees first, which provides a good early beta-testing platform.

Automated test execution: Automation is an integral part of the testing process. It can replace a fair amount of the manual testing required, and it makes repetitive testing of software much easier. For example, if you have daily builds, automation can run a series of test cases that determine whether the build is usable or not, without human intervention.
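
Here is a minimal sketch of such a build-acceptance (smoke) script in Python; the binary path, flags, and checks are hypothetical stand-ins for whatever your daily build actually produces:

    import subprocess
    import sys

    # Hypothetical smoke checks: commands the freshly built binary must
    # complete successfully before the daily build is declared usable.
    SMOKE_CHECKS = [
        (["./build/myapp", "--version"], "version check"),
        (["./build/myapp", "--selftest"], "built-in self test"),
    ]

    def main():
        for command, label in SMOKE_CHECKS:
            result = subprocess.run(command, capture_output=True, timeout=60)
            if result.returncode != 0:
                print(f"SMOKE FAIL: {label}: {result.stderr.decode()!r}")
                sys.exit(1)  # a non-zero exit marks the build as unusable
            print(f"SMOKE PASS: {label}")
        print("Build accepted: all smoke checks passed")

    if __name__ == "__main__":
        main()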

Beta Programs: These are a very efficient way of putting the product in front of a wide cross-section of users, as close to end users as possible. They provide feedback on functionality as well as very wide coverage. A good example is CD/DVD-writing software, where a beta program provides coverage on a far larger number of burning devices than the in-house testing team could ever assemble.

Daily Builds: Establishing a daily-build process is very useful during development. Bugs are turned around faster, new features are available much sooner, and even if one build is not so good, the next one arrives soon after. It may seem like overhead, but it pays for itself.


Saturday, August 11, 2007

What is the Capability Maturity Model (CMM)

CMM is a framework developed by the Software Engineering Institute (SEI) that provides a general roadmap for process improvement. Software process capability describes the range of expected results that can be achieved by following the process. An organization's process capability determines what can be expected from it in terms of quality and productivity. The goal of process improvement is to improve this capability.

A maturity level is a well-defined evolutionary plateau toward achieving a mature software process. There are five well-defined maturity levels for a software process. These are:

1. Initial
2. Repeatable
3. Defined
4. Managed
5. Optimizing

The Initial process (Level 1) is essentially an ad hoc process with no formalized method for any activity. Basic project controls, for ensuring that activities are done properly and that the project plan is adhered to, are missing. In a crisis, project plans and development processes are abandoned in favor of a code-and-test approach. Success in such organizations depends solely on the quality and capability of individuals. The process capability is unpredictable because the process constantly changes. Organizations at this level benefit most from improving project management, quality assurance, and change control.

In the Repeatable process (Level 2), policies for managing a software project, and procedures to implement those policies, exist. Project management is well developed at this level. Some characteristics of such a process are: project commitments are realistic and based on past experience with similar projects; cost and schedule are tracked and problems resolved when they arise; formal configuration control mechanisms are in place; and software project standards are defined and followed. Essentially, the results obtained by this process can be repeated, because project planning and tracking are formal.

In the Defined process (Level 3), the organization has standardized on a software process, which is properly documented. A software process group exists in the organization that owns and manages the process. Each step of the process is carefully defined, with verifiable entry and exit criteria, methodologies for performing the step, and verification mechanisms for its output. In this process, both the development and the management processes are formal.

In the Managed process (Level 4), quantitative goals exist for both the process and the products. Data is collected from the software process and used to build models that characterize it, so measurement plays an important role at this level. Because of these models, the organization has good insight into the process and its deficiencies, and the results of using such a process can be predicted in quantitative terms.

In the Optimizing process (Level 5), the focus of the organization is on continuous process improvement. Data is collected and routinely analyzed to identify areas that can be strengthened to improve quality or productivity. New technologies and tools are introduced and their effects measured in an effort to improve the performance of the process. Best software engineering and management practices are used throughout the organization.


[Figure: CMM model]


Software Development Model: Spiral

What is the Spiral Model? A brief explanation:

The software development activities can be organized like a spiral that has many cycles. The radial dimension represents the cumulative cost incurred in accomplishing the steps done so far, and the angular dimension represents the progress made in completing each cycle of the spiral.

Each phase begins with identification of objectives for the cycle, the different alternatives that are possible for achieving the objectives, and the constraints that exist. This is the first quadrant of the cycle. The next step in the cycle is to evaluate these different alternatives based on the objectives and constraints. The focus of evaluation in this step is based on the risk perception for the project. Risks reflect chances that some of the objectives of the project may not be met. The next step is to develop strategies that resolve the uncertainties and risks. This step may involve activities such as benchmarking, simulation and prototyping. Next, the software is developed, keeping in mind the risks. Finally the next stage is planned.

For example: In round one, a concept of operation might be developed. The objectives are stated more precisely and quantitatively and the cost and other constraints are defined precisely. The risks here are typically whether or not the goals can be met within the constraints. The plan for the next phase will be developed, which will involve defining separate activities for the project. In round two the top-level requirements are developed. In succeeding rounds the actual development may be done.


You might have heard the term 'Software Quality Factors' before, or maybe you have not. So, what is it?

Quality factors are the factors that affect the quality of the software. The quality of a software product has three dimensions, and each dimension deals with a set of quality factors:


· Product Operation
  o Correctness
  o Reliability
  o Efficiency
  o Integrity
  o Usability
· Product Transition
  o Portability
  o Reusability
  o Interoperability
· Product Revision
  o Maintainability
  o Flexibility
  o Testability


More details about verification vs. validation

Verification & Validation

Essentially, the very nature of Quality Testing (QT) is broken into two aspects: verification testing and validation testing. It is thus possible to say:

QT = (Verification + Validation)

IEEE/ANSI defines verification testing as "The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase." This is somewhat vague, although verification testing is generally thought of as a proactive type of testing. Verification is, to a large extent, the set of Quality Control (QC) activities performed throughout the lifecycle that help assure that interim product deliverables meet their initial specifications. It is also the less understood of the two.

So, for example, if you are "Verifying Functional Design" then you have to consider what produced the functional design: the process of translating user requirements into a set of external human interfaces. This yields a functional design specification: it describes what the user can see, but nothing of what they cannot see. In verifying that type of document, the goal is to determine how successfully the user requirements were incorporated into the functional design. The requirements document serves as the source document against which to verify the functional design, and the best practice is to look for unwarranted additions or omissions. Or consider the process of "Verifying Internal Design." Here the functional specification is broken up into a detailed set of internal attributes: data structures, data diagrams, flow diagrams, and so on. This yields an internal design specification: it describes how the product has to be built. During verification of this document we can begin looking at the limits and constraints of the product, its basic boundary conditions, and we also learn about performance characteristics and possible failure conditions or scenarios.
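
To make the boundary-condition idea concrete, here is a small hypothetical sketch in Python: assume the internal design specifies a 256-character name field (a limit invented for illustration), and test just below, exactly at, and just above it:

    # Hypothetical constraint taken from an internal design specification.
    MAX_NAME_LENGTH = 256

    def store_name(name):
        if len(name) > MAX_NAME_LENGTH:
            raise ValueError("name exceeds the designed field size")
        return name

    # Classic boundary-value checks around the documented limit.
    def test_name_boundaries():
        assert store_name("a" * 255)   # just below the limit
        assert store_name("a" * 256)   # exactly at the limit
        try:
            store_name("a" * 257)      # just above the limit
            assert False, "input above the limit should have been rejected"
        except ValueError:
            pass  # rejection at the boundary is the expected behavior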

These tend to be considered proactive activities because you are attempting to catch problems as early as possible and you are verifying whether the "right" thing is being done. This is different from validation, which does not ask "are we doing the right thing?" but rather "are we doing what we said was the right thing?" In other words, during verification you are, to a large extent, defining what the "right thing to do" actually is; in validation you are making sure you adhered to those earlier definitions. That is why validation is inherently more reactive in nature than verification.

The IEEE/ANSI definition of validation is "The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements."

These requirements can refer to a lot of things. The idea of verification is asking whether the requirements were formulated correctly; the idea of validation is asking whether the product was constructed in accordance with those correctly-formulated requirements. In this sense, validation usually refers to the "test phase" of the lifecycle, which assures that the end product (e.g., system, application, etc.) meets its stated (and sometimes implied) specifications. There are, in general, eight validation axioms, as they are usually called, and they are given here:

1. Testing can be used to show the presence of errors, but never their absence.
2. One of the most difficult problems in testing is knowing when to stop.
3. Avoid unplanned, non-reusable, throwaway test cases.
4. A necessary part of a test case is a definition of the expected output or result. A comparison should be made of the actual versus the expected result.
5. Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions. "Invalid" is defined as a condition that is outside the set of valid conditions and should be diagnosed as such by the program being tested. (Axioms 4 and 5 are illustrated in the sketch after this list.)
6. Test cases must be written to generate desired output conditions. Do not think just in terms of inputs; determine the input required to generate a pre-designed set of outputs.
7. A program should not be tested (except in unit and integration phases) by the person or organization that developed it.
8. The number of undiscovered errors is directly proportional to the number of errors already discovered.
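
As a concrete illustration of axioms 4 and 5, here is a minimal Python sketch; parse_celsius is a hypothetical function invented for the example:

    # Hypothetical function under test: parses a temperature reading.
    def parse_celsius(text):
        value = float(text)  # raises ValueError on malformed input
        if not -273.15 <= value <= 1000:
            raise ValueError("reading outside the physically valid range")
        return value

    # Axiom 4: define the expected result first, then compare it
    # explicitly against the actual result.
    def test_valid_input_against_expected_result():
        expected = 21.5
        actual = parse_celsius("21.5")
        assert actual == expected

    # Axiom 5: invalid and unexpected inputs must be diagnosed as such.
    def test_invalid_input_is_diagnosed():
        for bad in ["", "abc", "-300", "9999"]:
            try:
                parse_celsius(bad)
                assert False, f"{bad!r} should have been rejected"
            except ValueError:
                pass  # rejection is the expected outcome here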


Thursday, August 9, 2007

What is blackbox testing?

"Black box" is the name used for the devices in planes that record what happens during a flight and are useful if the plane crashes. The software blackbox, however, is a different concept: it refers to treating the software application as a box whose internals you do not know, and testing it only on the basis of inputs and outputs.
There are many definitions of blackbox testing:
Also known as functional testing: a type of software testing in which the internal workings of the item being tested are not known to the person doing the testing. The tester knows only the inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester does not have access to the programming code and needs no knowledge of the program beyond its specifications.
When black box testing is applied to software engineering, the tester knows only the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For the same reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. Black box tests are also the only form of test the customer is likely to understand; therefore, black box testing is absolutely mandatory for acceptance testing. The customer must be able to understand these tests, so that he or she will know for sure whether or not you have met the contract requirements.
The basis of the black box testing strategy lies in selecting appropriate data for each piece of functionality and testing it against the functional specifications, in order to check for normal and abnormal behavior of the system. Nowadays, it is becoming common to route testing work to a third party, since the developer of the system knows too much about its internal logic and coding to test the application impartially.
In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to each particular action.
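
As a tiny illustration of the black box stance, the test below is written purely from a stated specification of a leap-year function, never from its source code; Python's calendar.isleap stands in for the implementation under test:

    # Black box view: only the specified inputs and expected outputs
    # are known, not the internal implementation.
    from calendar import isleap as is_leap_year

    def test_leap_year_specification():
        assert is_leap_year(2004)        # divisible by 4
        assert not is_leap_year(1900)    # divisible by 100 but not by 400
        assert is_leap_year(2000)        # divisible by 400
        assert not is_leap_year(2007)    # an ordinary year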


Sunday, July 29, 2007

Software test plan and templates

Now that previous posts have discussed the need for a test plan in some detail, let's talk a bit about what the test plan covers:

Test plans, also called test protocols, are formal documents that typically outline the requirements, activities, resources, documentation, and schedules to be completed.

A sample test plan (a template) is available at the following location: sqatester.com. The actual Word document template is available at this link.

A lot of detail about what a test plan covers is included on the Wikipedia page: the contents of the test plan, references (documents that support it, such as the project plan, SRS, and design documents (HLD, LLD)), issues and risks, features included in and excluded from testing, and the overall approach. It is a good page to read if you want a lot of detail about what a test plan should cover.

For a shorter overview of what a test plan should cover, refer to the breakdown below, where the test plan is divided into three main sections (a compressed skeleton follows the list):
1. Beginning of the plan: contains background information such as the header, title, author, date, project number/code, project description, references to related documents, an explanation of the test objectives, definitions, and key words.
2. Middle section of the plan: this is where the plan gets some meat, including the data collection / sampling plan, the test environment and its setup, the resources to be used, the assumptions being made, customer-specific needs, and how to report test-related problems such as failures.
3. Ending section of the plan: typically contains analysis information: the statistical techniques to apply, the original testing need, a definition of test success or failure, how to conduct data analysis, what to do if testing yields insufficient information, how to present the final information, and references.
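
As a compressed sketch, the three sections above might be laid out like this (the field names are suggestions, not a standard):

    TEST PLAN: <project name>
      Beginning: author / date / project code; project description;
                 references (project plan, SRS, HLD, LLD);
                 test objectives, definitions, key words
      Middle:    test environment and setup; data collection / sampling plan;
                 resources and assumptions; customer-specific needs;
                 problem and failure reporting procedure
      Ending:    success / failure criteria; statistical techniques;
                 data analysis approach; handling of insufficient results;
                 final reporting; references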

