So you have decided that your software development cycle needs code coverage, and you want to move ahead. Besides making sure that team members are aware of the principles and the methods used, another important decision is which tools to use. A wide variety of tools is available, and it is hard to recommend any single one, since the right choice depends on your targets and on the language being used (C++, Python, Java, web languages, etc.). Here is a sampling of some of the popular tools:
- Cobertura: Cobertura is a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage. It is based on jcoverage.
- Quilt: Quilt is a Java software development tool that measures coverage, the extent to which unit testing exercises the software under test. It is optimized for use with the JUnit unit test package, the Ant Java build facility, and the Maven project management toolkit.
- Clover: Commercial tool. Clover is designed to measure code coverage in a way that fits seamlessly with your current development environment and practices, whatever they may be. Clover's IDE Plugins provide developers with a way to quickly measure code coverage without having to leave the IDE. Clover's Ant and Maven integrations allow coverage measurement to be performed in Automated Build and Continuous Integration systems and reports generated to be shared by the team.
- NUnit: Meant for unit testing. NUnit is a unit-testing framework for all .Net languages. It is written entirely in C# and has been completely redesigned to take advantage of many .NET language features, for example custom attributes and other reflection related capabilities. NUnit brings xUnit to all .NET languages.
- NCover: Commercial tool. NCover 3 includes symbol and branch point coverage, and also adds two new metrics: cyclomatic complexity and method visit coverage. Cyclomatic complexity describes the number of independent paths through your methods; it is a great indicator of when you should refactor, and code with low cyclomatic complexity helps guarantee that new developers on your team can get up to speed fairly quickly. Method visit coverage previously existed in NCover Explorer, but only worked when you had symbol points for a method.
- Jester: Jester finds code that is not covered by tests. Jester makes some change to your code, runs your tests, and if the tests pass Jester displays a message saying what it changed. Jester includes a script for generating web pages that show the changes made that did not cause the tests to fail.
- Emma: EMMA is an open-source toolkit for measuring and reporting Java code coverage. EMMA distinguishes itself from other tools by going after a unique feature combination: support for large-scale enterprise software development while keeping individual developer's work fast and iterative.
- Koalog Code Coverage: Commercial license. Koalog Code Coverage is a code coverage computation application written in the Java programming language. Koalog Code Coverage allows you to measure the efficiency of your tests suite, but also to discover dead code in your project.
- EclEmma: EclEmma is a free Java code coverage tool for Eclipse, available under the Eclipse Public License. Internally it is based on the great EMMA Java code coverage tool, trying to adopt EMMA's philosophy for the Eclipse workbench.
- GroboCodeCoverage: There are several commercially available code coverage tools for Java, but they all require a large fee to use. This is a 100% Pure Java implementation of a Code Coverage tool. It uses Jakarta's BCEL platform to post-compile class files to add logging statements for tracking coverage. An old tool.
- Hansel: Hansel is an extension to JUnit that adds code coverage testing to the testing framework. What makes Hansel different from other code coverage tools? Most tools generate coverage reports from a test run of all available tests, but a much more useful piece of information is how much of the code that a given test is supposed to test is actually covered. Hansel gives you this information.
- CodeCover: CodeCover is a free glass-box testing tool developed in 2007 at the University of Stuttgart. CodeCover measures statement, branch, loop, and MC/DC coverage. CodeCover uses the template engine Velocity.
- rcov: A code coverage tool for Ruby, 20-300 times faster than previous tools. It offers multiple analysis modes (standard, bogo-profile, "intentional testing", dependency analysis) and can detect uncovered code introduced since the last run ("differential code coverage").
- Insure++: Measures coverage of an application's source code as it is exercised by functional tests.
- iSYSTEM winIDEA: Measures coverage on a wide variety of embedded processors. It works by recording execution directly on the hardware, in real time, without instrumenting code or modifying the program.
- LDRA Testbed: Measures statement coverage, branch/decision coverage, LCSAJ Coverage, procedure/function call coverage, branch condition coverage, branch condition combination coverage and modified condition decision coverage (MC/DC) for DO-178B Level A.
- VB Watch: Visual Basic code coverage and performance analysis tool
- BullseyeCoverage: C and C++ code coverage tool
- XDebug: PHP debugging tool, including code coverage
If you know of more tools, please add in the comments.
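It can be hard to picture what these tools actually record without an example. The sketch below (Python, using only the standard-library `trace` module, and not tied to any specific tool above; the `sign` and `lines_hit` names are invented for illustration) shows the basic idea they all share: run code under instrumentation and map execution back to source lines.

```python
import trace

def sign(x):
    if x < 0:
        return -1
    return 1

def lines_hit(func, *argsets):
    """Run func once per argument tuple; return the set of executed line numbers."""
    t = trace.Trace(count=True, trace=False)
    for args in argsets:
        t.runfunc(func, *args)
    # counts maps (filename, lineno) -> number of times that line executed
    return {lineno for (_filename, lineno) in t.results().counts}

one_branch = lines_hit(sign, (3,))          # only the x >= 0 path runs
both_branches = lines_hit(sign, (3,), (-3,))  # now the x < 0 path runs too
print(len(both_branches) - len(one_branch))   # -> 1: the "return -1" line
```

Real tools add reporting, build integration, and branch/condition metrics on top, but the core record-and-map step is the same.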
Sunday, March 22, 2009
Code coverage tools
Posted by Ashish Agarwal at 3/22/2009 08:51:00 AM 0 comments
Labels: Code Coverage, List, Software, Tools
Saturday, March 21, 2009
How important is 100% code coverage?
It can be hard to achieve 100% code coverage, and a number of people do not believe in going the extra distance it takes to get there. Others do not feel comfortable unless they reach 100% code coverage, and are passionate about it. What are the benefits of trying to attain 100% code coverage? And what exactly is 100% code coverage?
Let's start with the meaning of code coverage (in case you missed my previous post, where I explained it in more detail). Code coverage is a measurement that answers the question of how much of an application's source code is run when its unit tests are executed, and, more importantly, how much code the unit tests miss. If 100% of the lines of source code are run when the unit tests run, you have 100% code coverage. Seems fairly simple, right? Have you ever wondered why Service Level Agreements don't promise 100%, but 99%, or even 99.9%? That example is just an aside to illustrate that getting to 100% is not easy in any field. Let's take this further.
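To make the definition concrete, here is a minimal sketch in Python that computes a line-coverage figure for a single function. It uses only the standard library, and the `classify`/`line_coverage` names are invented for the example.

```python
import dis
import trace

def classify(n):
    if n < 0:
        raise ValueError("negative input")  # error handling, easy to forget in tests
    if n == 0:
        return "zero"
    return "positive"

def line_coverage(func, calls):
    """Return (covered, total) counts of executable lines in func over the calls."""
    t = trace.Trace(count=True, trace=False)
    for args in calls:
        t.runfunc(func, *args)
    hit = {lineno for (_f, lineno) in t.results().counts}
    all_lines = {lineno for (_off, lineno) in dis.findlinestarts(func.__code__)
                 if lineno is not None}
    return len(all_lines & hit), len(all_lines)

covered, total = line_coverage(classify, [(5,), (0,)])
print(f"{covered}/{total} lines covered")  # the n < 0 branch was never run
```

Here the coverage is less than 100% because no test ever passes a negative number, so the `raise` line never executes; that is exactly the kind of gap the measurement exposes.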
When you write unit test cases and then execute them, the principle is to ensure that the application works as expected. So if your test cases cover more of the code, the quality of a higher percentage of the code has been verified through testing. In practice, not everything goes by theory: while 100% code coverage does not guarantee that an application has no bugs, your confidence in the quality of your code increases significantly the higher your coverage is.
Many people feel that pushing past 95% code coverage is futile; at the same time, I believe that settling for 95% is not the way to go. 95% is a great interim target if you have not measured code coverage before, but anything less than 100% is a path you do not want to take. If your testing covers only 95% of the code, you are still risking the untested 5% - that section may be harmless, but it may also hide big bugs that could ruin your happiness later. You allow exceptions to creep in, and pretty soon pressure causes you to make more exceptions, because an example has already been set. There will always be a niggling feeling that the untested portions of the code can come back to haunt you later.
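To see why that last 5% matters, consider this hypothetical Python example (the function and its rules are invented for illustration): the test suite passes and covers every line except one branch, and that one branch is exactly where the bug lives.

```python
def parse_age(text):
    """Parse an age field; a bug hides in the branch the tests never reach."""
    value = int(text)
    if value < 0:
        return 0  # Bug: silently clamps bad input instead of rejecting it
    return value

# The suite exercises most lines and passes, so coverage looks healthy...
assert parse_age("42") == 42
assert parse_age("0") == 0

# ...but the untested branch quietly corrupts data instead of failing loudly:
print(parse_age("-5"))  # -> 0, where rejecting the input was intended
```

Nothing in the passing suite hints at the problem; only a coverage report (or a production incident) reveals that the negative path was never checked.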
Reaching there is not easy. It takes a lot of hard work to get everybody to target 100% code coverage, and it needs determination when the temptation is strong to leave things in the 90s (even more so when sections of the code are older and may not have seen many changes).
Posted by Ashish Agarwal at 3/21/2009 09:46:00 PM 0 comments
Labels: Benefits, Code Coverage, Testing
What is code coverage ?
People know what software testing is, and most people in the profession can differentiate between white box and black box testing. However, when you get into more detail and look at how testing can provide even greater value, the benefit of measures such as code coverage becomes apparent. At the same time, a large number of software professionals are not even aware of what code coverage is and what its key benefits are. So the idea of this article is to try and articulate some of those benefits.
So what is Code Coverage?
Well, code coverage is a measure used in software testing that tries to answer the question of the degree to which an application's source code has been tested. Traditional black box testing, with its focus on functional testing, cannot come close to answering this question, although white box testing does come closer. In fact, if you examine code coverage practices in detail, it is fair to say that code coverage is a form of testing that inspects the code directly, and is therefore a form of white box testing.
Code coverage techniques were among the first techniques invented for systematic software testing. For proponents of the test plan / test case based method of testing, code coverage works on the principle that it is entirely possible for sections of a software application to remain untouched by test data, and as a result it is not possible to say, with any degree of certainty, that those sections contain no residual errors.
How do you actually go about doing code coverage?
Code coverage requires support from engineering to proceed. Why is this so? Measuring coverage in practice requires a different procedure from the normal software build: the target application is built with special options or libraries, and/or run under a special environment, so that every function exercised (executed) in the program is mapped back to points in the source code. This systematic process (which requires effort whose value may not seem immediately clear to individual developers and QE) allows developers and quality assurance personnel to look for parts of a system that are rarely or never accessed under normal conditions (error handling and the like), and helps reassure test engineers that the most important conditions have been tested. The output is then analysed to see which areas of the code have not been exercised, and the tests are updated to include those areas as necessary. Once this exercise has been completed (and it may need to be repeated regularly while the code is under development), it gives a much higher level of confidence in the overall quality of the code.
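That instrument / run / analyse / extend-the-tests cycle can be sketched in a few lines of Python using the standard-library `trace` module; the `discount` function and helper names here are invented for illustration.

```python
import trace

def discount(price, is_member):
    if is_member:
        return price - price // 10  # members get 10% off
    return price

def run_suite(tests):
    """Run (func, args, expected) cases under a tracer; return executed line numbers."""
    t = trace.Trace(count=True, trace=False)
    for func, args, expected in tests:
        assert t.runfunc(func, *args) == expected
    return {lineno for (_f, lineno) in t.results().counts}

# Pass 1: only the member path is tested.
suite = [(discount, (100, True), 90)]
covered_before = run_suite(suite)

# Analysis of the trace shows the non-member branch was never exercised,
# so the suite is extended and the measurement repeated.
suite.append((discount, (100, False), 100))
covered_after = run_suite(suite)
print(len(covered_after - covered_before))  # -> 1: the "return price" line
```

Production setups do the same loop at larger scale, with the instrumentation coming from a build flag or agent rather than an in-process tracer.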
Posted by Ashish Agarwal at 3/21/2009 10:29:00 AM 0 comments
Labels: Code Coverage, QE, Quality
Tuesday, March 17, 2009
Articles on the product development cycle
This is a series of articles on the product development cycle, meant for teams developing either a new product or a new version of an existing one. The articles illustrate the different stages of the development cycle, although some of the stages can overlap:
Requirements Gathering (link)
Requirements Gathering contd .. (link)
Requirements Planning - Template (link)
Planning a patch or minor release (link)
Rolling out the patch (link)
What is a minor / dot release ? (link)
Planning a minor / dot release - Challenges (link)
Actual Kickoff of Development Effort (link)
PreRelease / Beta planning (link)
Planning for metrics (link)
To be contd ..
Posted by Ashish Agarwal at 3/17/2009 09:34:00 PM 1 comments
Labels: Cycle, Development, Product
Monday, March 16, 2009
Questions regarding software testing and functionality
A couple of questions related to software testing, especially with regard to functionality...
What if the application has functionality that wasn't in the requirements?
One might wonder how an application could have functionality that was not in the requirements, but it is possible. In a badly controlled software project, functionality may be included based on direct requests from clients; another case is when the project has been partly done by another company, or is a migration project. In such cases, the software may contain a lot of undocumented functionality. The implications of such functionality are the extra effort required for testing, documentation, bug fixing, and internationalization.
Given the impact, serious effort is needed to determine whether an application has significant unexpected or hidden functionality, and the very fact that this analysis may be necessary indicates problems in the software development process. What do you do if such functionality is found? If it is not necessary to the purpose of the application, a decision should be taken on whether to remove it, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If it is not removed, analysis is needed to determine the added testing or regression testing required (and such extra effort may be non-trivial, or may add to the overall risks). Management should be made aware of any significant added risks resulting from the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, it may not be a significant risk; in no case, however, should such functionality be treated casually.
What if the software is so buggy it can't really be tested at all?
One would not like to be in such a situation, but it can happen easily enough. If the software cycle has been under a lot of stress in the design and development phases, the code can be really buggy; further, if processes around reviews and coding standards are not followed, the software can end up really buggy. What does QE do in this case, given that they still need to do their job and complete testing?
The best bet in this situation is for the testers to report whatever bugs or blocking problems initially show up, with the focus on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with documentation as evidence of the problem. If required, development can be tasked with only bug fixing and no new feature work until the code is much more stable; schedules can also be extended.
Posted by Ashish Agarwal at 3/16/2009 09:39:00 AM 0 comments
Labels: Development, Effort, Functionality, Requirements, Testing
Saturday, March 7, 2009
What are Web Applications? (WebApps)
You must have been hearing about Web Applications for a long time now; these days, you even hear of Web 2.0 apps. But what exactly are Web Applications, and how do they affect you?
In the early days of the web, web sites consisted of static pages, which severely limited interaction with the user since there was no interactivity or very limited interactivity. In the early 1990s, this limitation was removed when web servers were enhanced to allow communication with server-side custom scripts. As a result, applications were no longer just static brochure-ware, edited only by those who knew the arcane mysteries of HTML; with this single change, normal users could interact with the application for the first time. The trend towards increased interactivity has continued apace with the advent of “Web 2.0”, a term that encompasses many existing technologies, but heavily features highly interactive, user-centric, web-aware applications.
Web-based applications are computer programs that execute in a web browser environment (the overall environment could be a closed group intranet, or a public network such as the internet). An example of such an application would be an online store such as Amazon.com accessed via Firefox or Internet Explorer. Web applications are popular due to the ubiquity of web browsers, and the convenience of using a web browser as a client, sometimes called a thin client. The ability to update and maintain web applications without distributing and installing software on potentially thousands of client computers is a key reason for their popularity.
To put it even more simply, a Web application is just an application deployed on the Web. It is a Web page, or a series of Web pages, that allows users to accomplish a task: obtaining information and forms, shopping, applying for a job, listening to Internet radio, or any of the many activities possible on the Web. To use a Web application, a user needs to know the URL for the application, and possibly a name and password. Another way to think of a Web application is as a Web site offering a great deal of functionality. A web application can provide any function that might historically be found on a desktop computer: there are web applications to provide weather information for your locale, to track sales calls or expenses for a sales force, or to cover any topic at all.
Posted by Ashish Agarwal at 3/07/2009 07:55:00 PM 0 comments
Labels: Definition, Web Applications, WebApp
Sunday, March 1, 2009
Testing of Web Applications
The quality of a web application can be pretty evident right from the onset of testing. Some of the key things to check for, visible right at the beginning, are:
- Slow response time
- Problems with the accuracy of information
- Bad design, workflow problems, or lack of ease of use, which compel the user to click over to a competitor's site
Problems such as these that are easily visible translate directly into loss of users, declining or stagnant sales, and a very poor image of the company.
These are outcomes that companies seek to avoid at any cost, which is why a strong QE effort is needed. As part of that effort, the following techniques can be used for more thorough checking:
1. Good Functionality testing - This sort of testing makes sure that the features that are visible to a user and affect interactions work properly and as designed.
Some of the objects to cover in a WebApp include: user-enterable forms, searches and their results, pop-up windows (which most users hate, given their history), and shopping carts.
2. Usability testing - Many users have a low tolerance for anything that is difficult to use, which makes a usability testing program a critical part of testing any WebApp. A user's first impression is very important; what makes this more complex is that nowadays applications have become complicated and cluttered with an increasing number of features.
The main steps involved in usability testing are:
- Identify the purpose of the WebApp.
- Identify the intended users.
- Define the tests, review them for thoroughness and conduct usability testing.
- Collect information through various mechanisms.
- Carry out an analysis of the acquired information.
- Make the necessary changes based on the acquired information.
3. Navigation testing - Navigation testing makes sure that all navigation syntax and semantics are exercised, to uncover any navigation errors. It should never happen that a user clicks on a navigation aid and either reaches a dead end or heads off in the wrong direction.
4. Forms testing - WebApps that use forms require tests to ensure that each field works properly (including validations, such as not allowing users to enter more than a certain amount of text, or not allowing required fields to be left blank) and that the form posts all the data as intended by the designer.
5. Content testing - Content is evaluated at both a syntactic and a semantic level. At the syntactic level, spelling, punctuation, and grammar are assessed; at the semantic level, correctness, consistency, and lack of ambiguity are assessed. It creates a very bad impression if the user finds spelling mistakes (the user assumes that something must be wrong if the company put up incorrect spelling or grammar and never caught it, or that the company does not care about problems on its site).
6. Compatibility testing - This testing is done by executing the WebApp under every browser/platform combination, to ensure that the web application works properly under different environments. This sort of testing is easier said than done, since the number of browser and operating system combinations in the market is huge.
7. Performance testing - This testing evaluates system performance under normal and heavy usage. An application that takes long to respond may frustrate users, who may then move to a competitor's site. This testing ensures that the website's server responds to browser requests within defined parameters; the system should work perfectly and speedily under normal expected usage and, if possible, should handle some amount of extra load.
8. Load testing - The purpose of this testing is to reproduce real-world conditions, typically by simulating many users simultaneously accessing the web application. Companies typically use automated test tools to conduct a valid load test, since such tools can emulate thousands of users sending simultaneous requests to the application. This is critical, as failure under heavy load does not convey a good impression, and may leave the system susceptible to attacks that recreate heavy loads.
9. Security testing - Security is one of the primary concerns when communicating and conducting business over the internet. One break-in can spoil a company's reputation and lead to loss of business, theft of user data, and other cases with horrible consequences. Finding the vulnerabilities in an application that could grant an unauthorized user access to the system is important; equally important is being able to track all access to the system and to scan those accesses frequently.
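As a small, hypothetical illustration of the forms-testing point above, here is a toy signup-form validator in Python with the kind of field-level checks described; the function name, field names, and rules are all invented for the example.

```python
def validate_signup(form):
    """Return a list of error messages for a signup form given as a dict."""
    errors = []
    name = form.get("name", "").strip()
    if not name:
        errors.append("name must not be blank")
    if len(name) > 50:
        errors.append("name must be at most 50 characters")
    email = form.get("email", "")
    if "@" not in email:
        errors.append("email must contain '@'")
    return errors

# A forms test exercises both valid and invalid submissions:
assert validate_signup({"name": "Ada", "email": "ada@example.com"}) == []
assert "name must not be blank" in validate_signup({"email": "a@b.c"})
assert "email must contain '@'" in validate_signup({"name": "Ada", "email": "nope"})
print("all form validation checks passed")
```

In a real WebApp these checks run on the server (and often again in the browser), and the tests would also post the form end-to-end to verify that all the data arrives as the designer intended.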
Posted by Ashish Agarwal at 3/01/2009 10:29:00 PM 0 comments
Labels: Load, Performance, Security, Testing, Web Applications, WebApp
WebApp testing - a short summary
A WebApp or Web Application is a type of application / software that is accessed via a web browser (such as Internet Explorer, Firefox, Opera, Safari) over a network such as the Internet or an intranet. It is typically coded in a browser-supported language (such as HTML, JavaScript, Java, etc.) with the browser environment making the application executable.
With such applications becoming more widespread, testing them is a much bigger undertaking than it used to be. Here is a short summary of WebApp testing; future posts will explore this area in more detail.
WebApp testing is a collection of related activities with the goal of uncovering errors in Web Applications related to content, function, usability, navigability, performance, capacity, and security. To accomplish this, a testing strategy that encompasses both reviews and executable testing is applied throughout the Web engineering process.
Generally, testing of Web Applications is done by the same set of people who would be involved if the application were a normal client-server application, so WebApp testing is done by web engineers and managers, customers, end-users, and other stakeholders. One piece of generic testing advice is especially relevant for Web Applications: testing should not wait until the project is finished. It should start before a single line of code is written.
Testing is the process of finding errors and correcting them, and the same philosophy applies to Web Applications. In fact, WebApp testing is a more challenging task for web engineers, as these applications reside on a network and are accessed from varied environments encompassing different operating systems, browsers, and platforms. With such varied environments, the possibility of errors increases.
Web-based applications present new challenges, some of which are:
1. Short release cycles
2. Constantly changing technology
3. Possible huge number of users during initial website launch
4. Inability to control the user's running environment
5. 24-hour availability of the web site.
Posted by Ashish Agarwal at 3/01/2009 10:11:00 AM 0 comments
Labels: Browsers, Internet, Intranet, Network, Processes, Testing, Web Applications, WebApp