

Showing posts with label Metrics. Show all posts

Wednesday, April 3, 2019

Taking the time to define and design metrics

I recall my initial years in software products, when there was much less focus on metrics. In some projects there was a question of accounting for defects and handling them, and the daily defect count was kept on a whiteboard, but that was about the extent of it. Over time, however, this has changed. Software development organizations have come to realize that there is a lot of optimization that can be done, such as in these areas:
1. Defect accounting - How many defects are generated by each team and each team member, how many of these defects move towards being fixed vs. being withdrawn, how many people generate defects that are critical vs. trivial, and so on.
2. Coding work - efficiency, code review records, number of lines of code written, and so on.
You get the idea: there are a number of ways in which organizations try to gather information about the processes within a software cycle, so that this information can be used to drive optimization as well as to inform employee appraisals. This kind of data provides a quantitative component of the overall appraisal counselling and, to some extent, gives the manager a basis for comparison between employees.
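As a sketch of the defect-accounting idea above, here is how such counts might be derived from raw defect records; the record shape and the names in it are illustrative assumptions, not taken from any particular tool:

```python
from collections import Counter

# Hypothetical defect records: (reporter, severity, status).
defects = [
    ("asha", "critical", "fixed"),
    ("asha", "trivial", "withdrawn"),
    ("ravi", "major", "fixed"),
    ("ravi", "critical", "fixed"),
]

def defect_summary(records):
    """Aggregate defects per reporter, by outcome, and count critical ones."""
    per_reporter = Counter(reporter for reporter, _, _ in records)
    by_status = Counter(status for _, _, status in records)
    critical = sum(1 for _, severity, _ in records if severity == "critical")
    return per_reporter, by_status, critical

per_reporter, by_status, critical = defect_summary(defects)
```

Even a simple aggregation like this only becomes meaningful once the underlying data has been screened for accuracy, which is exactly the point made above.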
However, such information and metrics are not easy to come by and cannot be produced on the fly. Trying to create metrics while the project is ongoing, or expecting people to do it alongside their regular jobs, will lead to sub-standard metrics or even wrong data, with too little effort spent screening the results to ensure that the data is as accurate as it can be.
Typically, a good starting point for ongoing project cycles is to hold a review at regular intervals, so that the team can contribute ideas about which metrics would be useful, and why. Another comparison point is talking to other teams to see what metrics they have found useful. And during the starting period of a project, while the discussion stage is ongoing, there needs to be a small team, or a couple of people, who can figure out the metrics that need to be created during the cycle.
There may be a metrics engine or some tool already in use in the organization, and there may be a process for adding new metrics to the engine, or for enabling existing metrics for a new project; the effort and coordination for that also needs to be planned.
The basic message of this article: get metrics for your project, and design for them rather than treating them as an afterthought.


Tuesday, August 6, 2013

What is meant by an optimal route?

- For selecting a path or route, a routing metric has to be applied to a number of routes so as to select the best out of them. 
- This best route is called the optimal route with respect to the routing metric used. 
- This routing metric is computed with the help of the routing algorithms in computer networking.
- It consists of information such as hop count, network delay, load, MTU, path cost, communication cost, reliability, and so on.
- Only the best or the optimal routes are stored in the routing tables that reside in the memory of the routers. 
- The other information is stored in either the topological or the link state databases. 
- There are many types of routing protocol and each of them has a routing metric specific to it. 
- Some external heuristic is required to be used by the multi-protocol routers for selecting between the routes determined using various routing protocols. 
For example, administrative distance is a value attributed to all routes in Cisco routers. 
- Here, a smaller distance means that the route's source is considered more reliable. 
- Host specific routes to a certain device can be set up by the local network admin. 
- This will offer more control over the usage of the network along with better overall security and permission for testing. 
- This advantage comes handy especially when it is needed to debug the routing tables and the connections. 
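The administrative-distance selection described above can be sketched as a simple choice among candidate routes; the distance values follow common Cisco defaults, and the next-hop addresses are made up:

```python
# Candidate routes to the same destination prefix, learned from
# different routing protocols. Lower administrative distance = more trusted.
routes = [
    {"protocol": "ospf",   "admin_distance": 110, "next_hop": "10.0.0.1"},
    {"protocol": "rip",    "admin_distance": 120, "next_hop": "10.0.0.2"},
    {"protocol": "static", "admin_distance": 1,   "next_hop": "10.0.0.3"},
]

# The multi-protocol router installs the route with the smallest distance.
best = min(routes, key=lambda r: r["admin_distance"])
```

Here the static route wins, which matches the convention that manually configured routes are trusted over dynamically learned ones.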

In this article we discuss optimal routes. 
- With the growing popularity of IP networks as mission-critical tools for business, the need for methods and techniques by which the network's routing posture can be monitored is increasing.
- Routing issues, or outright incorrect routing, can lead to undesirable effects on the network such as downtime, flapping, or performance degradation. 
- Route analytics are the techniques and tools used for monitoring the routing in a network. 

The performance of the network is measured using the following 2 factors:
  1. Throughput or quantity of service: This includes the amount of data that is transmitted and the time it takes to transfer it.
  2. Average packet delay or quality of service: This includes the time taken by a packet to arrive at its destination and the responsiveness of the system to the commands entered by the user.
- There is a constant tension between fairness and optimality, or, put differently, between quantity of service and quality of service. 
- To optimize throughput, the paths between the nodes have to be saturated, and the response time from source to destination must be observed. 

For finding the optimal routes, we have two types of algorithms namely:
  1. Adaptive algorithms: These algorithms are meant for networks in which the routes change dynamically. Here, the information about the route to be followed is obtained at run time itself, from adjacent routers as well as from all other routers. The routes change whenever the load changes, whenever the topology changes, and every delta T seconds.
  2. Non-adaptive algorithms: These algorithms do not base routing decisions on current measurements of traffic or topology, so measurements made under a previous condition are not consulted for the current condition. The routes thus obtained are called static routes and are computed in advance, at boot time.

Finding optimal routes relies on the principle of optimality, according to which, if an intermediate router lies on the optimal path from a source to a destination, then the portion of that path from the intermediate router to the destination is itself optimal. 
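A minimal sketch of how a routing metric and the principle of optimality come together is Dijkstra's shortest-path algorithm: once the least-cost distance to an intermediate node is fixed, every optimal path through that node reuses the same prefix. The topology and edge costs below are illustrative:

```python
import heapq

def dijkstra(graph, source):
    """Compute least-cost distances from source over a weighted graph.

    The edge weight plays the role of the routing metric; any prefix of
    an optimal path is itself optimal (the principle of optimality).
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a better distance is known
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical topology: edge weights are the routing metric (e.g. path cost).
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 5},
    "C": {"D": 1},
    "D": {},
}
dist = dijkstra(graph, "A")
```

A real router would store only the best route per destination in its routing table, exactly as computed here, keeping the rest in its topology or link-state database.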


Friday, October 19, 2012

What are the default test plans attributes? How to define new test plan attributes?


The test plan is one of the important aspects of Silk Test and is often characterized by some default attributes called the default test plan attributes. These default test plan attributes are what we are going to talk about in this article. We shall also discuss how new test plan attributes can be defined. 

What is a Test Plan?

- A test plan is the primary document that lays down the foundation for the testing to be carried out in a well-organized manner. 
- The test plan is very important for project managers, as it is the document through which testing projects can be managed well. 
- A badly drawn test plan is just the same as not having a test plan at all.
- On the other hand, an intelligently drawn plan helps in smooth execution of the test cases and analysis of the other testing activities. 
- It would not be wrong to say that the test plan is a dynamic document: it changes according to the changes that occur in the software system or application. 
- This is especially true in spiral environments, which have a tendency to change constantly. 
- Basically, the test plan provides a framework that captures the majority of the considerations made while planning. 

Attributes of Test Plan

Below mentioned are some of the attributes of a good test plan:
  1. It stands a greater probability of catching majority of the defects of an application under test or AUT.
  2. It provides greater test coverage for covering most of the test code.
  3. It stays quite flexible throughout the testing process.
  4. It is easy to be executed over and again automatically.
  5. It properly defines the tests that are to be performed.
  6. It lists out the expected results as clearly as possible.
  7. It helps in reconciliation of the defects whenever a defect is found.
  8. It helps in defining the objectives of testing.
  9. It helps in defining the testing strategy in definite terms.
  10. It helps in defining the criteria of test exit.
  11. It always stays meaningful.
  12. It never becomes redundant.
  13. It helps in identification of the risks
  14. It helps in defining the test requirements as clearly as possible.
  15. It helps in describing the test deliverable as clearly as possible.
Every test plan created is concluded with the following 3 steps:
  1. Defining metrics
  2. Conducting periodic audits as well as scheduling them
  3. Approving the test plan
In the test plan view window there is a tab titled "Attributes"; clicking it lets you see all the project attributes that have been assigned to that particular test. Attributes can be thought of as administrator-created characteristics that are applicable to the tests. 
An attribute is defined by the following items:
- Attribute: the name of the attribute
- Value: the value that has been assigned to the attribute
- Type: the attribute type
- Inheritance: whether or not the attribute is inherited

- The concept of inheritance in attributes is similar to the inheritance of success conditions and properties. 
- Attributes assigned to a parent node are inherited throughout all of its child test definitions and sub-folders. 
- To define an attribute, follow these steps:
  1. Open the test project.
  2. Click the Define Attributes menu.
  3. Click the New button.
  4. Enter the name for the new attribute.
  5. Select the attribute type.
  6. Click OK.
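As an illustration of attribute inheritance, consider the following sketch. This is only a model of the concept: Silk Test manages inheritance internally, and the class and names here are invented:

```python
# Hypothetical model of test-plan attributes with inheritance: a child
# node sees its parent's attributes unless it defines its own value.
class TestNode:
    def __init__(self, name, parent=None, attributes=None):
        self.name = name
        self.parent = parent
        self.own = attributes or {}

    def attribute(self, key):
        if key in self.own:
            return self.own[key]          # value set on this node
        if self.parent is not None:
            return self.parent.attribute(key)  # inherited from parent
        return None                       # not defined anywhere

root = TestNode("project", attributes={"Priority": "High"})
child = TestNode("login-tests", parent=root)
```

A child node that sets the same attribute would simply shadow the inherited value, which mirrors how inherited success conditions behave.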


Monday, July 16, 2012

What are the metrics that can be used during performance testing?


Performance testing is one of the most important parts of complete and effective software testing; the testing of any software system or application is incomplete without it. 
"Performance testing is the determination of how a software system or application behaves, in terms of responsiveness and stability, under particular workloads."

Apart from this, there are several other attributes that can be taken care of by performance testing, for example:
  1. Scalability
  2. Reliability
  3. Resource usage and so on.
The concept of performance testing falls under the broader concept of performance engineering, which is aimed at building performance into the architecture and design of a software system or application before the actual coding begins. 

We have so many other types of testing that together make up the complete performance testing and have been mentioned below:
  1. Load testing
  2. Stress testing
  3. Soak testing or Endurance testing
  4. Spike testing
  5. Isolation testing and
  6. Configuration testing

Metrics used during Performance Testing


Performance testing cannot be carried out on its own; it is supported by specific measures called performance metrics. In this article we shall discuss the metrics used during performance testing, one by one:

 1. Average response time: 
This is the mean of all response times (complete round-trip requests) measured up to a particular point in the test. Response times can be measured in either of the following ways:
      a) Time to last byte, or 
      b) Time to first byte

 2. Peak response time: 
This is similar to the previous metric, but represents the longest round trip at a particular point in a particular test. A high peak response time is an indication of a potential problem in the software system or application.

 3. Error rate: 
The occurrence of errors under load while processing requests is to be expected during testing. It has been noted that most errors are reported when the load on the application reaches a peak, beyond which the software system or application is unable to process further requests. The error rate gives the percentage of requests whose HTTP status codes indicate errors on the servers.

 4. Throughput: 
This performance metric measures the flow of data back and forth from the servers, in units of KB per second.

 5. Requests per second (RPS): 
This gives a measure of the requests sent to the target server, including requests for CSS style sheets, HTML pages, JavaScript libraries, Flash files, XML documents, and so on.

 6. Concurrent users: 
This performance metric is perhaps the best way of expressing the load levied on the software system or application during performance testing. This metric should not be equated with RPS, since one user can generate a high number of requests, while another user may not generate requests constantly.

 7. Cross-result graphs: 
These show the difference between pre-tuning and post-tuning tests. 
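Most of the metrics above can be computed directly from raw load-test samples. The numbers below are invented, and the formulas simply follow the definitions in the list:

```python
# Hypothetical samples from a 10-second load run.
response_times = [0.21, 0.35, 0.18, 1.40, 0.27]   # seconds per request
status_codes = [200, 200, 500, 200, 503]          # one per request
total_bytes = 512_000                             # payload moved in the run
duration = 10.0                                   # wall-clock seconds

average_response = sum(response_times) / len(response_times)
peak_response = max(response_times)
# Fraction of requests whose HTTP status code indicates a server error.
error_rate = sum(1 for c in status_codes if c >= 400) / len(status_codes)
throughput_kb_per_s = (total_bytes / 1024) / duration
requests_per_second = len(status_codes) / duration
```

Note how the single 1.40 s outlier dominates the peak response time while barely moving the average, which is why both metrics are tracked.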


Friday, July 6, 2012

Describe the concept of phase containment?


In this article we focus on an important concept, namely phase containment.

Process of Phase Containment


- The process of phase containment deals with the removal of the defects and bugs present in a software system or application while it is still in its SDLC or software development life cycle. 
- The process of phase containment prefers the early removal of bugs and defects. 
- It is named so because this process is all about containing faults in one specific phase of the software development life cycle, before they get enough time to escape and affect development in the successive phases. 

"There are two types of error. One type of the errors are the one which were introduced in the preceding phase of software development and now have accumulated in the current phase and the second types of error are the one which have been introduced in the current phase of software development itself. But the former kinds of errors are called defects and not probably errors". 

- The concept of phase containment is promoted whenever it can be related to the organization's profitability and cost.
- But in order to relate the concept to cost and profitability, the errors and defects that escaped from earlier phases of the software development life cycle and surfaced in later phases must first be identified. 
- Another thing that is required is the determination of the average cost of the defects and errors that were caught in the later phases of software development. 
- Research has shown that it becomes much more difficult to sort out errors and faults once the software product is out in the market. 

Methodologies to gain control of software product


- So many technologies and methodologies have been developed today to gain control over the quality of the software product.
- They are:
  1. Static analysis: This activity involves analyzing the program code with the purpose of finding errors prevailing in the software system and enforcing specific coding standards.
  2. Unit testing: This activity involves the developer leveraging his or her knowledge to try to break the program code.
  3. Code reviews: This activity involves taking steps to ensure the security of the software system or application and better accountability.
  4. Code-complete criteria: This step involves providing a consistent hand-off to the development team.

Metrics used in Phase Containment Process


- The phase containment process makes use of phase containment metrics.
- These metrics serve the purpose of making sure that the developers and the process are on track, i.e., whether the process is working as desired for the company or organization.
- Commonly, three types of metrics are used in the process of phase containment, namely:
  1. Trailing metric: The purpose of this metric is to find out the downstream impact of the phase containment process.
  2. Adoption metric: This phase containment metric is intended to verify whether or not the software developers are adhering to the standards of the phase containment process.
  3. Effectiveness metric: This type of phase containment metric is used to check how well the phase containment process is working and how the developers are maintaining it.
The process of phase containment is used to make sure that all aspects of quality assurance are incorporated into all phases of the software development life cycle.
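One widely used effectiveness measure is phase containment effectiveness (PCE): the fraction of a phase's defects that were caught in that same phase rather than escaping downstream. This formula is a common industry definition, adopted here as an assumption rather than something the article specifies:

```python
def containment_effectiveness(found_in_phase, escaped_to_later_phases):
    """PCE = defects found in a phase / all defects introduced in it.

    A value near 1.0 means the phase contained almost all of its own
    faults; a low value means faults are leaking into later phases.
    """
    total = found_in_phase + escaped_to_later_phases
    return found_in_phase / total if total else 1.0

# Hypothetical design phase: 18 defects caught in design reviews,
# 2 discovered later during coding or testing.
pce_design = containment_effectiveness(18, 2)
```

Tracking PCE per phase over several releases is one concrete way to relate phase containment back to cost, since escaped defects are the expensive ones.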


Friday, September 23, 2011

Estimation techniques - Problem based Estimation

In software estimation, lines of code and function point metrics can be used in two ways:
- as an estimation variable used to size each element of the software.
- as baseline metrics collected from past projects and used together with the estimation variables to develop cost and effort projections.

Lines of code and function points are different techniques, but they share some characteristics. Project planning begins with the scope of the software; the software is decomposed into problem functions, each of which is estimated individually, and then lines of code or function points are estimated for each function.

Baseline metrics are applied to the estimation variables, and cost and effort are derived. When collecting productivity metrics for projects, one should be sure to establish a taxonomy of project types; this makes it possible to compute domain-specific averages, making estimation more accurate.

LOC and FP techniques differ in how decomposition is used. When LOC is used, decomposition is essential, and it must be carried to a fairly detailed level: to get a higher level of accuracy, a high degree of partitioning is necessary.

Now considering the case of FP, the usage of decomposition is different. What is required are the five information domain characteristics and the complexity adjustment values. Once these are available, and using past data, an estimate can be generated.

Alongside, a project planner estimates a range of values using historical data; these are the optimistic, most likely, and pessimistic sizes for each function. Based on these, an expected value for the estimation variable can be calculated.
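The optimistic, most likely, and pessimistic values are typically combined with the classic beta-distribution weighting to get the expected value. The LOC figures and the productivity baseline below are invented for illustration:

```python
def expected_size(optimistic, most_likely, pessimistic):
    """Beta-distribution (PERT-style) expected value:
    EV = (opt + 4 * likely + pess) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical LOC estimates for one decomposed function.
ev_loc = expected_size(optimistic=4600, most_likely=6900, pessimistic=8600)

# Apply a baseline productivity metric from past projects to derive effort;
# 620 LOC per person-month is an assumed baseline, not a universal figure.
effort_person_months = ev_loc / 620
```

The same arithmetic applies per decomposed function, with the per-function expected values summed before the baseline metric is applied.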


Monday, September 12, 2011

How to define effective metrics? What is the use of quality metrics?

The goal of software engineering is to develop a product of high quality. A metric is a standard for measurement: a measure that captures performance, allows comparisons, and supports business strategy. Metrics derived from such measures indicate the effectiveness of the individuals and processes involved.

The use of quality metrics is to spot the performance trends, comparing alternatives and predicting the performance. However, the costs and benefits of a particular quality metric should be considered as collecting data will not necessarily result in higher performance levels.

Effective metrics should define performance in quantifiable entity and a capable system exists to measure the entity. Effective metrics allow for actionable responses if the performance is unacceptable.

Identifying effective metrics is a bit difficult. Ranges can be identified for the acceptable performance of a metric; these ranges are referred to as breakpoints when metrics are defined for services, and as targets, tolerances, or specifications in manufacturing.

Breakpoints are levels where there is a chance that the improved performance will change the behavior of the customer. A target is a desired value of a characteristic. A tolerance is an allowable deviation from target value.
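The target-and-tolerance idea can be stated as a one-line check; the numbers here are purely illustrative:

```python
def within_tolerance(measured, target, tolerance):
    """True if the measured value deviates from the target by at most
    the allowable tolerance."""
    return abs(measured - target) <= tolerance

# Hypothetical metric with target 10.0 and allowed deviation 0.5.
ok = within_tolerance(measured=9.7, target=10.0, tolerance=0.5)
bad = within_tolerance(measured=11.2, target=10.0, tolerance=0.5)
```

An actionable metric, in the article's sense, is one where a False result here has an agreed response attached to it.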


Monday, September 5, 2011

What are different web engineering project metrics?

The objective of a good web application is to deliver a combination of good content and appropriate functionality to the end user. The web engineering project metrics defined to assess its internal productivity and quality are:

- The number of static web pages provides an indication of the overall size of the application and the effort required to develop it. Static pages carry less complexity and require less effort to construct.
- The number of dynamic web pages implies higher complexity and more effort to construct. It also provides an indication of the overall size of the application and the effort required to develop it.
- The number of internal page links gives an indication of the degree of architectural coupling within the web application. The effort spent on navigation and construction increases as the number of page links increases.
- As the number of persistent data objects increases, the complexity and the effort to implement them also grow.
- As the number of external systems interfaced increases, the complexity of the system and the effort required for development also increase.
- The number of static content objects includes static text, graphics, video, animation, and audio within the application. Multiple content objects may appear on a single web page.
- The number of dynamic content objects includes objects generated based on end-user actions, covering text, graphics, video, animation, and audio within the application. Multiple content objects may appear on a single web page.
- As the number of executable functions increases, the modeling and construction effort also increases. An executable function provides a computational service to the end user, and a metric can be defined reflecting the degree of end-user customization required for the web application.

Web application metrics can be computed and correlated with measures like effort, errors and defects uncovered, models or documentation pages produced.


Thursday, September 1, 2011

What is meant by Use-case oriented metrics?

Using the use case method is not dependent on the programming language. Typically, you can use use cases as another method of estimating the size of the application; the number of use cases correlates in most cases to the lines of code (LOC) and also to the number of test cases that will have to be written for comprehensive testing of the application. Like many other estimation techniques, the use case method is used early in the software life cycle, before design, development, and testing happen.

A use case is written to capture the user interactions and functions that describe a system. Many people try to work out a standard size for a use case, but use cases can exist at different levels of abstraction. As a result, though there have been many attempts to use the use case method as an instrument for measuring the effort required to build an application, the inability to pin down the size and coverage of a use case means that not many people have been able to do this successfully.

Estimation through use case method is not a simple one; there are multiple steps required such as determining technical factors, determining environmental factors, determining use case points, determining product factors, and then overall, putting all these factors together to get use case points which translate into man-days or man-weeks.
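The arithmetic at the end of those steps typically takes the shape of the use case points (UCP) calculation: unadjusted points are scaled by a technical complexity factor and an environmental factor, and the result is converted to effort at a rate of person-hours per point. All the numbers below are illustrative assumptions:

```python
def use_case_points(uucp, tcf, ecf):
    """UCP = unadjusted use case points * technical complexity factor
    * environmental (complexity) factor."""
    return uucp * tcf * ecf

# Hypothetical project: 120 unadjusted points, neutral technical factor,
# slightly favorable environment.
ucp = use_case_points(uucp=120, tcf=1.0, ecf=0.9)

# 20 person-hours per UCP is a commonly cited default conversion rate.
effort_hours = ucp * 20
```

The conversion rate is exactly where historical data matters: teams calibrate hours-per-point from their own past projects rather than trusting a default.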


Tuesday, August 30, 2011

What are different object oriented metrics in software measurement?

Object Oriented Metrics


Lines of code and function point metrics can be used for object oriented projects, but they do not provide enough granularity for schedule and effort estimation. Some object oriented metrics are as follows:

- Number of scenario scripts
A scenario script describes the interaction between user and application. It is directly related to application size and number of test cases developed to exercise the system.

- Number of key classes
Key classes are independent components. The number of key classes is the indication of the amount of effort that is required to develop the software and it also indicates the potential amount of reuse applied during system development. The key classes are directly related to problem domain.

- Number of support classes
Support classes are not directly related to problem domain. Support classes can be developed for key class. Number of support classes indicates amount of effort required to develop software and potential amount of reuse to be applied.

- Number of subsystems
A subsystem is an aggregation of classes that supports a function visible to the end user. A schedule can be laid out in which work is partitioned by subsystem.

- Average number of support classes per key class
Estimation becomes easy and simplified if average number of support classes per key class is known.

As the database grows, the relationships between object oriented measures and project measures provide metrics for project estimation.
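As a sketch of how the average number of support classes per key class simplifies estimation: multiply the key-class count out to a total class count, then apply an effort rate per class. The multipliers here are invented, and this is a rough heuristic rather than a prescribed formula:

```python
def oo_effort(key_classes, support_per_key, days_per_class):
    """Rough class-based effort estimate:
    total classes = key classes * (1 + avg support classes per key class),
    effort = total classes * average person-days per class."""
    total_classes = key_classes * (1 + support_per_key)
    return total_classes, total_classes * days_per_class

# Hypothetical project: 20 key classes, 2.5 support classes per key
# class on average, 15 person-days per class.
total, days = oo_effort(key_classes=20, support_per_key=2.5, days_per_class=15)
```

Both multipliers are exactly the kind of baseline that the growing measurement database is meant to supply.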


What are different metrics used for software measurement?

Software measurement can be categorized in two ways:
- direct measures of software process.
- indirect measures of product.
There are many factors that can affect the software work so metrics should not be used to compare individuals or teams.

Size oriented software metrics are derived by normalizing quality and productivity measures by the size of the software produced. Lines of code is chosen as the normalization value, to develop metrics that can be compared with similar metrics from other projects. Size oriented metrics are widely used, but a debate about their validity and applicability continues.

Function oriented metrics use a measure of the functionality delivered by an application as the normalization value. The function point metric is based on characteristics of the software's information domain and complexity. Function points are language independent and are based on data that is likely to be known early in the project's evolution.

The quality of the design and the language used to implement the software define the relationship between lines of code and function points. Function point and LOC based metrics have been found to be relatively accurate predictors of software development effort and cost.
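The function point computation has a standard shape (Albrecht's formula); the weighted domain count and the adjustment ratings below are invented for illustration:

```python
def function_points(count_total, adjustment_values):
    """FP = count_total * (0.65 + 0.01 * sum(Fi)), where count_total is the
    weighted sum of the five information-domain counts and Fi are the 14
    complexity-adjustment values, each rated 0 (no influence) to 5 (essential)."""
    return count_total * (0.65 + 0.01 * sum(adjustment_values))

# Hypothetical project: weighted domain count of 320, all 14 adjustment
# factors rated 3 (average influence).
fp = function_points(320, [3] * 14)
```

Because the adjustment factor ranges from 0.65 to 1.35, the complexity ratings can swing the final count by roughly a third in either direction.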


Monday, August 29, 2011

What are different process metrics and software process improvement?

Process metrics have long term impact; their intent is to improve the process itself. These metrics are collected across all projects. To improve any process, develop metrics based on its attributes and then use these metrics as indicators, leading to a strategy for improvement.

The three factors people, product and technology are connected to a process which sits at the center. The efficiency of a software process is measured indirectly. A set of metrics is derived based on the outcomes derived from process. These outcomes include error measures that are uncovered before the release of software, defects reported by end users and other measures. The skill and motivation of the software people doing the work are the most important factors that influence software quality.

Private metrics are private to an individual and serve as an indicator for individual only. It includes defect rates by individuals and software components and the errors that are found during development. Private data can serve as an important driver as individual software engineer works to improve.

Public metrics assimilate the information that was private to individuals and teams. Calendar times, project level defect rates, errors during formal technical reviews are reviewed to uncover indicators improving team performance.

Software process metrics benefit the organization and improve its overall level of process maturity.


Sunday, August 14, 2011

What are User Interface Design and Operation oriented Metrics?

User interface design metrics are fine but above all else, be absolutely sure that your end users like the interface and are comfortable with the interactions required.
- Layout appropriateness is a design metric for human computer interface. The layout entities like graphic icons, text, menus, windows are used to assist the user.
- The cohesion metric for user interface measures the connection of on screen content to other on screen content. UI cohesion is high if data on screen belongs to single major data object. UI cohesion is low if different data are present and related to different data objects.
- The time required to achieve a scenario or operation, recover from an error, text density, number of data or content objects can be measured by direct measures of user interface interaction.

Operation oriented metrics are:
- Operation complexity: computed using conventional complexity metrics; because operations should be limited to a specific responsibility, their complexity should stay low.
- Operation size: depends on lines of code. As the number of messages sent by a single operation increases, it is likely that responsibilities have not been well allocated within the class.
- Average number of parameters per operation: the larger the number of operation parameters, the more complex the collaboration between objects.


Saturday, August 13, 2011

What are the metrics for object oriented design?

A more objective view of the characteristics of design can benefit both an experienced designer and the novice. The characteristics that can be measured when we assess an object oriented design are:

- Size which has four views: population, volume, length and functionality.
- Complexity is measured in terms of structural characteristics by checking how classes of an object oriented design are interrelated.
- Sufficiency is defined as the degree to which an abstraction possesses the features required from the point of view of current application.
- Coupling is defined as different connections between the elements of the object oriented design.
- Completeness is defined as the feature set against which we compare the abstraction or design component. It considers multiple points of view. It indirectly implies the degree to which abstraction or design component can be reused.
- Similarity is defined as the degree to which two or more classes are similar in structure, function, behavior etc.
- Volatility for object oriented design is defined as the likelihood that a change will occur.
- Cohesion is defined as the degree to which the set of properties a component possesses is part of the problem or design domain.
- Primitiveness is the degree to which the operation is not constructed out of a sequence of other operations within the class.


Saturday, August 6, 2011

What are different metrics for testing?

Software testers must rely on analysis, design, and code metrics to guide them in design and execution of test cases. Metrics for testing fall into two broad categories:
- metrics that attempt to predict the likely number of tests required at various testing levels.
- metrics that focus on test coverage for a given component.

Function based metrics can be used as a predictor of overall testing effort. Architectural design metrics provide information on the ease or difficulty associated with integration testing.

The metrics defined for object oriented design provide a general indication of the amount of testing effort required to exercise an object oriented system. Object oriented testing can be quite complex, and metrics can assist in targeting testing resources at threads, scenarios, and packages of classes that are suspect based on measured characteristics. Design metrics that have a direct influence on the testability of an object oriented system include:

- Lack of cohesion in methods (LCOM): The higher the value of LCOM, the more states must be tested.
- Percent public and protected (PAP): A high value of PAP increases the possibility of side effects among classes, because public and protected attributes lead to high coupling.
- Public access to data members (PAD): A high value of PAD increases the possibility of side effects among classes.
- Number of root classes (NOR): NOR is the count of distinct class hierarchies described in the design model. As NOR increases, testing effort increases.
- Fan-in (FIN): An indication of multiple inheritance. If FIN is greater than 1, a class inherits its attributes and operations from more than one root class.
- Number of children (NOC) and depth of inheritance tree (DIT): Superclass methods will have to be retested for each subclass.
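As a small sketch, LCOM can be computed once you know which attributes each method touches. The variant below (a simple Chidamber-Kemerer-style count of method pairs that share no attributes minus pairs that share at least one) and the example class are assumptions for illustration, not a definitive implementation:

```python
from itertools import combinations

def lcom(method_attrs):
    """LCOM sketch: count method pairs sharing no attributes (p) minus
    pairs sharing at least one (q), floored at zero."""
    p = q = 0
    for a, b in combinations(method_attrs.values(), 2):
        if set(a) & set(b):
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: two methods touch 'balance', one touches only 'log'.
metrics_class = {
    "deposit":  ["balance"],
    "withdraw": ["balance"],
    "audit":    ["log"],
}
print(lcom(metrics_class))  # 2 disjoint pairs - 1 sharing pair = 1
```

A higher result suggests the class bundles unrelated state, so more state combinations need testing.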


Friday, August 5, 2011

What are different component level design metrics?

Component-level design focuses on the internal workings of the software and includes three measures: cohesion, coupling, and complexity. These help in judging the quality of a component-level design. Once a procedural design is developed, component-level design metrics can be applied. It is possible to compute measures of the functional independence, coupling, and cohesion of a component, and to use these to assess the quality of the design.

The cohesiveness of a module can be described by a set of metrics:
- A data slice is a backward walk through the module that searches for data values that can affect the state of the module.
- Data tokens are the variables defined for a module.
- Glue tokens are the data tokens that lie on more than one data slice.
- Superglue tokens are the data tokens common to every data slice in a module.
- The stickiness of a glue token is directly proportional to the number of data slices it binds.
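Once the data slices of a module are known, glue and superglue tokens follow mechanically from the definitions above. The slices below (sets of token names for a hypothetical module) are invented for illustration:

```python
def glue_and_superglue(slices):
    """Glue tokens appear in more than one data slice;
    superglue tokens appear in every data slice."""
    glue = {t for i, s in enumerate(slices)
              for t in s
              if any(t in other
                     for j, other in enumerate(slices) if j != i)}
    superglue = set.intersection(*slices) if slices else set()
    return glue, superglue

# Hypothetical module with three output slices over its data tokens.
slices = [{"n", "total", "i"}, {"n", "count", "i"}, {"n", "avg"}]
glue, superglue = glue_and_superglue(slices)
print(sorted(glue))       # ['i', 'n']
print(sorted(superglue))  # ['n']
```

The more superglue tokens relative to all tokens, the more cohesive the module: its slices are held together by shared data.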

Coupling metrics give an indication of the connectedness of a module to other modules. The metric for module coupling encompasses data and control flow coupling, global coupling, and environmental coupling.

Complexity metrics are used to predict critical information about the reliability and maintainability of software systems from automatic analysis of the source code. They also provide feedback during a software project to help control the design activity. Cyclomatic complexity is the most widely used complexity metric.
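As a sketch of that most widely used metric, the fragment below approximates cyclomatic complexity for a Python function by counting decision points with the standard-library ast module (V(G) = decisions + 1). The classify function is a made-up example, and the set of node types counted is a simplification:

```python
import ast

# Node types treated as decision points (an approximation).
DECISIONS = (ast.If, ast.For, ast.While, ast.IfExp,
             ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source):
    """Approximate V(G) = number of decision points + 1."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS)
                   for node in ast.walk(tree))

src = """
def classify(x):
    if x < 0:
        return "negative"
    for d in range(2, x):
        if x % d == 0:
            return "composite"
    return "prime-ish"
"""
print(cyclomatic_complexity(src))  # 1 + 3 decision points = 4
```

A value of 4 means four linearly independent paths, so at least four test cases are needed for full branch-level exercise of the function.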


Thursday, August 4, 2011

What is the framework for product metrics? What are the measurement principles?

A fundamental framework and a set of basic principles for the measurement of software product metrics should be established. In software engineering terms:

- A measure provides a quantitative indication of the extent, amount, dimension, or size of an attribute of a product or process. A measure is established when a single data point has been collected.
- Measurement is the act of determining a measure. Measurement occurs when one or more data points are collected.
- A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. It relates individual measures in some way.
- An indicator is a metric, or combination of metrics, that provides insight into the software process, the project, or the product itself.

There is a need to measure and control software complexity. It should be possible to develop measures of different attributes; these measures and metrics can then be used as independent indicators of the quality of analysis and design models.

Product metrics assist in the evaluation of analysis and design models, give an indication of complexity, and facilitate the design of more effective testing. The steps of an effective measurement process are:
- Formulation: the derivation of software measures and metrics.
- Collection: the mechanism used to accumulate the data required to derive the metrics.
- Analysis: the computation of the metrics.
- Interpretation: the evaluation of the metrics.
- Feedback: the recommendations derived after interpretation.
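As an illustrative sketch, the five steps above can be strung together for one simple metric, defect density in defects per KLOC. All component names, figures, and the threshold below are hypothetical:

```python
def collect():
    # Collection: accumulate raw measures per component.
    # Each row: (component, defects found, lines of code).
    return [("parser", 12, 3400), ("ui", 30, 5200)]

def analyse(rows):
    # Analysis: compute the metric (defects per KLOC) from the measures.
    return {name: defects / (loc / 1000) for name, defects, loc in rows}

def interpret(densities, threshold=5.0):
    # Interpretation: evaluate the metric against a quality threshold.
    return [name for name, d in densities.items() if d > threshold]

def feedback(flagged):
    # Feedback: recommendation derived from the interpretation.
    return [f"schedule extra review for {name}" for name in flagged]

densities = analyse(collect())
print(feedback(interpret(densities)))  # only 'ui' exceeds 5 defects/KLOC
```

Formulation is the choice of defect density itself; the remaining four steps are the four functions, run in order.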

Metrics characterization and validation requires that:
- A metric should have desirable mathematical properties.
- The value of a metric should move in the same direction as the software characteristic it represents: increase when a positive trait occurs and decrease when an undesirable trait is encountered.
- A metric should be validated empirically.


Friday, June 3, 2011

Some best practices that contribute to improved software testing Part III

There is always a search for best practices going on; some are well known and some hidden. Testing does not stand alone: it is intimately dependent on development practices. These practices have come from many sources and can be divided into three parts:
- Basic Practices
- Foundation Practices
- Incremental Practices

The incremental practices include:
- Teaming Testers with Developers
This practice should define the kinds of teaming that are beneficial and the environments in which they are employed. It should be more than just a concept.
- Code coverage
Code coverage is a numerical metric that measures which elements of the code are exercised by tests. This practice should include information about the tools and methods used to employ code coverage and track its results.
- Automated Environment Generator
Setting up test environments to execute test cases is one of the most difficult tasks. This practice should capture the issues, tools, and techniques associated with setting up the environment, tearing it down, and automatically running test cases.
- Testing to help ship on demand
The testing process should be viewed as one that enables changes to occur late and handles market pressures, while still not breaking the product or the ship schedule. This practice should identify how to make this concept work in organizations.
- State transition diagrams
State transition diagrams are used to capture the functional operations of an application and allow test cases to be created automatically. This practice has more than one application, and one needs to capture the tools, methods, and uses.
- Memory Resource Failure Simulation
This practice addresses loss of memory due to poor management or lack of garbage collection. It should develop methods and tools for use on different platforms and in different language environments.
- Statistical Testing
The concept of statistical testing is to use software testing as a means to assess the reliability of software, as opposed to a debugging process. It requires exercising the software along an operational profile and measuring interfailure times, which are then used to estimate reliability.
- Semiformal Methods
A semiformal method is one where the specifications that are captured, perhaps as state transition diagrams or tables, can be used for test generation as well.
- Check-in tests for code
Check-in tests couple an automatic test program with the change control system, so the chances of new code breaking the build are minimized.
- Minimizing regression test cases
To minimize regression tests, several methods exist; one looks at code coverage and distills the test cases down to a minimal set. Sometimes this confuses a structural metric with a functional test.
- Instrumented versions for MTTF
Mean time to failure (MTTF) can be measured if failures are recorded and returned to the vendor. This enhances product quality in a way that is meaningful to the user, and it also captures first-failure data, which benefits diagnosis and problem determination.
- Benchmark Trends
This practice could be initiated by benchmarking, and then advanced to include a large pool of customers and competitors.
- Bug Bounties
These are initiatives that charge the organization with a focused effort on detecting software bugs.
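Two of the incremental practices above lend themselves to small sketches: distilling a regression suite from per-test coverage data (here via a greedy set-cover heuristic, which gives a small set but not a guaranteed minimum), and computing MTTF from a recorded failure log. Both the coverage sets and the failure timestamps below are hypothetical:

```python
def minimize_suite(coverage):
    """Greedy set cover: repeatedly keep the test that covers the most
    not-yet-covered code elements."""
    remaining = set().union(*coverage.values())
    kept = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break  # leftover elements are uncovered by any test
        kept.append(best)
        remaining -= coverage[best]
    return kept

def mttf(failure_times):
    """Mean time to failure: average the gaps between recorded failures."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical per-test statement coverage.
coverage = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {4, 5, 6},
    "t4": {1, 6},
}
print(minimize_suite(coverage))     # ['t1', 't3'] still covers 1-6
print(mttf([0, 40, 90, 160, 200]))  # 200 hours over 4 intervals = 50.0
```

Note the caveat from the list above: the distilled suite preserves structural coverage, not necessarily every functional scenario the dropped tests exercised.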


Tuesday, May 24, 2011

What are different approaches for software test estimation?

The best approach to software test estimation depends highly on the particular organization and project, and on the experience of the personnel involved. Consider two projects of the same size and complexity: one is life-critical medical equipment software, the other a low-cost computer game. The appropriate test effort for the medical equipment software is very large compared to that for the game.

Some approaches that can be considered are:
- METRICS BASED APPROACH
This approach focuses on collecting data from the organization's various projects, which can then be used for future test project planning. The expected required test time can be adjusted based on these metrics or other available information.
- IMPLICIT RISK CONTEXT APPROACH
This approach relies on the implicit use of risk context by a QA manager or project manager, in combination with past experience, to choose the level of resources to allocate to testing. It is an intuitive guess based on experience.
- ITERATIVE APPROACH
This approach starts with an initial rough estimate. A refined estimate is made once testing begins and a small percentage of the first estimate's work is done. The test plans can be refactored and a new estimate made; the cycle repeats as necessary.
- TEST WORK BREAKDOWN APPROACH
This approach focuses on breaking the expected testing work into smaller tasks for which estimates can be made with reasonable accuracy. One point to keep in mind is that this assumes an accurate and predictable breakdown of testing tasks is possible.
- PERCENTAGE OF DEVELOPMENT APPROACH
This approach bases the testing estimate on the estimated programming effort. This method is affected by project-to-project variations in risk, personnel, application type, and complexity level.
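A minimal sketch of the metrics-based approach, assuming historical (development hours, testing hours) pairs are available from past projects; every number and the risk multiplier below are invented for illustration:

```python
def estimate_test_effort(history, planned_dev_hours, risk_factor=1.0):
    """Metrics-based estimate: average test-to-development ratio from
    past projects, scaled by the new project's development estimate
    and a risk multiplier (e.g. >1 for life-critical software)."""
    ratios = [test / dev for dev, test in history]
    avg_ratio = sum(ratios) / len(ratios)
    return planned_dev_hours * avg_ratio * risk_factor

# Hypothetical (dev_hours, test_hours) pairs from three past projects.
history = [(1000, 400), (800, 360), (1200, 420)]
print(round(estimate_test_effort(history, 900, risk_factor=1.2), 1))  # 432.0
```

The risk factor is where the implicit-risk-context idea re-enters: the same historical ratio yields a much larger estimate for the medical-equipment project than for the game.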


Tuesday, April 26, 2011

What are steps involved in deriving test cases? What are Validation, Alpha, Beta testing? What are test metrics?

The steps in deriving the test cases using use cases are:
- Using the RTM, the use cases are prioritized. Importance is gauged by the frequency with which each function of the system is used.
- Use case scenarios are developed for each use case. A detailed description of each use case scenario can be very helpful at a later stage.
- For each scenario, take at least one test case and identify the conditions that will make it execute.
- Data values are determined for each test case.
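The last two steps above can be sketched for a hypothetical "withdraw cash" scenario; the withdraw function, the conditions, and the data values are all invented for illustration:

```python
def withdraw(balance, amount):
    """Hypothetical system function under test."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

# Each row: (condition exercised, balance, amount, expected new balance)
TEST_CASES = [
    ("normal withdrawal",       100, 40, 60),
    ("withdraw entire balance", 100, 100, 0),
]

def run_cases():
    results = []
    for condition, balance, amount, expected in TEST_CASES:
        results.append((condition, withdraw(balance, amount) == expected))
    return results

print(run_cases())  # every case should report True
```

Each row records the scenario condition alongside its concrete data values, which is exactly what the derivation steps ask for.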

After system testing culminates, validation testing is performed, consisting of a series of black-box tests. It focuses on user-visible actions and user-recognizable output.
Alpha and beta testing are a series of acceptance tests. Alpha testing is performed in a controlled environment, normally at the developer's site; developers record all errors and usage problems while end users use the system. Beta testing is done at the customer's site, with developers not present; end users record all errors and usage problems.

The amount of testing effort needed for object-oriented software can be indicated by the metrics used for object-oriented design quality. These metrics are:
- Lack of Cohesion in Methods (LCOM)
- Percent Public and Protected (PAP)
- Public Access To Data Members (PAD)
- Number of Root Classes (NOR)
- Number of Children (NOC) and Depth of the Inheritance Tree (DIT)

