

Showing posts with label Quality metrics. Show all posts

Monday, September 12, 2011

How to define effective metrics? What is the use of quality metrics?

The goal of software engineering is to develop a product of high quality. A metric is a standard of measurement: a measure that captures performance, allows comparisons, and supports business strategy. Metrics derived from such measures indicate the effectiveness of individuals and processes.

Quality metrics are used to spot performance trends, compare alternatives, and predict performance. However, the costs and benefits of a particular quality metric should be weighed, since collecting data does not by itself raise performance levels.

Effective metrics define performance as a quantifiable entity for which a capable measurement system exists, and they allow actionable responses when performance is unacceptable.

Identifying effective metrics is difficult. Ranges of acceptable performance can be identified for a metric; these are referred to as breakpoints when metrics are defined for services, and as targets, tolerances, or specifications in manufacturing.

Breakpoints are performance levels at which improved performance is likely to change customer behavior. A target is the desired value of a characteristic, and a tolerance is the allowable deviation from that target.
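The target-and-tolerance idea above can be sketched in a few lines. This is a hypothetical illustration; the function name and the response-time numbers are invented for the example.

```python
# Hypothetical sketch: checking a measured metric value against a target
# and an allowable tolerance, as described above.

def within_tolerance(measured: float, target: float, tolerance: float) -> bool:
    """Return True if the measured value deviates from the target
    by no more than the allowed tolerance."""
    return abs(measured - target) <= tolerance

# Example: a response-time metric with a 200 ms target and 50 ms tolerance.
print(within_tolerance(measured=230.0, target=200.0, tolerance=50.0))  # True
print(within_tolerance(measured=260.0, target=200.0, tolerance=50.0))  # False
```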


Wednesday, December 29, 2010

How are the metrics determined for your application?

The objective of metrics is not only to measure but also to understand progress toward organizational goals. The parameters for determining the metrics for an application are:

- Duration.
- Complexity.
- Technology Constraints.
- Previous experience in same technology.
- Business domain.
- Clarity of the scope of the project.

One interesting and useful approach to arriving at suitable metrics is the Goal-Question-Metric (GQM) technique.
The GQM model consists of three layers: a goal, a set of questions, and a set of corresponding metrics. It is a hierarchical structure starting with a goal (specifying the purpose of measurement, the object to be measured, the issue to be measured, and the viewpoint from which the measure is taken).
The goal is refined into several questions that break the issue down into its major components. Each question is then refined into metrics, some objective and some subjective. The same metric can be used to answer different questions under the same goal. Several GQM models can also share questions and metrics, ensuring that when the measure is actually taken, the different viewpoints are accounted for correctly.
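The goal-question-metric hierarchy described above can be represented as a simple tree. The goal, questions, and metric names below are invented examples, not a prescribed GQM model.

```python
# Illustrative sketch of the Goal-Question-Metric hierarchy.
# All goal/question/metric content here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)  # metric names answering this question

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

goal = Goal(
    purpose="Improve reliability of release builds (viewpoint: QA team)",
    questions=[
        Question("How many defects escape to the customer?",
                 metrics=["defects found post-delivery", "overall test effectiveness"]),
        Question("How stable are the requirements?",
                 metrics=["requirements stability index"]),
    ],
)

# The same metric may appear under more than one question of the same goal.
for q in goal.questions:
    print(q.text, "->", q.metrics)
```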

Metrics are determined once the requirements are understood at a high level; at this stage the team size and project size must be known to an extent, so that the project is at a "defined" stage.


Tuesday, December 28, 2010

What are different types of general metrics? Continued...

- Design To Requirements Traceability
This metric compares the number of design elements matching requirements with the number of design elements not matching requirements. It is calculated at stage completion from the software requirements specification and the detail design.

Formula:
Number of design elements.
Number of design elements matching requirements.
Number of design elements not matching requirements.

- Requirements to Test case Traceability
This metric compares the number of requirements tested with the number of requirements not tested. It is calculated at stage completion from the software requirements specification, detail design, and test case specification.

Formula:
Number of requirements.
Number of requirements tested.
Number of requirements not tested.

- Test cases to Requirements Traceability
This metric compares the number of test cases matching requirements with the number of test cases not matching requirements. It is calculated at stage completion from the software requirements specification and test case specification.

Formula:
Number of requirements.
Number of test cases with matching requirements.
Number of test cases not matching requirements.
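The traceability counts above reduce to simple set operations over a trace matrix. The requirement IDs and test-case mapping below are invented for illustration.

```python
# A minimal sketch of requirements-to-test-case traceability counts.
# Requirement IDs and the test-case mapping are hypothetical.

requirements = {"R1", "R2", "R3", "R4"}
# Mapping from test case ID to the requirement it claims to cover.
test_to_requirement = {"T1": "R1", "T2": "R2", "T3": "R2", "T4": "R9"}

covered = requirements & set(test_to_requirement.values())
untested = requirements - covered
# Test cases whose referenced requirement does not exist in the baseline.
unmatched_tests = {t for t, r in test_to_requirement.items()
                   if r not in requirements}

print(len(requirements))     # number of requirements: 4
print(len(covered))          # requirements tested: 2
print(len(untested))         # requirements not tested: 2
print(len(unmatched_tests))  # test cases not matching requirements: 1
```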

- Number of defects in coding found during testing, by severity
This metric provides an analysis of the number of defects by severity. It is calculated at stage completion from the bug report.

Formula:
Number of defects.
Number of defects of low severity.
Number of defects of medium severity.
Number of defects of high severity.

- Defects - state of origin, detection, removal
This metric provides an analysis of the number of defects by stage of origin, detection, and removal. It is calculated at stage completion from the bug report.

Formula:
Number of defects.
Stage of origin.
Stage of detection.
Stage of removal.

- Defect Density
This metric relates the number of defects to the size of the work product. It is calculated at stage completion from the defects list and bug report.
Formula:
Defect Density = [Total number of defects / Size (FP or KLOC)] * 100
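The defect-density formula above is a straightforward ratio. The defect count and KLOC figure below are illustrative, not from any real project.

```python
# Sketch of the defect-density formula above; numbers are invented.

def defect_density(total_defects: int, size: float) -> float:
    """Defects per unit size (FP or KLOC), scaled by 100 as in the
    formula above."""
    return (total_defects / size) * 100

# e.g. 12 defects in a 30 KLOC work product
print(defect_density(12, 30.0))  # 40.0
```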


Monday, December 27, 2010

What are different types of general metrics? Continued...

- Review Effectiveness
This metric indicates the effectiveness of the review process. It is calculated at the completion of reviews or at the completion of the testing stage, from the peer review report, peer review defect list, and bugs reported by testing.
Formula:
Review Effectiveness = [(Number of defects found by reviews) / (Number of defects found by reviews + Number of defects found by testing)] * 100
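The review-effectiveness formula above can be sketched directly; the defect counts in the example are invented.

```python
# Sketch of the review-effectiveness formula; counts are hypothetical.

def review_effectiveness(defects_by_review: int, defects_by_testing: int) -> float:
    """Percentage of all found defects that were caught by reviews."""
    total = defects_by_review + defects_by_testing
    return (defects_by_review / total) * 100

# e.g. reviews found 30 defects, testing found another 20
print(review_effectiveness(30, 20))  # 60.0
```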

- Total number of defects found by reviews
This metric indicates the total number of defects identified by the review process. The defects are further categorized as high, medium, or low. It is calculated at completion of reviews from the peer review report and peer review defect list.

Formula: Total number of defects identified in the project.

- Defects vs. Review Effort - Review Yield
This metric relates the defects found to the review effort expended in each stage. It is calculated at completion of reviews from the peer review report and peer review defect list.

Formula: Defects / Review Effort

- Requirements Stability Index (RSI)
This metric gives the stability of the requirements over a period of time, after the requirements have been mutually agreed upon and baselined between the company and the client. It is calculated at stage completion and project completion, from the change request and software requirements specification.

Formula:
RSI = 100 * [(Number of baselined requirements) - (Number of requirements changed after baselining)] / (Number of baselined requirements)

- Change Requests by State
This metric provides an analysis of the state of the change requests against the requirements. It is calculated at stage completion from the change request and software requirements specification.
Formula:
Number of accepted requirements, Number of rejected requirements, Number of postponed requirements.

- Requirements to Design Traceability
This metric compares the number of requirements designed with the number of requirements not designed. It is calculated at stage completion from the software requirements specification and detail design.
Formula:
Total number of requirements, Number of requirements designed, Number of requirements not designed.


Sunday, December 26, 2010

What are different types of general metrics? Continued...

- Overall Test Effectiveness
This metric indicates the effectiveness of the testing process in identifying defects for a given project during the testing stage. It is calculated monthly and after build completion or project completion, from test reports and customer-identified defects.

Overall Test Effectiveness (OTE) = [(Number of defects found during testing) / (Total number of defects found during testing + Number of defects found post-delivery)] * 100

- Effort Variance (EV)
This metric gives the variation of actual effort vs. estimated effort. It is calculated for each project stage, at stage completion as identified in the SPP, from the estimation sheets (estimated values in person hours for each activity within a given stage) and the actual hours worked in person hours.

EV = [(Actual person hours - Estimated person hours)/Estimated person hours] * 100

- Cost Variance (CV)
This metric gives the variation of actual cost vs. estimated cost. It is calculated for each project stage, at stage completion, from the estimation sheets (estimated values in dollars or rupees for each activity within that stage) and the actual cost incurred.

CV = [(Actual Cost-Estimated Cost)/Estimated Cost] * 100

- Size Variance
This metric gives the variation of actual size vs. estimated size. It is calculated at stage and project completion, from the estimation sheets (estimated values in function points or KLOC) and the actual size.

Size Variance = [(Actual Size-Estimated Size)/Estimated Size] * 100
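The three variance metrics above (EV, CV, Size Variance) share one formula shape, so a single helper covers all of them. The function name and the example figures are hypothetical.

```python
# One sketch for EV, CV, and Size Variance, which all use
# (Actual - Estimated) / Estimated * 100. Numbers are invented.

def variance_pct(actual: float, estimated: float) -> float:
    """Percentage deviation of actual from estimated."""
    return (actual - estimated) / estimated * 100

# Effort example: 550 actual vs. 500 estimated person hours -> 10% over.
print(variance_pct(actual=550, estimated=500))   # 10.0
# Cost example: 450 actual vs. 500 estimated -> 10% under.
print(variance_pct(actual=450, estimated=500))   # -10.0
```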

- Productivity on Review Preparation - Technical
This metric indicates the effort spent on preparation for review, calculated separately for each language used in the project. It is calculated monthly or after build completion, from the peer review report.

For every language used, calculate
(KLOC or FP) per hour, per language, where language is C, C++, Java, XML, etc.

- Number of defects found per review meeting
This metric indicates the number of defects found during review meetings across the various stages of the project. It is calculated monthly or after the completion of reviews, from the peer review report and peer review defect list.

Formula : Number of defects/Review Meeting

- Review Team Efficiency (Review Team Size vs. Defects Trend)
This metric relates review team size to the defects trend, which helps determine the efficiency of the review team. It is calculated monthly and at completion of reviews, from the peer review report and peer review defect list.

Formula : Review team size to the defects trend.


Saturday, December 25, 2010

What are different types of metrics?

Software Process Metrics are measures which provide information about the performance of the development process itself. The purposes of software process metrics are:
- to provide an indicator of the ultimate quality of the software being produced.
- to assist the organization in improving its development process by highlighting areas of inefficiency or error-prone areas of the process.

Software Product Metrics are measures of some attributes of the software product. The purpose of software product metrics is to assess the quality of the output.

What are the most general metrics?


Requirements Management
Metrics Collected:
- requirements by state - accepted, rejected, postponed.
- number of base lined requirements.
- number of requirements modified after base lining.
Derived Metrics:
- Requirements Stability Index (RSI)
- Requirements to Design Traceability

Project Management
Testing and Review
Metrics Collected:
- Number of defects found by reviews.
- Number of defects found by testing.
- Number of defects found by client.
- Total number of defects found by reviews.
Derived Metrics:
- Overall review effectiveness (ORE)
- Overall test effectiveness.

Peer Reviews
Metrics Collected:
- KLOC/FP per person hour for preparation.
- KLOC/FP per person hour for review meeting.
- Number of pages/hour reviewed during preparation.
- Average number of defects found by Reviewer during preparation.
- Number of pages/hour reviewed during review meeting.
- Average number of defects found by Reviewer during review meeting.
- Review team size vs defects.
- Review speed vs defects.
- Major defects found during review meeting.
- Defects vs Review Effort.
Derived Metrics:
- Review effectiveness.
- Total number of defects found by reviews for a project.


Thursday, June 10, 2010

Metrics in Software Development - Quality metrics

Software Quality Metrics focus on the process, project, and product. By analyzing the metrics, the organization can take corrective action to fix those areas in the process, project, or product which are the cause of the software defects.
The quality of the product is ensured by the following metrics:
- Number of defects found per KDSI (also known as defect density).
- Number of changes requested by the customer after the software is delivered.
- MTBF (Mean Time Between Failures).
- MTTR (Mean Time To Repair), i.e. the average time required to remove a defect after it has been detected.
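MTBF and MTTR can be computed from a log of failure and repair times. The log below (hours since system start) and the pairing convention are invented for illustration.

```python
# Illustrative MTBF/MTTR computation from a hypothetical log of
# (failure_time, repair_completed_time) pairs, in hours since start.

failures = [(100.0, 102.0), (250.0, 251.0), (400.0, 403.0)]

# MTTR: average time to repair after a failure is detected.
mttr = sum(end - start for start, end in failures) / len(failures)

# MTBF: average operating time between successive failures
# (uptime from the end of one repair to the next failure).
uptimes = [failures[i + 1][0] - failures[i][1] for i in range(len(failures) - 1)]
mtbf = sum(uptimes) / len(uptimes)

print(round(mttr, 2))  # 2.0
print(round(mtbf, 2))  # 148.5
```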

Common software metrics include bugs per line of code, code coverage, cohesion, coupling, cyclomatic complexity, function point analysis, number of classes and interfaces, number of lines of customer requirements, order of growth, source lines of code, and Robert Cecil Martin's software package metrics.

To measure the quality of the output of a development phase, the SQA team has to define the metrics. For instance, during the requirements engineering phase, the SRS document is written. Then design, implementation, and testing are done. While doing the design or implementation, or while carrying out the testing, the SRS document may have to be changed. The number of changes made to the SRS document can be a metric to measure the quality of the requirements engineering process.
Although there are many measures of software quality, correctness, maintainability, integrity and usability provide useful insight.

