

Showing posts with label Defect Metrics. Show all posts

Wednesday, April 3, 2019

Taking the time to define and design metrics

I recall my initial years in software products, when there was less of a focus on metrics. In some projects there was some accounting of defects and how they were handled, and the daily defect count was kept on a whiteboard, but that was about the extent of it. Over time, however, this has changed. Software development organizations have come to realize that there is a lot of optimization that can be done, such as in these areas:
1. Defect accounting - How many defects are generated by each team and each team member, how many of these defects move towards being fixed vs. being withdrawn, how many people generate defects that are critical vs. trivial, and so on.
2. Coding work - Efficiency, code review records, number of lines of code written, and so on.
You get the idea: there are a number of ways in which organizations try to gather information about the processes within a software cycle, so that it can be used to drive optimization as well as to inform employee appraisals. This kind of data provides a quantitative component of the overall appraisal counselling and, to some extent, gives the manager a basis for comparing employees.
However, such information and metrics are not easy to come by and cannot be produced on the fly. Trying to create metrics while the project is ongoing, or expecting people to do it alongside their regular jobs, leads to sub-standard metrics or even wrong data, with not enough effort left over to screen the resulting data and ensure it is as accurate as it can be.
Typically, a good starting point for ongoing project cycles is to hold a review at regular intervals so that the team can weigh in on which metrics would be useful, and why. Another comparison point is talking to other teams to see what metrics they have found useful. And at the start of a project, while it is still in the discussion stage, there should be a small team or a couple of people who can work out which metrics need to be captured during the cycle.
There may be a metrics engine or similar tool in use in the organization, along with a process for adding new metrics to the engine, or for enabling existing metrics for a new project; the effort and coordination for that also needs to be planned.
The basic concept of this article is this: get metrics for your project, and design for them rather than treating them as an after-thought.
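The kind of defect accounting described above can be sketched with a few counters. This is a minimal illustration only; the defect records, team names, and field layout here are hypothetical, just to show the shape of the data:

```python
from collections import Counter

# Hypothetical defect records: (team, severity, status) tuples, as might
# be exported from a defect tracker.
defects = [
    ("payments", "critical", "fixed"),
    ("payments", "trivial", "withdrawn"),
    ("search", "critical", "fixed"),
    ("search", "major", "fixed"),
]

# Defects raised per team.
per_team = Counter(team for team, _, _ in defects)

# How many defects moved towards being fixed vs. being withdrawn.
by_status = Counter(status for _, _, status in defects)

# Critical vs. trivial split.
by_severity = Counter(sev for _, sev, _ in defects)

print(per_team)     # Counter({'payments': 2, 'search': 2})
print(by_status)    # Counter({'fixed': 3, 'withdrawn': 1})
```

Even a small aggregation like this answers the questions in point 1 above, once the records are screened for accuracy.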


Tuesday, January 18, 2011

Software Six Sigma for Software Engineering

Software Six Sigma is a strategy to achieve and sustain continuous improvement in the software development process and in quality management. It uses data and statistical analysis to measure and improve a company's performance by eliminating defects in manufacturing and service-related processes.

ATTRIBUTES OF SIX SIGMA


- genuine metric data.
- accurate planning.
- real-time analysis and decision support through the use of statistical tools.
- high-quality products.
- quantified software improvement costs and benefits.

STEPS IN SIX SIGMA METHODOLOGY


- Customer requirements and project goals are defined via well-defined methods.
- Quality performance is determined by measuring existing process and its output.
- Analyzing the defect metrics.
- Process improvement is done by eliminating the root causes of defects.
- Process control ensures that future changes will not reintroduce the causes of defects.
These steps are referred to as the DMAIC (define, measure, analyze, improve, and control) method.

- Design the process to avoid the root causes of defects and to meet customer requirements.
- Verify the process model will avoid defects and meet customer requirements.
This variation is called the DMADV (define, measure, analyze, design, and verify) method.


Friday, December 24, 2010

What is a Metric? What are different metrics used for testing?

A metric is a measure to quantify software, software development resources, and/or the software development process. A metric can quantify any of the following factors:
- Schedule
- Work Effort
- Product Size
- Project Status
- Quality Performance
Metrics enable estimation of future work. In testing, for example, deciding whether the product is fit for shipment or delivery depends on the rate at which defects are found and fixed; the count of defects found and fixed is one kind of metric. It is beneficial to classify metrics according to their usage.
Defects are analyzed to identify the major causes of defects and the phase that introduces the most defects. This can be achieved by performing Pareto analysis of defect causes and of defect-introduction phases. The main requirement for any of these analyses is software defect metrics.
A metric is typically represented as a percentage. It is calculated at stage completion or project completion, from bug reports and peer review reports.

Few of the Defect Metrics are:



- Defect Density: (Number of defects reported by SQA + Number of defects reported by peer review)/ Actual Size.
The size can be in KLOC, SLOC, or function points, depending on the method the organization uses to measure the size of the software product. SQA is considered part of the software testing team.
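As a minimal sketch of the calculation (the function name and example numbers are illustrative, not from any real project):

```python
def defect_density(defects_sqa, defects_peer_review, actual_size):
    # (Number of defects reported by SQA + number reported by peer review)
    # divided by actual size (in KLOC, SLOC, or function points).
    return (defects_sqa + defects_peer_review) / actual_size

# e.g. 30 SQA defects + 10 peer-review defects in a 20 KLOC product:
print(defect_density(30, 10, 20))  # 2.0 defects per KLOC
```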

- Test Effectiveness: t/(t+Uat), where t = total number of defects reported during testing and Uat = total number of defects reported during user acceptance testing. User acceptance testing is generally carried out against the acceptance test criteria, according to the acceptance test plan.

- Defect Removal Efficiency:
(Total number of defects removed / Total number of defects injected) * 100, at various stages of the SDLC. This metric indicates the effectiveness of defect identification and removal at each stage for a given project.

Requirements: DRE = [(Requirements defects corrected during requirements phase)/(Requirement Defects injected during requirements phase)] * 100
Design: DRE = [(Design defects corrected during design phase)/(Defects identified during requirements phase + Defects injected during design phase)] * 100
Code: DRE = [(Code defects corrected during coding phase)/(Defects identified during requirements phase + Defects identified during design phase + Defects injected during coding phase)] * 100
Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] * 100
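The phase-wise and overall formulas all reduce to the same ratio; a small sketch with made-up counts:

```python
def dre(defects_removed, defects_injected):
    # Defect Removal Efficiency as a percentage:
    # (defects removed / defects injected) * 100.
    return (defects_removed / defects_injected) * 100

# Requirements phase: 40 of the 50 defects injected in that phase
# are corrected within the phase.
print(dre(40, 50))   # 80.0

# Overall: 180 defects corrected before delivery, out of 200 detected
# before and after delivery.
print(dre(180, 200)) # 90.0
```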

- Defect Distribution: Percentage of total defects distributed across requirements analysis, design reviews, code reviews, unit tests, integration tests, system tests, user acceptance tests, and reviews by project leads and project managers.
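A sketch of the distribution calculation; the stage names and counts below are hypothetical:

```python
def defect_distribution(stage_counts):
    # Percentage of total defects attributed to each stage.
    total = sum(stage_counts.values())
    return {stage: 100 * count / total
            for stage, count in stage_counts.items()}

counts = {"requirements analysis": 10, "design reviews": 15,
          "code reviews": 25, "system tests": 40,
          "user acceptance tests": 10}
dist = defect_distribution(counts)
print(dist["system tests"])  # 40.0
```

A distribution skewed towards late stages (system tests, UAT) usually suggests that earlier reviews are letting defects escape.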

