
Wednesday, April 3, 2019

Taking the time to define and design metrics

I recall my initial years in software products, when there was less focus on metrics. In some projects, defects were accounted for and handled, and the daily defect count was tracked on a whiteboard, but that was about the extent of it. Over time, however, this has changed. Software development organizations have come to realize that there is a lot of optimization possible in areas such as these:
1. Defect accounting - How many defects are generated by each team and each team member, how many of those defects get fixed versus withdrawn, how many people generate defects that are critical versus trivial, and so on.
2. Coding work - Efficiency, code review records, the number of lines of code written, and so on.
You get the idea: there are a number of ways in which organizations try to gather information about the processes within a software cycle, so that this information can be used to drive optimization, as well as to feed into employee appraisals. This kind of data provides a quantitative input to the overall appraisal counselling and, to some extent, lets the manager compare employees.
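As a rough sketch of the defect accounting described above, the snippet below (in Python; the record fields and status names are assumptions for illustration, not any particular bug tracker's schema) tallies defects per person and splits them into fixed versus withdrawn:

from collections import Counter, defaultdict

# Hypothetical defect records; a real project would pull these from its bug tracker.
defects = [
    {"author": "asha", "severity": "critical", "status": "fixed"},
    {"author": "asha", "severity": "trivial", "status": "withdrawn"},
    {"author": "ravi", "severity": "critical", "status": "fixed"},
    {"author": "ravi", "severity": "major", "status": "fixed"},
]

per_author = Counter(d["author"] for d in defects)   # defects generated per person
by_status = defaultdict(Counter)
for d in defects:
    by_status[d["status"]][d["severity"]] += 1       # fixed vs. withdrawn, split by severity

print(per_author)                                    # Counter({'asha': 2, 'ravi': 2})
print(sum(by_status["fixed"].values()),              # 3 fixed
      sum(by_status["withdrawn"].values()))          # 1 withdrawn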
However, such information and metrics are not easy to come by and cannot be generated on the fly. Trying to create metrics while the project is already ongoing, or expecting people to do it alongside their regular jobs, will lead to sub-standard metrics or even wrong data, with not enough effort spent screening the resulting data to ensure it is as accurate as it can be.
Typically, a good starting point for ongoing project cycles is to do a review at regular intervals so that the team can contribute to deciding what metrics would be useful, and why. Another useful comparison point is to talk to other teams and see what metrics they have found valuable. And during the starting period of a project, when the planning discussions are ongoing, there should be a small team, or a couple of people, who can figure out the metrics that need to be captured during the cycle.
There may be a metrics engine or some tool already in use in the organization, and there may be a process for adding new metrics to the engine, or for enabling existing metrics for a new project; the effort and coordination for that also needs to be planned.
Basic concept of this article -> Get metrics for your project, and design for them rather than treating them as an after-thought.


Monday, April 18, 2011

What is Deployment Level Design? What are Design Metrics?

DEPLOYMENT LEVEL DESIGN


The deployment-level design creates a model that shows the physical architecture of the hardware and software of the system. The deployment diagram is made up of nodes and communication associations: nodes represent the computers, and the communication associations show network connectivity. To develop the deployment-level design, distribute the software components identified in the component-level design to the computer nodes where they will reside.
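As a small illustration of that last step, here is a minimal sketch in Python (the node and component names are invented for the example) that records which software component is placed on which node, and which nodes have communication associations:

# Hypothetical deployment model: component placement plus network links.
nodes = {
    "web-server": ["ui", "session-manager"],
    "app-server": ["order-service", "inventory-service"],
    "db-server":  ["order-db", "inventory-db"],
}

# Communication associations between nodes (network connectivity).
links = [("web-server", "app-server"), ("app-server", "db-server")]

def node_of(component):
    # Return the node a component is deployed on, or None if it is unplaced.
    for node, components in nodes.items():
        if component in components:
            return node
    return None

print(node_of("order-service"))   # app-server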

DESIGN METRICS


There are many sets of metrics for object-oriented software. The Chidamber and Kemerer metrics suite consists of six class-based design metrics (a small sketch computing a few of them follows after the list):
Weighted Methods per Class (WMC)
- This is computed as the summation of the complexity of all methods of a class.

Depth of the Inheritance Tree (DIT)
- It is defined as the maximum length from the root superclass to the lowest subclass.

Number of Children (NOC)
- Children of a class are the immediate subordinates of that class.
- As the number of children increases, reuse increases.
- Of course, as the number of children increases, the effort required to test the children of the parent class also increases.

Coupling Between Object Classes (CBO)
- It is the number of collaborations that a class has with other classes.
- As this number increases, the re-usability of the class decreases.

Response for a class (RFC)
- It is the number of methods that can be executed in response to a message received by an object of the class.
- As this number increases, the effort required to test also increases, because the number of possible test sequences increases.

Lack of Cohesion in Methods (LCOM)
- It is the number of methods that access an attribute within the class.
- If this number is high, the methods are coupled together through that attribute.
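As a rough illustration of the first three metrics, here is a minimal sketch in Python over a toy class hierarchy. It uses deliberately simple proxies, which are assumptions for the example: each method is weighted as complexity 1 for WMC, the method resolution order stands in for the inheritance tree for DIT, and __subclasses__() gives the immediate children for NOC; real metrics tools use finer-grained complexity measures.

import inspect

class Vehicle:                 # root superclass
    def start(self): pass
    def stop(self): pass

class Car(Vehicle):            # immediate child of Vehicle
    def honk(self): pass

class SportsCar(Car):          # lowest subclass
    def boost(self): pass

def wmc(cls):
    # WMC with every method weighted as complexity 1: count methods defined on the class itself.
    return len([name for name, _ in inspect.getmembers(cls, inspect.isfunction)
                if name in cls.__dict__])

def dit(cls):
    # Depth of inheritance tree: number of ancestors, excluding the class itself and object.
    return len(cls.__mro__) - 2

def noc(cls):
    # Number of children: immediate subclasses only.
    return len(cls.__subclasses__())

print(wmc(Vehicle), dit(SportsCar), noc(Vehicle))   # 2 2 1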

