
Friday, August 9, 2019

The Importance of Code Walkthroughs and Reviews in Software Development

In the world of software engineering, the value of structured review processes—like walkthroughs, code reviews, and requirement validations—is a topic that comes up often in academic settings. Students are taught that peer reviews, design validations, and test plan evaluations are essential components of high-quality development. But when real-world project pressures begin to mount, these structured activities are often the first to be cut or minimized.

Why? Often, project managers push to reduce perceived overhead to meet aggressive deadlines. The result is a project that may hit timeline goals but suffers from bugs, misaligned features, or unstable architecture down the line. Let’s dive deeper into various review types and examine why they matter at every stage of software development.


✅ Requirements and Design Review

The earliest review point in any software project occurs during requirements gathering and design planning. Here's why these are critical:

  • Requirements Review: Ensures that functional and non-functional requirements are complete, unambiguous, and agreed upon by all stakeholders. Overlooking this step can lead to costly changes later.

  • Design Review: Allows experienced architects and developers to scrutinize the proposed architecture. Questions like "Is this scalable?", "Does this integrate well with our existing modules?", or "Can this be simplified?" are raised.

Real Impact: In several projects I’ve overseen, design reviews led to architectural simplifications, which made implementation easier and performance stronger.


🧪 Test Plans and Test Cases Review

Testing is your quality gate. But what ensures the quality of the test cases themselves?

  • Test Plan Review: Ensures that testing objectives align with product requirements. Missing out on corner cases or performance scenarios can result in critical defects reaching production.

  • Test Case Review: Detailed test cases should be reviewed by both developers and testers. Developers understand the logic deeply and can point out missing validation steps.

Developer Involvement is Key: Developers might know hidden limitations or design shortcuts, and their involvement helps testers create more realistic scenarios.


๐Ÿ” Code Walkthroughs

A code walkthrough isn’t about blaming—it’s about understanding and improving.

  • Scope: Typically done for complex or high-impact sections of the codebase.

  • Timing: Often scheduled at the end of a sprint or right before major merges.

Benefits:

  • Improves code readability and maintainability.

  • Detects logical errors or performance bottlenecks early.

  • Encourages knowledge sharing between team members.

Case Study: In one situation, a critical module suffered from repeated defects. Post-implementation code walkthroughs revealed poor exception handling and lack of logging, which were then corrected.
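To make that concrete, here is a minimal Python sketch (with hypothetical function names) of the kind of change such a walkthrough tends to prompt: replacing a silently swallowed exception with a logged, re-raised failure.

import logging

logger = logging.getLogger(__name__)

# Hypothetical "before": the failure is swallowed, leaving no trace in the logs.
def load_config_before(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception:
        return None

# Hypothetical "after": the failure is logged with context and surfaced to the caller.
def load_config_after(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        logger.exception("Could not read config file %s", path)
        raise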


๐Ÿž Defect Review

Not every reported defect should be fixed immediately. That’s where a structured defect review process can help.

  • Defect Committee Review: Validates whether the defect is real, reproducible, and impactful. Some reported issues might stem from user misunderstanding or edge cases that don't warrant immediate attention.

Key Benefits:

  • Prevents unnecessary fixes.

  • Helps in prioritizing high-severity issues.

  • Balances developer workload.

Efficiency Tip: Record defect metrics like how many defects were rejected or deferred. This helps refine QA processes.
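As a rough illustration, a team could track these outcomes with something as simple as the Python sketch below; the status values and sample records are hypothetical, not taken from any real tracker.

from collections import Counter

# Hypothetical defect records exported from a tracker.
defects = [
    {"id": 101, "status": "fixed"},
    {"id": 102, "status": "rejected"},   # not reproducible
    {"id": 103, "status": "deferred"},   # low severity, moved to next release
    {"id": 104, "status": "fixed"},
]

# Count and report the share of each outcome.
counts = Counter(d["status"] for d in defects)
total = len(defects)
for status, count in counts.items():
    print(f"{status}: {count} ({100 * count / total:.0f}%)")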


🔧 Defect Fix Review

Sometimes, fixing one bug introduces two more. This is especially true for legacy systems or tightly coupled codebases.

  • Fix Review: Especially critical when touching core modules or integrating new components.

  • Overlap with Walkthroughs: These reviews often double as code walkthroughs for patches.

Why It Matters: A seemingly simple null check might affect validation rules elsewhere. Peer reviews catch these issues before they go live.
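Here is a small, purely hypothetical Python sketch of that trap: a "defensive" truthiness check quietly changes how empty values are treated, which a fix review (or a regression test added during one) would catch.

# Proposed fix: guard against None to stop a crash at one call site...
def normalize_discount_code(code):
    if not code:  # ...but this also treats the empty string as "no code"
        return None
    return code.strip().upper()

# Reviewed fix: only handle the case the patch was meant to cover.
def normalize_discount_code_reviewed(code):
    if code is None:
        return None
    return code.strip().upper()

assert normalize_discount_code("") is None           # behaviour silently changed
assert normalize_discount_code_reviewed("") == ""    # original contract preserved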


📊 Are Reviews Time-Consuming?

Many teams worry about the overhead. But it’s important to compare short-term time cost with long-term stability and reduced defect rates.

  • A one-hour review might prevent days of debugging.

  • Improved code quality leads to better team morale and reduced burnout.

Pro Tip: Use lightweight tools like GitHub PR reviews, automated style checkers, and static analysis tools to enhance the review process without overburdening the team.
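For instance, a small helper script can run a static checker only on the files touched by a change. The sketch below assumes a Python codebase with flake8 installed and a branch named main; adjust both to your own setup.

import subprocess
import sys

# List the files changed on the current branch relative to main.
changed = subprocess.run(
    ["git", "diff", "--name-only", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Lint only the changed Python files before asking a colleague to review them.
py_files = [f for f in changed if f.endswith(".py")]
if py_files:
    result = subprocess.run(["flake8", *py_files])
    sys.exit(result.returncode)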


🚀 Final Thoughts

Reviews may feel like slowdowns in the high-speed world of software releases. But in reality, they serve as powerful guardrails. Incorporating them consistently across your SDLC (Software Development Life Cycle) reduces risk, improves communication, and leads to better software products.

Whether you are a startup racing to launch your MVP or an enterprise handling millions of transactions, structured code walkthroughs and reviews can be the difference between success and disaster.

Don't skip them. Plan for them. Respect them.


📚 Further Learning and References

📘 Amazon Books on Software Reviews

🎥 YouTube Videos Explaining the Concept


  • Code review best practices
  • Code Review Tips (How I Review Code as a Staff Software Engineer)
  • Code Review, Walkthrough and Code Inspection

Friday, March 15, 2013

Developing a list of features for current and future versions of the product - Part 5

These are a series of posts on the process of generating a list of features for a software product; such a list can be aggregated, built upon, and prioritized for the current and future versions of the product. This matters because it ensures you end up with the features that will make the product successful. Before you can do that, however, you need to review all the ways a feature list can be generated, including input from customers, from reviewers, and from other stakeholders. In the previous post (Developing a list of features for current and future versions of the product - Part 4), I talked about the help pages for the product on the company's site, and how analysis of customer visits to those pages can help determine which features are popular and which cause customers the most problems. In this post, I will add more detail on capturing feature requirements from different sources.
A reviewer who evaluates many software products that do the same kind of work typically has a very good idea of which features are the most exciting, which are the most sought after, and which get good reviews. For example, when a new version of a printing application is released, the organization would approach technical experts at media outlets such as CNET, the New York Times technology section, and many others, hoping for a good review. Now, this process is not as easy as picking up the phone and requesting one. The people who write these technical columns are experts and fairly busy, and they need to be approached seriously: given the product, time to play with it, and a list of the top features in the current release.
When such a discussion finally takes place, the reviewer will typically spend time with someone from the organization (usually the Product Manager of the specific product), and it is during this conversation and show-and-tell that the Product Manager can gather a fair number of comments. These comments give the Product Manager a list of items the reviewer feels are done well, are sellable, and excite users. At the same time, the reviewer will point out features that are missing something, features that feel incomplete or whose workflows are not optimized for the intended users of the application; and, most important, the reviewer will also have a list of features that are important to customers but, for whatever reason, are not present in the product. It could be that such a feature was discussed but dropped from the release for some reason.
With such a list, some study will reveal big new features, major modifications to existing features, and small improvements at the feature level. These need to be balanced against features sourced from other stakeholders, and eventually a prioritized list can be generated, which can then be slotted into the current version of the product or the next one.

Read more about this in the next post (Developing a list of features for current and future versions - Part 6).


Thursday, November 24, 2011

What are differences between verification and validation?

Verification and validation together can be defined as a process of reviewing, inspecting, and testing software artifacts to determine whether the software system meets the expected standards.

Though verification and validation processes are frequently grouped together, there are plenty of differences between them:

- Verification is a quality control process used to determine whether the software system meets the expected standards. It can be done during the development phase or during the production phase. In contrast, validation is a quality assurance process: it gives assurance that the software artifact or system actually accomplishes what it is intended to do.

- Verification is an internal process whereas validation is an external process.

- Verification refers to the correctness of the implementation of the specifications by the software system or application, while validation refers to whether the system meets the needs of the users.

- The verification process consists of installation qualification, operational qualification, and performance qualification, whereas validation is categorized into prospective validation, retrospective validation, full scale validation, partial scale validation, cross validation, and concurrent validation.


- Verification ensures that the software system implements all the specified functionality, whereas validation ensures that the functionality exhibits the intended behavior.

- Verification takes place first, followed by validation. Verification checks documentation, code, plans, specifications, and requirements, while validation checks the whole product.

- Inputs for verification include issue lists, checklists, inspection meetings, and reviews. The input for validation is the software artifact itself.

- Verification is done by the developers of the software product, whereas validation is done by the testers against the requirements.

- Verification is a static kind of checking, in which the artifacts of a software system are examined for correctness using techniques like walkthroughs, reviews, and inspections. In contrast, validation is a dynamic kind of testing, in which the software application is checked by actually executing it.

- Reviews mostly form part of the verification process, whereas audits are a major part of the validation process.

Verification, Validation, and Testing of Engineered Systems
Fundamentals of Verification and Validation

Verification and Validation in Computational Science and Engineering


Wednesday, November 23, 2011

What are different methods of verification and validation?

Verification and validation together can be defined as a process of reviewing, inspecting, and testing software artifacts to determine whether the software system meets the expected standards. There are various methods for verifying different kinds of data in software applications. The different methods are discussed below:

- File verification
It is used to check the integrity and correctness of a file and to detect errors in it, typically by comparing a checksum or hash against a known good value (see the sketch after this list).
- CAPTCHA
This is a challenge-response test used to verify that the user of a website is a human being and not an automated program trying to compromise the security of the system.
- Speech verification
This kind of verification is used to check that spoken statements and sentences are correct, i.e., that they match what was expected.
- The VERIFY command in DOS, which checks that data has been written to disk correctly.
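As a minimal sketch of file verification, the Python snippet below recomputes a file's SHA-256 digest and compares it with a previously recorded value; the file name and expected hash are placeholders.

import hashlib

def file_matches_checksum(path, expected_sha256):
    # Hash the file in chunks so large files do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Placeholder usage; supply the hash recorded when the file was known to be good.
# file_matches_checksum("installer.zip", "<known-good sha256 hex digest>")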

Apart from the verification techniques for data in software applications, there are several other techniques used for verification during development. They are discussed below:

- Intelligent verification
This type of verification automatically adapts the test bench to changes in the RTL design.
- Formal verification
It is used to verify the correctness of a program's algorithms using mathematical techniques.
- Run-time verification
Run-time verification is carried out during execution. It is done to determine whether the program executes properly and within the specified time (see the sketch after this list).
- Software verification
This is the umbrella term for the various methodologies used to verify the software itself, such as reviews, inspections, and testing.
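To illustrate run-time verification, here is a small Python sketch of a monitor that checks a timing property while the program actually executes; the time budget and the monitored function are made up for the example.

import functools
import time

def within_budget(seconds):
    """Fail loudly if the wrapped function exceeds its time budget."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed = time.monotonic() - start
            if elapsed > seconds:
                raise RuntimeError(f"{fn.__name__} took {elapsed:.3f}s, budget is {seconds}s")
            return result
        return wrapper
    return decorator

@within_budget(0.5)
def process_batch(items):
    return [item * 2 for item in items]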

There are several other techniques used for verification in circuit development:
- Functional verification
- Physical verification
- Analog verification

Verification, Validation, and Testing of Engineered Systems
Fundamentals of Verification and Validation

Verification and Validation in Computational Science and Engineering


Wednesday, November 2, 2011

How do you track progress for an object oriented project?

For an object oriented project, tracking becomes really difficult, and establishing meaningful milestones is also a difficult task because there are many things happening at once. For tracking an object oriented project, the following milestones are considered complete when the criteria mentioned below are met:

Milestone for Object Oriented Analysis is considered completed when the following conditions are satisfied:
- Every class is defined and reviewed.
- Every class hierarchy is defined and reviewed.
- Class attributes are defined and reviewed.
- Class operations are defined and reviewed.
- Classes that are reused are noted.
- The relationships among classes are defined and reviewed.
- Behavioral model is created and reviewed.

Milestone for Object Oriented Design is considered completed when the following conditions are satisfied:
- Subsystems are defined and reviewed.
- Classes are allocated to subsystems.
- The allocated classes are reviewed.
- Tasks are allocated.
- The allocated tasks are reviewed.
- Design classes are created.
- These design classes are reviewed.
- Responsibilities are identified.
- Collaborations are identified.

Milestone for Object Oriented Programming is considered completed when the following conditions are satisfied:
- Classes from the design model are implemented in code.
- Extracted classes are implemented.
- A prototype is built.

Milestone for Object Oriented Testing is considered completed when the following conditions are satisfied:
Debugging and testing occur in concert with one another. The status of debugging is often assessed by considering the type and number of bugs.
- The correctness of object oriented analysis and design model is reviewed.
- The completeness of object oriented analysis and design model is reviewed.
- Collaboration between class and responsibility is developed and reviewed.
- The test cases designed for each class are executed.
- Class level tests are conducted for each class.
- Cluster testing is completed and classes are integrated.
- Tests related to system testing are established and completed.

Each of these milestones is revisited, as the object oriented process model is iterative in nature.
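To make the "class level tests" milestone a little more concrete, here is a minimal Python sketch using unittest; the Account class and its rules are hypothetical.

import unittest

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class AccountTest(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account()
        account.deposit(50)
        self.assertEqual(account.balance, 50)

    def test_deposit_rejects_non_positive_amounts(self):
        with self.assertRaises(ValueError):
            Account().deposit(0)

if __name__ == "__main__":
    unittest.main()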


Tuesday, December 28, 2010

What are different types of general metrics? Continued...

- Design To Requirements Traceability
This metric provides the analysis of the number of design elements matching requirements versus the number of design elements not matching requirements. It is calculated at stage completion, from the software requirements specification and the detail design.

Formula:
Number of design elements.
Number of design elements matching requirements.
Number of design elements not matching requirements.

- Requirements to Test case Traceability
This metric provides the analysis on the number of requirements tested vs the number of requirements not tested. It is calculated at stage completion. It is calculated from software requirements specification, detail design and test case specification.

Formula:
Number of requirements.
Number of requirements tested.
Number of requirements not tested.

- Test cases to Requirements Traceability
This metric provides the analysis on the number of test cases matching requirements vs the number of test cases not matching requirements. It is calculated at stage completion. It is calculated from software requirements specification and test case specification.

Formula:
Number of requirements.
Number of test cases with matching requirements.
Number of test cases not matching requirements.

- Number of defects in coding found during testing by severity
This metric provides the analysis on the number of defects by the severity. It is calculated at stage completion. It is calculated from bug report.

Formula:
Number of defects.
Number of defects of low severity.
Number of defects of medium severity.
Number of defects of high severity.

- Defects - state of origin, detection, removal
This metric provides the analysis on the number of defects by the stage of origin, detection and removal. It is calculated at stage completion. It is calculated from bug report.

Formula:
Number of defects.
Stage of origin.
Stage of detection.
Stage of removal.

- Defect Density
This metric provides the analysis of the number of defects relative to the size of the work product. It is calculated at stage completion, from the defects list and the bug report.
Formula:
Defect Density = [Total number of defects / Size (FP or KLOC)] * 100
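As a quick worked example of the defect density formula above (with made-up numbers):

def defect_density(total_defects, size_in_fp_or_kloc):
    # Defects per 100 units of size, matching the formula above.
    return (total_defects / size_in_fp_or_kloc) * 100

print(defect_density(total_defects=42, size_in_fp_or_kloc=350))  # 12.0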


Monday, December 27, 2010

What are different types of general metrics? Continued...

- Review Effectiveness
This metric indicates the effectiveness of the review process. It is calculated at the completion of reviews or at the completion of the testing stage, from the peer review report, the peer review defect list, and the bugs reported by testing.
Formula:
Review Effectiveness = [(Number of defects found by reviews) / (Number of defects found by reviews + Number of defects found by testing)] * 100
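A quick worked example of the review effectiveness calculation, with made-up counts:

def review_effectiveness(defects_found_by_reviews, defects_found_by_testing):
    total = defects_found_by_reviews + defects_found_by_testing
    return (defects_found_by_reviews / total) * 100

print(review_effectiveness(defects_found_by_reviews=30, defects_found_by_testing=20))  # 60.0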

- Total number of defects found by reviews
This metric indicates the total number of defects identified by the review process. The defects are further categorized as high, medium, or low. It is calculated at the completion of reviews, from the peer review report and the peer review defect list.

Formula: Total number of defects identified by reviews in the project.

- Defect vs Review Effort - Review Yield
This metric indicates the effort expended on reviews in each stage relative to the defects found. It is calculated at the completion of reviews, from the peer review report and the peer review defect list.

Formula : Defects/Review Effort

- Requirements Stability Index (RSI)
This metric gives the stability factor of the requirements over a period of time, after the requirements have been mutually agreed upon and baselined between the company and the client. It is calculated at stage completion and project completion, from change requests and the software requirements specification.

Formula:
RSI = 100 * [(Number of baselined requirements) - (Number of changes in requirements after baselining)] / (Number of baselined requirements)
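A quick worked example of RSI, with made-up counts of baselined and changed requirements:

def requirements_stability_index(baselined, changed_after_baseline):
    return 100 * (baselined - changed_after_baseline) / baselined

print(requirements_stability_index(baselined=120, changed_after_baseline=18))  # 85.0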

- Change Requests by State
This metric provides an analysis of the state of change requests. It is calculated at stage completion, from change requests and the software requirements specification.
Formula:
Number of accepted requirements, Number of rejected requirements, Number of postponed requirements.

- Requirements to Design Traceability
This metric provides the analysis on the number of requirements designed to the number of requirements that were not designed. It is calculated at stage completion. It is calculated from software requirement specification and detail design.
Formula:
Total number of requirements, Number of requirements designed, Number of requirements not designed.


Sunday, December 26, 2010

What are different types of general metrics? Continued...

- Overall Test Effectiveness
This metric indicates the effectiveness of the testing process in identifying defects for a given project during the testing stage. It is calculated monthly and after build completion or project completion, from test reports and customer-identified defects.

Overall Test Effectiveness (OTE) = [(Number of defects found during testing) / (Number of defects found during testing + Number of defects found post delivery)] * 100

- Effort Variance (EV)
This metric gives the variation of actual effort versus the estimated effort. It is calculated for each project stage, at stage completion as identified in the SPP, from the estimation sheets (estimated person hours for each activity within the stage) and the actual worked person hours.

EV = [(Actual person hours - Estimated person hours)/Estimated person hours] * 100

- Cost Variance (CV)
This metric gives the variation of actual cost vs the estimated cost. This is calculated for each project stage. It is calculated at stage completion. It is calculated from estimation sheets for estimated values in dollars or rupees for each activity within that stage and the actual cost incurred.

CV = [(Actual Cost-Estimated Cost)/Estimated Cost] * 100

- Size Variance
This metric gives the variation of actual size vs the estimated size. This is calculated for each project stage. It is calculated at stage and project completion. It is calculated from estimation sheets for estimated values in function points or KLOC and from actual size.

Size Variance = [(Actual Size-Estimated Size)/Estimated Size] * 100
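Effort, cost, and size variance all share the same (actual - estimated) / estimated shape, so one small helper covers all three; the sample figures below are made up.

def variance_pct(actual, estimated):
    return (actual - estimated) / estimated * 100

print(variance_pct(actual=560, estimated=500))      # effort variance: 12.0
print(variance_pct(actual=9500, estimated=10000))   # cost variance: -5.0
print(variance_pct(actual=48, estimated=45))        # size variance: ~6.7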

- Productivity on Review Preparation - Technical
This metric indicates the effort spent on preparation for reviews. It is calculated separately for each language used in the project, monthly or after build completion, from the peer review report.

For every language used, calculate:
(KLOC or FP) / hour (per language), where the language is C, C++, Java, XML, etc.

- Number of defects found per review meeting
This metric indicates the number of defects found during review meetings across the various stages of the project. It is calculated monthly or after the completion of reviews, from the peer review report and the peer review defect list.

Formula: Number of defects / Review meeting

- Review Team Efficiency (Review Team Size vs Defects Trend)
This metric indicates the review team size and the defects trend, which helps determine the efficiency of the review team. It is calculated monthly and at the completion of reviews, from the peer review report and the peer review defect list.

Formula: Review team size relative to the defects trend.


Thursday, February 19, 2009

Joke about reviews

If you are in the software business, you must have heard about reviews. Reviews are an integral part of the software development process; it is said that they uncover many mistakes and problems and add a lot of value. Reviews can happen for a variety of documents and artifacts (test plans, test cases, high-level design, low-level design, architecture, project plan, project proposals, code; the list of things that need review is as long as the list of documents in a software cycle). In a future post, I will cover reviews in more detail, but for now, here is an old joke that describes in very painful terms why reviews are necessary.

Why we need reviews.

In an ancient monastery in a faraway place, a new monk arrived to join his brothers in copying books and scrolls in the monastery's scriptorium. He was assigned to work as a rubricator on copies of books that had already been copied by hand.

One day, while working on the monks' Book of Vows, he asked old Father Florian, the Armarius of the Scriptorium, 'Does not the copying by hand of other copies allow for chances of error? How do we know we are not copying the mistakes of someone else? Are they ever checked against the original?'

Fr. Florian was taken aback a bit by the obvious logical observation of this youthful monk. 'A very good point, my son. I will take one of the latest copies of the Book of Vows down to the vault and compare it against the original.' Fr. Florian went down to the secured vault and began his verification.

A day passed and the monks began to worry and went down looking for the old priest. They were sure something must have happened. As they approached the vault they heard sobbing and wailing... they opened the door and found Fr. Florian crying over the new copy and the original, ancient Book of Vows, both opened before him on the table. It was obvious to all that the poor man had been crying his old heart out for a long time.

'What is the problem, Reverend Father???' asked one of the monks.

'Oh, my Lord,' sobbed the priest, 'The word is 'CELEBRATE'!!!' (not celibate)

And this is why we need reviews.

