

Wednesday, December 4, 2019

Mastering Risk Management in Project Leadership: A Practical Guide for Project Managers

In any comprehensive course on project management, one theme repeatedly emerges as central to project success: effective risk management. It's not simply a best practice—it is a core discipline that every competent project or program manager must master. Many seasoned professionals even argue that once a project is underway, risk management becomes the most critical and continuous area of focus.

Despite its importance, risk management often gets sidelined in the hustle of project execution. A large part of this is due to its subjective nature—risk isn’t always visible or easily quantifiable. However, subjective does not mean intangible. With the right processes and mindset, project managers can consistently identify, assess, and mitigate risks in a structured way.

Based on my own experience leading and mentoring project teams, I believe that there are two fundamental pillars of effective risk management:

1. Recognizing Common, Known Risk Areas

Every organization that operates at a mature level has a set of known risk factors that tend to repeat across different projects. These risks are often related to:

  • Schedule delays

  • Team attrition or sudden personnel transfers

  • Feature creep or uncontrolled scope changes

  • Budget constraints

  • Vendor reliability

These types of risks are considered "known knowns"—they're the usual suspects. A proactive project manager should have access to historical data or a shared risk register that documents past risks, their impact, and how they were mitigated.

A best practice here is to regularly review and update this organizational risk repository. This enables the team to stay ahead of predictable problems. For instance, if historical data shows a 20% increase in scope-related delays during Q4 due to end-of-year product push, your project schedule should already account for this.

Project managers must periodically assess these known risk areas throughout the lifecycle of the project. Risk logs should be living documents, not static checklists filed away after kickoff. If a known risk manifests because it was ignored or underestimated, the responsibility lies squarely with the project manager.

However, it is not uncommon for even experienced professionals to get caught up in daily operations, firefighting deliverables, and managing stakeholders. In doing so, they lose the mental bandwidth required to continuously review and assess known risk factors.

Avoiding this pitfall means embedding risk review into your routine processes. This could be as simple as adding a five-minute discussion point in weekly status meetings or setting aside 30 minutes each week to review the risk log and evaluate current triggers.

2. Navigating the Unknown: Identifying Emerging Risks

The second category of risk is much harder to pin down: the unknowns. These are risks that aren’t documented in any database. They haven’t occurred before, or they manifest in new, unpredictable ways. But make no mistake—they're just as real.

Consider a real-world example: your competitor suddenly launches a disruptive update to their product, forcing your team to recalibrate features that were in development. This, in turn, impacts timelines, resource allocations, internal communications, and possibly even the entire release strategy.

While you can’t predict every market move, you can put systems in place to surface emerging risks early. This involves:

  • Regular sync-ups with cross-functional leads and product managers

  • Encouraging a culture of transparency and early escalation

  • Tracking subtle signals from the field, such as customer support feedback, developer bottlenecks, or sales sentiment shifts

  • Reviewing change requests not just for technical feasibility but for strategic alignment

The key here is visibility. You can only mitigate what you can see, and the earlier the better. Every change request, every team concern, and every product pivot should be reviewed with a "what could go wrong?" lens.

To manage emerging risks effectively, project managers should use a hybrid approach combining traditional tools like a RAID log (Risks, Assumptions, Issues, and Dependencies) with more adaptive practices like lightweight agile retrospectives and real-time issue tracking platforms.
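As an illustration of how lightweight such a log can be, here is a minimal sketch in Python; the field names and the weekly-review helper are illustrative assumptions rather than a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RaidItem:
    """One entry in a RAID log: a Risk, Assumption, Issue, or Dependency."""
    kind: str                     # "risk" | "assumption" | "issue" | "dependency"
    description: str
    owner: str
    status: str = "open"          # open | mitigated | closed
    triggers: list = field(default_factory=list)   # early indicators to watch
    raised_on: date = field(default_factory=date.today)

def open_items(log, kind=None):
    """Return open entries, optionally filtered by kind, for a weekly review."""
    return [i for i in log if i.status == "open" and (kind is None or i.kind == kind)]

# Example usage in a weekly review
log = [
    RaidItem("risk", "Competitor release may force feature recalibration",
             owner="PM", triggers=["competitor press release", "sales escalations"]),
    RaidItem("dependency", "Vendor SDK update expected mid-quarter", owner="Tech Lead"),
]
for item in open_items(log, kind="risk"):
    print(item.description, "->", ", ".join(item.triggers))
```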

Building a Culture of Risk Ownership

Project risk management should never be a one-person responsibility. An effective project manager builds a risk-aware culture across the team. This means:

  • Encouraging team members to report potential risks without fear

  • Rewarding early detection of issues, even if they don’t materialize

  • Assigning clear ownership of risk items

  • Embedding risk impact discussions into change request reviews

By normalizing risk conversations, you reduce the stigma around raising concerns. This ensures that your team becomes an early warning system rather than a passive set of executors.

Integrating Risk Management into Daily Practice

Effective risk management doesn’t happen in isolation. It must be integrated into everyday project management activities. Here are a few best practices:

  • Risk Workshops: Conduct short risk brainstorming sessions at the start of each major phase.

  • Risk Review Cadence: Build a rhythm of reviewing the risk register weekly or biweekly.

  • Trigger-Based Tracking: Define what "early indicators" might suggest a risk is developing.

  • Risk Scoring: Use a simple matrix to score risks based on probability and impact (see the sketch after this list).

  • Scenario Planning: Consider “what-if” exercises to prepare the team for critical disruptions.
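Here is the risk-scoring sketch referred to above: a simple probability-times-impact calculation in Python. The 1-to-5 scales and the banding thresholds are illustrative assumptions, not a standard.

```python
# Minimal probability x impact scoring, assuming 1-5 scales (illustrative only).
def risk_score(probability: int, impact: int) -> int:
    """Return a 1-25 score from 1-5 probability and 1-5 impact ratings."""
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    return probability * impact

def risk_band(score: int) -> str:
    """Bucket a score into a review band; the thresholds are an assumed convention."""
    if score >= 15:
        return "high - mitigate now, escalate to sponsor"
    if score >= 8:
        return "medium - assign owner, track triggers weekly"
    return "low - monitor in the risk register"

risks = {
    "Q4 scope creep": (4, 4),
    "Key developer attrition": (2, 5),
    "Vendor delay": (3, 2),
}
for name, (p, i) in sorted(risks.items(), key=lambda kv: -risk_score(*kv[1])):
    print(f"{name}: score {risk_score(p, i)} -> {risk_band(risk_score(p, i))}")
```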

Over time, these habits not only reduce the number of surprises but also equip your team to respond more calmly and effectively when things do go sideways.

Measuring Risk Management Success

One of the challenges in risk management is measuring its effectiveness. Unlike deliverables or velocity, risk mitigation doesn’t always have immediate, visible results. Still, you can track:

  • Number of risks logged and actively monitored

  • Percentage of risks mitigated before impact

  • Stakeholder satisfaction during crisis periods

  • Response time to emerging issues

You can also gather qualitative feedback post-project to evaluate how prepared the team felt and whether contingency plans were effective.
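If the risk log is kept as structured data, most of these measures fall out of a few lines of code. The sketch below computes a couple of them; the record fields and statuses are assumed for illustration.

```python
# Illustrative metrics over an assumed risk-log structure.
risk_log = [
    {"id": "R1", "status": "mitigated_before_impact", "days_to_respond": 3},
    {"id": "R2", "status": "materialized",            "days_to_respond": 9},
    {"id": "R3", "status": "mitigated_before_impact", "days_to_respond": 2},
    {"id": "R4", "status": "monitoring",              "days_to_respond": None},
]

closed = [r for r in risk_log if r["status"] in ("mitigated_before_impact", "materialized")]
mitigated = [r for r in closed if r["status"] == "mitigated_before_impact"]
response_times = [r["days_to_respond"] for r in risk_log if r["days_to_respond"] is not None]

print(f"Risks logged and monitored: {len(risk_log)}")
print(f"Mitigated before impact: {100 * len(mitigated) / len(closed):.0f}% of closed risks")
print(f"Average response time: {sum(response_times) / len(response_times):.1f} days")
```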

Common Pitfalls to Avoid

  1. Treating Risk Management as a Phase: Risk isn’t just for kickoff. It’s a continuous, adaptive process.

  2. Ignoring Soft Signals: Risks often start as subtle concerns before becoming showstoppers.

  3. Overengineering the Process: Keep tools and logs simple. Focus on actionability, not bureaucracy.

  4. Shifting Responsibility: Everyone owns risk, but the project manager is accountable for visibility and response.

  5. Not Updating the Plan: A risk register is a live document. If your plan never changes, you're likely missing real-time shifts.

Final Thoughts: Risk Is Inevitable, Unpreparedness Is Not

Every project, regardless of size or complexity, will encounter risks. The difference between successful and failed initiatives often lies in how well those risks are understood, communicated, and managed.

Project managers must resist the temptation to view risk management as optional or peripheral. It is, in fact, one of the most strategic capabilities you can develop as a leader. Done well, it not only protects timelines and budgets—it builds trust, boosts team morale, and enhances your reputation as a calm, reliable, and forward-thinking project professional.

So, the next time you lead a project, remember: risk isn’t the enemy. It’s a signpost. And how you respond to it will determine not just the outcome of your current initiative but the trajectory of your career.

You may not be able to follow everything listed above :-), but you should still evaluate what works best for you. And if you are doing something else that works well, please share it in the comments below.




Tuesday, August 27, 2019

Tackling Feature Creep: Lessons in Effective Product Management and Project Delivery

When managing software projects, success doesn’t depend solely on having skilled individuals in key roles. It also hinges on how teams navigate scope, requirements, and real-time adjustments. Our experience with one such project showed us just how important structured planning and boundary-setting are—particularly when it comes to managing scope expansion, also known as feature creep.

We had a great Product Manager for one of our flagship software initiatives. She was highly knowledgeable, had strong working relationships with the product support team, and direct lines of communication with several of our enterprise clients. Her ability to gather customer feedback and translate it into actionable requirements made her an invaluable part of the project.

The design team appreciated how she worked with them to evolve high-level ideas into detailed specifications, facilitating the creation of high-level design documents (HLDs) that were both comprehensive and realistic. Moreover, she remained actively involved throughout the design and development phases, consistently available for clarifications, reviews, and feedback. Her dedication earned the trust of everyone on the team.

Yet, despite all these strengths, we continually ran into a frustrating issue: teams consistently found themselves rushing to meet final feature deadlines. On multiple occasions, developers worked weekends and late nights in a last-minute sprint. Remarkably, we never missed our deadlines by more than a day—and we always met the quality benchmarks. But the strain on the team was undeniable.

During project retrospectives, team members flagged this pattern, asking why it kept recurring and why we couldn't better plan for it. They pointed out that while commitment and hard work were admirable, this recurring last-minute push was unsustainable. Something needed to change.


Identifying the Root Cause of Project Pressure

To get to the bottom of the issue, we launched a structured investigation. There was always a chance that we had flawed time or effort estimation processes. Alternatively, some feared that certain developers might not have been contributing their fair share.

Two of our most experienced leads were tasked with reviewing the project documentation, HLDs, effort tracking sheets, and defect metrics. Their goal: identify where and why our estimations consistently fell short.

What we found was surprising—but also enlightening. Time spent on core tasks—requirement preparation, coding, testing, and documentation—was generally in line with projections. In a few instances, certain segments had a 20% overrun, but there was no clear pattern linked to specific individuals or phases.

The real issue? Feature creep.


Understanding Feature Creep in Project Environments

In project management, feature creep refers to the uncontrolled expansion of product features beyond the original scope. It usually happens incrementally—one small change here, one improvement there—until the cumulative impact becomes significant.

In our case, this occurred subtly. As HLDs were being prepared and development moved forward, suggested enhancements came in—some from the development team itself, and many from the Product Manager. These were almost always well-intentioned. They improved the product, addressed edge cases, or reflected late-stage customer feedback.

Because these changes seemed “minor” and “beneficial,” there was a tendency to implement them without formal impact analysis or schedule adjustment. No one wanted to push back. After all, we were building something better for the customer.

But over time, these small changes added up. They chipped away at buffers, consumed developer focus, and led to crunches near the end of each development cycle.


Changing the Process: Structuring Scope Management

Once we identified feature creep as a recurring issue, we knew we had to act. Continually burning out the team wasn’t an option. We needed to instill a discipline around how post-freeze changes were handled.

Our solution was simple but effective: after the design freeze, any new requirement—regardless of size—would be classified as a “feature enhancement.” These enhancements were treated like change requests or defects and entered into a formal review and approval process.

We set up a Feature Enhancement Review Board composed of tech leads, QA, and product representatives. They met weekly to review all proposed enhancements. Only after careful evaluation of the effort, risk, and impact on schedule would a change be approved.
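To make the mechanics concrete, here is a small sketch of how post-freeze enhancements could be recorded and triaged ahead of the board meeting; the fields and the approval rule are illustrative assumptions, not the exact process we used.

```python
from dataclasses import dataclass

@dataclass
class Enhancement:
    """A post-design-freeze change request waiting for the review board."""
    title: str
    requested_by: str            # e.g. "Product Manager", "Support", "Dev team"
    effort_days: float
    customer_value: int          # 1 (nice to have) .. 5 (contractual/critical)
    schedule_buffer_days: float  # remaining buffer in the current cycle

def board_recommendation(e: Enhancement) -> str:
    """Illustrative triage rule: approve only if value is high and buffer absorbs the effort."""
    if e.effort_days > e.schedule_buffer_days:
        return "defer to next release (no buffer left)"
    if e.customer_value >= 4:
        return "approve for this release"
    return "defer and reassess next cycle"

queue = [
    Enhancement("Extra export format", "Product Manager", 4.0, 3, 2.5),
    Enhancement("Fix edge case reported by key account", "Support", 1.5, 5, 2.5),
]
for e in queue:
    print(f"{e.title}: {board_recommendation(e)}")
```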


Outcomes of the New Approach

This change immediately brought several benefits:

  1. Clarity and Visibility: Everyone could now see what was being added post-freeze and why.

  2. Better Decision-Making: We were able to weigh the customer benefit of a change against its impact on delivery timelines.

  3. Improved Accountability: Product suggestions weren’t automatically implemented; they were scrutinized just like technical defects.

  4. Informed Resource Planning: Teams could plan capacity with fewer surprises.

Perhaps most importantly, this new framework ensured that the final sprint before release wasn’t a chaotic, high-stress period. Developers could plan their time more predictably, and team morale improved as they regained a sense of control over their workloads.


The Role of the Product Manager: Balancing Value and Discipline

This experience also reshaped how we viewed the role of our Product Manager. Her instincts were always customer-first and value-driven—but even the best intentions can have unintended consequences.

By including her in the Feature Enhancement Review Board, we preserved her vital input while also encouraging a more strategic approach. Instead of recommending enhancements during active development, she began to note them for future releases unless the business case was strong enough to warrant immediate inclusion.

This helped her maintain her customer advocacy while contributing to better team performance and smoother deliveries.


Lessons for Project and Product Leaders

Every project faces the temptation to “just add one more thing.” But without guardrails, those additions become silent killers of time, focus, and quality. Our experience taught us:

  • Feature creep is often a process problem, not a people problem.

  • Good documentation and post-mortems are key to surfacing hidden patterns.

  • Formalizing how changes are proposed and reviewed encourages better planning.

  • Empowering the product team with structure—not restrictions—leads to stronger results.

Ultimately, the discipline of saying “not now” is just as important as the innovation of saying “what if?”


Conclusion: Managing Growth Without Losing Control

Software development is a dynamic process. Customer needs evolve, ideas improve, and developers discover better ways to build. But growth must be managed.

Feature creep may not always be obvious. It can masquerade as helpful suggestions, customer-centric improvements, or low-effort tweaks. But if not managed carefully, it erodes deadlines, impacts quality, and drains team energy.

Through formal tracking, cross-functional review, and a shared understanding of priorities, we transformed a recurring delivery issue into a point of strength. Our teams now deliver with greater confidence, and our products still evolve—with intention, not chaos.


Tuesday, August 20, 2019

Don't Hard-Code URLs in Software or Documentation: Use Smart Redirects Instead

Introduction

At first glance, a broken link may not seem like a major issue. But as we discovered firsthand, something as small as a non-functioning URL can highlight a deeper flaw in your development and documentation process. In the early versions of our software, we included direct, hard-coded URLs to external resources in our documentation and help pages. It seemed like a harmless shortcut—until we encountered a real-world consequence that made us completely rethink our approach.

The Problem Begins: A 404 That Uncovered a Systemic Flaw

A year after release, a customer reported a minor defect. One of the URLs in a help page was returning a 404 error. On the surface, this was a low-priority issue. But when we began reviewing it, we quickly saw that it was just the tip of the iceberg. That broken link pointed to an external help page for a third-party component we were using, and the organization behind that component had updated their site structure.

The result? The hard-coded URL we had embedded no longer worked.

This wasn't an isolated case—it exposed a critical weakness in our software design and documentation process. Our system relied on URLs that could change at any time, and we had no way to update them post-release.

Why Hard-Coding URLs Is a Bad Idea

While it might seem convenient to insert URLs directly into your software, documentation, or help files, doing so creates long-term maintenance and reliability issues. Here are just a few scenarios where hard-coded URLs can cause trouble:

1. External Websites Can Change

As with our initial issue, the structure of external websites is out of your control. If you're linking to third-party documentation or tools, there’s no guarantee those pages will remain at the same location. A restructuring, rebranding, or migration can instantly break all your references.

2. Internal Systems Evolve

Even internally, hard-coded links can be fragile. We once updated our internal Help System by moving to a new content management platform. This change altered our entire URL scheme. All previously working links were rendered useless, and fixing them manually would have required hours of work.

3. Page and Content Changes

Sometimes it’s more efficient to update where a link points rather than rewrite and republish several help pages. But when URLs are embedded directly in software or documentation, updating them becomes complex and error-prone.

4. Localization and Version Control Challenges

If you localize your documentation or maintain multiple versions of your product, hard-coded URLs complicate maintenance. Each version may have different content or links, leading to errors, inconsistencies, and duplicate effort.

The Better Solution: URL Redirection

To address this issue, we adopted a more robust strategy: use redirect URLs instead of hard-coded URLs. A redirect URL acts like a middle layer. Instead of pointing directly to the final destination, you point to a redirect link hosted on your own internal server. That redirect, in turn, forwards the user to the correct destination.

This approach gives you the flexibility to change the final target anytime, without needing to modify the software or re-release documentation.
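A minimal sketch of such a redirect layer, using only the Python standard library; the aliases and destinations are placeholders, and in practice you would more likely use your web server's rewrite rules or a small internal service.

```python
# Minimal redirect service sketch (standard library only; illustrative paths).
from http.server import BaseHTTPRequestHandler, HTTPServer

# The redirect map lives in one editable place, decoupled from shipped software and docs.
REDIRECTS = {
    "/redirects/video_help": "https://example.com/help/current/video",
    "/redirects/component_docs_v2": "https://example.com/third-party/docs/v2",
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target:
            self.send_response(302)            # temporary redirect; easy to retarget later
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "Unknown redirect alias")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```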

Benefits of Using Redirect URLs

Implementing redirect URLs offers several advantages:

  • Flexibility: You can update the destination at any time without touching the software.

  • Centralized Control: All links can be tracked and managed from one place.

  • Reduced Defects: Fixing broken links no longer requires product patches.

  • Version Independence: You can change targets based on product versions or locales.

  • Long-Term Reliability: Even if external content moves, you remain in control of redirection.

Best Practices for Redirect Management

Using redirects effectively requires a structured approach. Here's what worked for us:

1. Create a Redirect Map

Maintain a detailed file that records every redirect URL, its usage, and the current destination. For each entry, include:

  • Redirect URL

  • Final destination

  • Usage context (help file, tooltip, etc.)

  • Requestor or owner

  • Date created or last modified

  • Comments or purpose notes

This file should be version-controlled in your source code management system, just like your software code.
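As a sketch, the map can be a simple version-controlled data file whose records carry the fields listed above; the exact field names and values here are illustrative.

```python
# Illustrative redirect-map records mirroring the fields above; kept under version control.
redirect_map = [
    {
        "redirect_url": "/redirects/component_docs_v2",
        "destination": "https://example.com/third-party/docs/v2",
        "usage_context": "Help page 'Working with imported media', tooltip link",
        "owner": "Docs team",
        "last_modified": "2019-08-20",
        "comments": "Vendor restructured their site; watch for further moves.",
    },
]

# A tiny sanity check that could run before the map is published.
aliases = [r["redirect_url"] for r in redirect_map]
assert len(aliases) == len(set(aliases)), "Duplicate redirect aliases in the map"
```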

2. Implement Change Tracking

Whenever a change is made to a redirect, log the change via a formal process—ideally as a tracked defect or feature request. This creates an audit trail, which helps during troubleshooting or reviews.

3. Host Redirects Internally

Use your internal web server or infrastructure for managing redirects. Avoid relying on external services for redirection unless you control them.

4. Use Meaningful Redirect Aliases

Instead of using random strings, use human-readable aliases for redirect URLs. This makes them easier to understand and manage. For example:

  • /redirects/video_help instead of /redirects/abc123

  • /redirects/component_docs_v2 instead of /redirects/xyz456

5. Test Regularly

Set up automated or scheduled testing to validate that all redirects are still functioning and pointing to valid destinations.
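One way to automate this check, sketched with the Python standard library; the redirect host and aliases are assumptions, and the script simply confirms that each alias resolves to a live destination.

```python
# Scheduled redirect check (illustrative): follow each alias and confirm a healthy response.
import urllib.error
import urllib.request

BASE = "https://redirects.example.com"   # assumed internal redirect host
aliases = ["/redirects/video_help", "/redirects/component_docs_v2"]

for alias in aliases:
    url = BASE + alias
    try:
        # urlopen follows the 302 automatically; the final status should be 200.
        with urllib.request.urlopen(url, timeout=10) as resp:
            status, final = resp.status, resp.url
        print(f"OK   {alias} -> {final} ({status})")
    except urllib.error.HTTPError as err:
        print(f"FAIL {alias}: HTTP {err.code}")
    except urllib.error.URLError as err:
        print(f"FAIL {alias}: {err.reason}")
```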

Addressing Redirects Across Software Releases

What happens if a redirect breaks, or the target content changes after a software version is released?

By decoupling the hard-coded URL from the final destination, you’ve already protected yourself from most issues. All you need to do is update the redirect. You don’t need to patch the product.

However, for older versions or those with strict support policies, evaluate whether fixing the redirect aligns with your support model. For example, if a security bulletin is posted for a legacy product still used by clients, you can simply redirect to the latest info—even if the original software is years old.

Communication Strategy for Customers

If a redirect breaks or a customer reports an issue, your team can:

  • Quickly confirm the problem

  • Update the destination in the redirect

  • Inform the customer that it’s fixed—often within hours

This builds customer trust. You’re not just fixing issues—you’re responding fast and showing that your development process is future-proof.

You can also use redirects to track user behavior by analyzing which URLs are most clicked. This helps prioritize updates and shows what users care about.

Final Thoughts

Adopting a redirect policy may feel like extra effort at first. It requires planning, documentation, and an internal process for tracking links. But the long-term benefits far outweigh the cost. Once you’ve had to deal with the hassle of fixing a hard-coded URL in released software, you’ll understand just how valuable redirect flexibility can be.

This approach provides future-proofing, minimizes disruption, and improves your ability to respond to change quickly.

Don’t wait until a customer finds a broken link. Plan ahead. Build smart. And never hard-code a URL again.


Thursday, August 15, 2019

Keeping Up with Security Fixes and Patches in Software Development

Introduction

Every other day, headlines scream about another security breach. Hackers have stolen credit card data, passwords, or even social security numbers. These stories might seem distant, but for the organizations affected, the damage is real and often severe. The consequences range from customer data loss and reputation damage to layoffs and crashing stock prices. While billion-dollar companies might survive such shocks with minimal tremors, smaller or mid-sized businesses can face lasting consequences.

You might feel immune to such threats. Perhaps your project has never faced a major breach. Maybe you're not even on a hacker's radar. But security risks aren’t always about direct attacks. Sometimes, vulnerabilities lie hidden in third-party components or outdated libraries quietly integrated into your software—a ticking time bomb waiting to be exploited.

How Hidden Security Flaws Enter Your Project

Most modern software projects rely on a variety of external components. These include libraries, plugins, media decoders, frameworks, and even code snippets. It’s neither feasible nor efficient to write everything from scratch. Developers use these components to accelerate development, reduce costs, and integrate complex functionalities quickly.

A great example? Media decoders. Handling all image, audio, and video formats from scratch would be a massive undertaking. Instead, developers include libraries or use built-in OS-level capabilities. While convenient, these additions come with their own risks. Once an external component becomes part of your application, so does any vulnerability it carries.

The Real Risk of Inaction

Here’s the problem: if a flaw is found in a component you've used and the fix hasn't been applied (or your users haven’t updated yet), the vulnerability persists. Tools and scripts to exploit such holes are widely available, making it easy for even low-skill attackers to cause harm. And if a breach occurs due to an issue in your distributed software—even if the root cause is third-party—your customers will hold you responsible.

A Simple Example

Imagine your software includes a third-party component for parsing image formats. A security researcher finds a buffer overflow flaw in that component. The maintainers release a fix. But if you don’t integrate that fix, repackage your software, and distribute it promptly, users remain vulnerable. If someone launches an attack using a specially crafted image, the consequences could range from crashing the application to complete system compromise.

How to Stay Ahead of the Threat

You can’t eliminate risk entirely, but there are several effective strategies to manage it:

1. Component Inventory and Exposure Matrix

Maintain a detailed inventory of all third-party components used in your software. For each component:

  • Record its version.

  • Note its criticality to the application.

  • Evaluate whether it could be exposed in ways that attackers might exploit (e.g., input parsing, network interfaces).

This matrix should be accessible and updated regularly.
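A sketch of what such an exposure matrix can look like as structured data; the component names, versions, and ratings are made up for illustration.

```python
# Illustrative component inventory / exposure matrix (names and versions are made up).
component_inventory = [
    {
        "name": "libimage-parse",
        "version": "2.4.1",
        "criticality": "high",          # how central it is to the application
        "exposure": "parses untrusted user-supplied files",
        "owner": "Media team",
    },
    {
        "name": "internal-logging-lib",
        "version": "1.0.3",
        "criticality": "low",
        "exposure": "internal only, no untrusted input",
        "owner": "Platform team",
    },
]

# Reviewed regularly: list the components that deserve the closest monitoring.
watch_list = [c for c in component_inventory
              if c["criticality"] == "high" or "untrusted" in c["exposure"]]
for c in watch_list:
    print(f"Monitor {c['name']} {c['version']}: {c['exposure']}")
```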

2. Monitor Security Feeds and Vulnerability Alerts

Use tools or subscribe to feeds that alert you to vulnerabilities in the libraries or frameworks you use. Public trackers such as the NVD (National Vulnerability Database) and the CVE list offer real-time tracking of reported issues.

Assign a team member the responsibility of monitoring these sources and flagging any issues relevant to your project.

3. Establish Response Protocols

Define a pre-approved plan to respond to discovered vulnerabilities:

  • How critical is the flaw?

  • Does it require immediate action or can it wait for the next release?

  • Who investigates and verifies?

  • Who tests the patch and deploys the update?

Having a pre-determined strategy ensures a calm and measured response when problems arise.

4. Handle Legacy Releases Thoughtfully

This is a bit tricky. What happens when a vulnerability is found in an older release—say, a version two iterations back? You need to evaluate:

  • Do you still officially support that version?

  • What is the severity of the flaw?

  • What effort would be required to fix it?

If the flaw is minor and the release is obsolete, you might decide not to fix it. However, if many customers still use that version, and the vulnerability could cause significant harm, a patch or workaround might be necessary.

5. Define a Clear Communication Strategy

When a vulnerability is discovered, communication is key. Your customers need to:

  • Know that you are aware of the problem.

  • Understand the impact (or lack thereof).

  • Receive clear guidance on what to do next.

Sending timely updates, publishing knowledge base articles, and even issuing patches proactively builds trust and positions your organization as responsible and customer-focused.

Automation Helps, But Can’t Replace Strategy

Tools like Dependency-Check, npm audit, or automated scanners are excellent. They notify you when outdated or vulnerable packages are present. However, these tools only work if you integrate them into your build process and actually respond to the alerts. Technology can assist, but without policies and accountability, vulnerabilities will still slip through.
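Whatever scanner you use, the follow-through can be enforced in the build. The sketch below assumes an inventory file like the one above plus a curated advisories file maintained from the alerts your team receives, and it fails the build when a flagged version is still present.

```python
# Illustrative build gate: fail if any inventoried component matches a curated advisory.
import json
import sys

def load(path):
    with open(path) as f:
        return json.load(f)

def vulnerable(inventory, advisories):
    """Return inventory entries whose exact version appears in the advisories list."""
    flagged = {(a["name"], a["version"]) for a in advisories}
    return [c for c in inventory if (c["name"], c["version"]) in flagged]

if __name__ == "__main__":
    # File names are assumptions; they would come from your own repo layout.
    hits = vulnerable(load("component_inventory.json"), load("known_advisories.json"))
    for c in hits:
        print(f"VULNERABLE: {c['name']} {c['version']} - update before release")
    sys.exit(1 if hits else 0)   # non-zero exit breaks the build until addressed
```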

Best Practices Recap

  • Maintain an inventory of all external components.

  • Rate the risk level of each component.

  • Assign a team member to monitor vulnerability disclosures.

  • Define an internal process to assess and respond to each risk.

  • Decide how long older versions are supported and what patch policy applies.

  • Communicate clearly with customers when a flaw is identified.

  • Automate scanning wherever possible, but maintain manual oversight.

The Bigger Picture: Why This Matters

Security flaws impact more than just your application. They affect trust.

  • If a customer discovers a vulnerability before you do, their confidence is shaken.

  • If attackers exploit the flaw, the damage can go beyond your software to your brand.

  • If news of the breach spreads, legal, financial, and reputational harm could follow.

Being proactive about vulnerabilities isn’t just about code. It’s about credibility.

Conclusion

Security isn’t a one-time task; it’s a continuous process. With the speed at which threats evolve and the increasing use of third-party code, staying updated with security fixes and patches is more important than ever. By implementing structured processes, assigning clear responsibilities, and maintaining a strong communication line with your users, you significantly reduce your risk.

Treat security as a core feature of your software, not an afterthought. Because when trust is broken, no patch can fully fix it.


Tuesday, August 13, 2019

Partnership with an external party - quick prototyping / solution

A couple of years back, a major disaster took place. We had tied up with an external mobile-focused organization, larger than us, to provide a customized version of our software that would act as an entry point to their software. Such deals and partnerships happen all the time; the schedule was aggressive, but which schedule is not? Deals like these are the lifeblood of a growing organization: they develop partnerships and build credibility for future deals (even when a deal is under an NDA, you can still reference it for other partnerships without revealing information that would break the NDA).
So our marketing team, along with their technical team (and a few inputs from us), managed to seal the deal, and there were a lot of congratulations and happy faces. Since the schedule was aggressive, the work started right away. However, within a few days, as the proper design phase started, it became clear that there were problems with the probable design. The design changes needed were complex, and the technical resources allocated were far fewer than what the evaluated design actually required. With the passage of time, it became clear that things were only going to go downhill. It was an important collaboration and the organization was willing to add more resources, but the timeline was just not working out. After a series of frantic meetings, it was decided that rather than deliver an inadequate product, it was better to break the deal in time and still be able to maintain the relationship.
Given the importance of the deal and the critical need to avoid a situation like that in the future, there was a need to do a thorough analysis of where the problems lay. The analysis took some time, but it soon became clear that the level of interaction with the team that actually owned the product had been very low, and not enough time had been spent figuring out whether the solution was feasible in the timeframe and with the resources that could be committed. This kind of problem had never been this severe in the past, but now that it had happened once, it was clear it could occur again. So the partnership process had to be modified to include more time for a technical evaluation; for very large contracts, a prototype may even need to be built before the contract is signed off.


Friday, August 9, 2019

The importance of code walkthrough and reviews

When one studies software engineering, the importance of review-like activities is emphasized repeatedly. During the pressure of actual project work, however, there is always a push to reduce the amount of time spent on these activities; in many cases this is something the project manager might drive. There are a number of such review activities that happen during the course of a project. Here are some examples:
- Requirements / design review: While the requirements are being detailed, they need to be reviewed to ensure that they cover the entire feature set and are comprehensive. Once the design is done, experienced designers need to review the design, architecture, and related documents to ensure that the best possible solution has been chosen. In a number of projects I have seen, such reviews add a lot of value and lead to significant improvements in the design documents.
- Test plan / test case review: Once the design documents have been prepared, the testing documents get into high gear (in many cases, the test plan is started before the design documents are complete). The testing process is critical for weeding out defects and improper behavior in the software; if the test plans and cases are not comprehensive, the software will have problems. Hence a comprehensive review is necessary, in many cases with the developer also participating.
- Code walkthrough: Code walkthroughs are not done for all the code, typically only for the more critical sections. As the development process nears the end, this practice becomes even more important. There have been cases where a defect fix failed and caused more problems for the team while trying to figure out what went wrong. The advantages of a code walkthrough are considerable.
- Defect review: This does not happen in all cases. In many projects, when a defect is filed, it goes to a defect review committee that decides whether the defect is valid, whether it has the proper severity and priority, and whether it needs to be fixed at all. The committee can then decide whether to allocate the defect for fixing. This adds some overhead to the defect process, but it prevents unnecessary fixes and helps ensure that defects are allocated to the right people.
- Defect fix review: This can be critical, especially for defect fixes that touch core areas of the design or the architecture and need a proper review. It overlaps with the code walkthrough, but it is essential for ensuring that defects are properly fixed.

One can quibble about the amount of time needed for such process work, but reviews at the different stages are critical and need to happen.


Wednesday, August 7, 2019

Coordination with external teams - regular meetings to track progress and status

Sometimes when I review the posts I write, a topic seems so obvious that I wonder why there is any need to write about it at all; surely everybody already knows this. And then I come across cases where it becomes clear that some people do not, and that they run into exactly the problems described in these posts. So the idea is that people read them, see whether anything applies to their current situation, and make changes if required. If you are doing something better than what is written here, I would be really grateful if you could share it in the comments.
For any team that has been working for some time, especially in product development, there will always be a need to work with external parties. These can be other teams within the organization that depend on your product or provide you with a component, or teams outside the organization with whom you coordinate for either inputs or outputs. A simple example: our marketing team made a deal with a phone manufacturer to ship the product as a pre-loaded application on the phone (yes, the same type of product that many reviews call bloatware, and which users are sometimes unable to uninstall once they have the phone in their hands).
You might think this is a simple transaction: you provide the product, they incorporate it into their phone, and the schedule is the main thing to track. But life is rarely that easy. In many cases we had to add tracking parameters to the product so we could monitor how often users launched the pre-loaded copy (as opposed to getting the same product from another source, buying it from us, and so on). All of this requires coordination, and there can be numerous cases where such changes and coordination need to happen.
When you are dealing with teams that are outside of your organization, you should never make the mistake of assuming that they work with the same culture as teams within your organization.
Coordinating with such teams requires a proper system for tracking requirements, changes, issues and their closure, defects, and status, and for agreeing on acceptance testing (so that both teams agree on what is needed to finally declare the product good to go).
- Define responsibilities and contact details: who does what on either side
- Map ongoing work to those people, along with timelines and what the output should look like
- Set up regular meetings to discuss ongoing issues, the schedule, deliverables, action items, and escalations
- As work proceeds, adjust the mix of people in the discussions (for example, earlier stages may involve development and marketing people, while later, when work is underway, more development and testing people from both sides)

This is the general idea of how you go about working with people from outside your organization; the exact granular details of this coordination would vary from team to team, and from project to project.


Tuesday, August 6, 2019

Giving time for the testing effort

The testing process is one of the most fundamental parts of a software project. Any software that is built (or modified) will have defects in it. Even the most confident and skilled software developers would admit that defects creep in while they are writing their code (in fact, the best ones take part in the testing effort, working closely with the testing team to ensure that the team fully understands what has been done so they can do their best to weed all the defects out). So it is well understood that testing is needed to deliver a high-quality product to the end customer, and the testing process tries to ensure that most of the high-severity defects are found and fixed along the way.
The challenge is ensuring that there are enough resources for the testing process, and enough time for it as well. There can be a lot of pushback on this front from project managers and others in the management team, since the development and testing schedules take up a significant portion of the overall project cycle. In my experience, test team leaders face a fair amount of pressure to pull in their estimates and get that part of the project done early. If you speak to testers, their common refrain is that management typically does not include people with a testing background, does not really understand the work they do, and therefore pressures them a lot.
So what is the way to ensure that testing estimates are accepted, even if there are rounds of discussion and some estimates get refined (usually reduced, though occasionally the discussion pushes an estimate upwards)? Like many other issues that come up during estimation and planning, the answer is some amount of rigor combined with some amount of rough estimation.
How does a rough estimate come about? If you have experienced testing leaders, then when they look through the requirements (at a top level, since detailed requirements may not exist yet, but summary requirements will), they can give a fairly good rough estimate for the testing effort required, which can then be broken down into the number of people needed for the desired schedule.
Another way is to look at similar projects (many projects in large organizations are similar, which gives a good idea of the testing effort for a new project, at least as a point of comparison).
The rigor comes from preparing detailed testing estimates: taking the different requirements and breaking them down into test plans (more detailed test cases may not be possible given the state of the requirements at this point). A rigorous review of these plans gives a solid consolidated testing requirement, both for estimating the testing effort and for later, when these test plans form the basis for preparing more detailed test cases.
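The arithmetic behind such a rough estimate is simple enough to sketch; the test areas, per-plan efforts, and the regression allowance below are entirely illustrative numbers.

```python
# Rough test-effort estimate from summary requirements (all numbers are illustrative).
test_areas = {
    # area: (number of test plans expected, average person-days per plan)
    "Core workflows": (12, 2.0),
    "New feature X": (8, 2.5),
    "Installer / upgrade": (4, 3.0),
    "Localization spot checks": (6, 1.0),
}
regression_factor = 1.3   # assumed allowance for regression cycles and defect retests

effort_days = sum(plans * days for plans, days in test_areas.values()) * regression_factor
schedule_weeks = 8        # desired testing window
people_needed = effort_days / (schedule_weeks * 5)

print(f"Estimated effort: {effort_days:.0f} person-days")
print(f"Testers needed for a {schedule_weeks}-week window: {people_needed:.1f}")
```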


Wednesday, July 10, 2019

Interaction between product manager and usability expert

The product manager plays a role throughout the product development or project execution cycle. The product manager delivers requirements, discusses them with the feature teams, collaborates and provides clarifications during the design phase, and plays a key role during the development and testing cycles, defining what the flow for a feature should be when there is a lack of clarity in the development team (there will typically be some small part of the workflow, the screens, or the UI that was not well detailed during the requirements or design process and needs the product manager's input). In addition, most product managers I know do extensive testing of the product, primarily of new or modified features, and also spend time in the beta programs, discussing specific features with beta users, providing clarifications, or passing on the more severe defects.
The usability expert does not play as extensive a role throughout the cycle, but in the initial phases their inputs are critical. I remember a particular cycle in which we were doing a comprehensive redesign of the product, based on a summary of user issues and requests over the past few versions, and also because the product UI looked dated and needed to be refreshed (somewhat nebulous concepts, but you would not believe how well they sound when you pitch the idea to senior management). In such a case, the flow of ideas between the product manager and the usability expert started well before the requirements phase; in fact, it could start before the previous version was even out of the gate.
The usability expert and the product manager have a set of inputs that help them as they start their process, and for larger products the number of screens involved can be considerable, so they do need to prioritize. These inputs include:
- Complaints and suggestions from customers and on the user forums (especially those mentioned frequently),
- Inputs from the usability expert and the product manager themselves (show product screens to a usability expert and you can be sure they will have opinions on the workflow and the pluses and minuses of certain screens, and the product manager typically has a list of pet peeves about some screens in the product),
- Technical changes that require a modification to an existing screen or make an improvement possible. The components used for screen design may have changed in ways that force a screen modification, or a workflow that was desired earlier but not technologically possible may now be feasible,
- And other inputs that also lead to screen or UI modifications.
The process is somewhat cyclical: the usability expert typically lays down a new desired workflow, the product manager (and sometimes the product team) comments on it, and based on these discussions a new iteration is made. Because this may need to happen across many UI screens or workflows, the usability expert may work screen by screen rather than over several screens at the same time, so that different product teams can get started. This is where the product manager can prod and work with the usability expert, at least detailing preliminary requirements that the usability expert can fully flesh out later. It can be a challenge for the project manager to handle this kind of scheduling, but cooperation with the product manager helps make it smoother.


Thursday, July 4, 2019

Presentation - who should do the presentations ...

In previous posts, I have talked about the kind of data, graphs, and slides one should use in a presentation, especially when presenting to people in more senior positions. One has to be careful about what to present: give a top-level summary without overdoing the data, yet keep backup data and graphs for the queries that might come (having the data at your fingertips, or quick access to it, always comes off well and goes a long way in generating a positive impression).
The next important question is who should do the presentation. For a question such as this, there is no single correct answer. It really depends on the circumstances, on the members of the team, and so on. Here are some points to ponder:
- Importance of the presentation: Sometimes the presentation is really significant, for example when a new project is being launched and senior executives will be present at the kickoff. In such a case, you need to put your best foot forward, and there is no question of rotating different team members through the slot just to give them presentation experience; the presenter needs to be the best person for the job. On the other hand, if this is a regular meeting (many meetings are standard ones where not much change is expected but which are part of the regular schedule), you can have different team members present the whole thing, or break it up into parts handled by different people. There is no real problem in starting the meeting by introducing the team members and explaining who will present which part.
- Inclination: In every team there will be people who are interested in doing such presentations, because it gets them noticed and known by people outside the team, especially if they come across as confident and knowledgeable. On the other side, there will be people who are really not interested in presenting, and this is not something you can force on somebody.
- Specific ability: Sometimes you need to match a specific ability to the situation. One team member may be very good with data, able to understand the different data points and analyze them in various permutations and combinations (very useful for a review meeting that goes deep into coding data or defect analysis); on the other hand, for a meeting about a project kickoff, the various options and variables, and customer inputs, you need somebody who is clearer about the requirements, the options, and how the customers think. Everybody knows some of the details, but there are always specific team members who are more fluent in different parts of the project, and you should try to match these abilities to the meeting, unless it is a really routine one.


Tuesday, July 2, 2019

Focusing on the usability and ease of reading

Recently, I was driving past a gas station next to the highway, one I had passed before, and this time saw a new board announcing some new eating options. Given the speed at which I was driving, I would normally have been able to read the signs on the board, but the lettering was in a fancy script that I could not read (or rather, reading it would have taken more time than I had while driving past). I asked the other people in the car whether they could read the names of the outlets on the sign, and none of them could in the short time we had while passing by. This was not true of other signboards that were in plain, simple script rather than some fancy one.
While reliving this experience, my time in the IT industry came to the fore, along with all the discussions I have had with usability experts. If the gas station signboard had been designed after consulting a usability expert, they would have recognized the use case and had the board written in a way that people could read it while passing by, and maybe be attracted enough to stop before they had driven past the place.
And this is what usability is all about. When designing any new user screen, or redesigning an existing screen or user-facing UI, or anything similar, it is always important to consider how it will look to the users. When the design is being done by the people behind the development and testing teams along with the product manager, a usability expert needs to be consulted. It is important to emphasize this point because there have been so many cases where people who have worked on the product for a long time feel they know what the customer wants and resist what a usability expert recommends (I know of specific examples where the usability expert recommended changes to a workflow or a screen and the development team could not appreciate the changes or was very resistant to them).
One way to make sure the development team understands the need for usability is to get team members looking at user forums and defects logged by customers, as well as actively participating in beta programs and interacting with users - this can quickly change their perspective on what is important for the product.


Tuesday, June 11, 2019

Presentations - what data to present and how (contd..)

In the previous post (Data and graphs in a presentation), we talked about some of the data elements and graphs that would be shown in a presentation; such information is generated from a number of factors, the level to which these need to be shown as well as the detail depends on the audience of the presentation.
In this post, we talk about something that needs careful planning when making a presentation. When you have plenty of information and can produce great graphs, there is a tendency to overdo it and present too much data in graph form. This is especially problematic when you are presenting to people senior to you who really do not have the time or the inclination to go through multiple graphs presenting similar information. For example, if you are presenting the current status of your project, especially during the development phase, the main focus will be on defects - and an incredible amount of data is generated during this phase. For you and the colleagues who work on these defects day in and day out, a lot of that data may seem relevant. But if you present too many graphs, even well-packaged ones, it is still overkill. I have seen a case where the audience soon started saying, "Next, next" as soon as they saw another graph.
Such a reaction from an audience means you have practically lost them. You have to focus on the key data you need to present (and I am not trying to tell you what that key data is): talk to your colleagues, look at presentations made by other teams, talk to somebody more senior who has attended such presentations, and so on. Make sure you have done this homework. In one presentation, I saw around 15 different graphs on defects and defect resolution; that was way too much.
Try to settle on a small number of graphs for your presentation; it is fine to keep more detailed graphs in a separate deck or an appendix. There is a small chance that somebody will ask for more data or get curious about another metric, and having that graph handy shows that you are well prepared (at the same time, don't go overboard and keep dozens and dozens of graphs ready; that does not give a good impression either). In fact, during the presentation you can mention the key data points you are showing graphs for, and note that additional graphs are available if somebody wants more detail (these should typically be graphs your team is already generating to track defect, coding, or other metrics).


Thursday, June 6, 2019

Presentations - what data to present and how

This topic could actually fill a full book, since it depends on the type of presentation, the target audience, and so on. For example, if you are presenting to senior management, you would keep to bullet points, graphs of the data, and conclusions (while keeping all the detailed information at your fingertips, since you never know who will ask what question about which part of the presentation). If you are presenting to colleagues and team members, a high-level summary may still be shown, but a lot more data analysis needs to be discussed, along with its shortcomings; in some cases, follow-up meetings with select members of the audience may need to be set up as well. When people ask questions, they may be more precise about specific points of the data or the analysis, and it helps to have all the required information at hand.
Suppose, however, that we are still planning the presentation and need to figure out what kind of data to present and which graphs may be required; all of this needs to be worked out and finalized before the data is presented. There are several ways to design this initial presentation.
- Decide what information you need to present, which in turn drives the data elements you need in your graphs. For example, if you are presenting the current status of an ongoing project, one important data point would be the number of defects being found and fixed over time. There may be different ways of presenting this in the actual graph, including contrasting it with similar data from previous versions, but you now have an idea of the data points the graphs need to contain.
- Discussions with fellow presenters. In our case, a presentation was made on behalf of the team, so the fellow presenters were colleagues (I was a project manager, so this involved other project managers and the heads of the development and testing teams) with whom you could have extensive discussions about the kind of information or data points to present, along with the level of detail.
- In a number of cases (at least in mine), my boss was ultimately the person responsible for the team, so even though we made the presentation, the boss carried a fair degree of responsibility. You can be sure that if your presentation contained some blunder, there would be some (or many) uncomfortable words with the boss and a loss of the trust that had been placed in you.

Once you have settled on the data to show and the kind of graphs and analysis, there should be at least a couple of practice runs of the presentation with the team and the boss. You would not believe how a very confident team, quite happy with its presentation, can be shaken by some of the (genuine) questions that come up and force a modification of the presentation, whether to the graphs or the talking points.


Thursday, May 30, 2019

Ensuring you are kept in the loop for communication

Recently I got an email from another program manager; she was somewhat junior to me in terms of actual title, but we were both doing the same role (and that is what matters in the end, not a title). She was in charge of a team that delivers some modules that our project uses, we had worked with her team on several past deliveries, and the coordination between our teams was working well.
With a new request, one of our senior developers started a discussion with a senior developer from the other team, and this discussion continued for some time between the two of them until the developer from our team eventually included me. I put in my comments, talked about the schedule and so on, but did not take the elementary step of including the program manager from the other team. Around a week later she found out that she was not being included in a discussion about features, deliveries, and so on, and she sent me an email asking why she had been left out of discussions about a delivery that her group would eventually be tasked with making. I am sure she was having a similar conversation with the developer from her group. I had no great answer other than admitting that it was a mistake and that she should have been included from the start.
This is a tricky point: the level of involvement in discussions and the stage at which it should start. The dynamics vary from group to group. In some groups, the program manager or project manager comes in only when actual scheduling or commitments need to be made, with experienced developers carrying the discussion and bringing in the program manager at a much later stage; in other groups the dynamics are different. When the PgM or the PM needs to come in is not a reflection of the values or maturity of a group; it is just how the dynamics of that group have become established.
However, there is no denying that the PgM or PM does need to be involved at a certain stage; there are many factors that require inputs the developer may not have. At the extreme, the team may have been directed to do other work and hence would not be able to take on any new request; or there may be scheduling or resource conflicts, and it is typically the PgM who is in a position to look at these conflicts and work them out in coordination with others. Further, once the discussions reach a certain stage, there may be a need for regular interactions involving more than just these developers, and somebody needs to track the agreements and action items from these interactions or meetings. There can be a multitude of reasons why the PgM or PM needs to get involved, and it is best if that person is included early; they can then figure out their own level of involvement at different stages of the discussion.


Wednesday, April 24, 2019

Ensuring resources are allocated for the next version

The process of resource allocation (in this post, resources means people) during product development is tricky; because there are high costs associated with it, it requires careful planning, and sometimes circumstances can throw that planning out of the window.
For projects where people are assembled for a specific one-off effort, the situation is slightly simpler. There is a proper schedule for the project, and that schedule defines when which resources are required; this can be handled by identifying the resources and allocating them to the project at the required time (or in a staggered manner, with people continuing part of their existing work and slowly taking up more of the new project until they are fully on it).
However, consider product development where versions of the product are released on a periodic cycle. For simplicity, say the product is released every year, in October. During the course of the year, the resource requirements are not static. At the start of the cycle, during the requirements phase, the need for resources is lower; it increases during the design phase and peaks during the develop, test, and fix period; it is during this time that the phrase 'all hands on deck' is most apt. But as development and testing start to taper down, the product team needs to simultaneously start work on the next version. Identification of new features, the most critical fixes, and interactions with customers to find out which features or changes they most need all happen during this time frame, which usually starts before the previous version has shipped.
Even the more involved requirements and workflow design work, such as prototyping and developing sample user interfaces, takes time. If it is only started after the previous version has shipped, it will eat into the design and development time of the next cycle. The difficulty lies in assigning some of the more accomplished developers and some testers to this effort, since they will simultaneously be needed for critical defects and the like. Teams that have shipped multiple versions over the years learn how to do this: resource allocation needs to be fluid, with people moving from one version to another during the course of a week, or even during a work day (with the intention that these changes not become too chaotic, since that can unnerve even the most level-headed people). The program or project manager, the leads, and the product manager need to handle this process carefully, taking care not to fluster the people involved too much, and it will work just fine.


Tuesday, April 16, 2019

True status in the status report

The status report can be a very important document, or it can be something created merely as a matter of routine. I remember two very different usages in two different situations. In one case, the status report was reviewed by many members of management who had queries on some of the items, which reassured us that the report was valued and actually being read. It also pushed us to recheck the report before it was sent out, to make sure it was accurate and presented the status as of that point: not an optimistic or a pessimistic portrayal, but an accurate one.
Another case was in an organization that held various process certifications, and part of the certification required every project to generate status reports of different types which were sent to a central project management office; the idea was that anybody could find the status report of any project and review it for any given period. The problem I could see after a few weeks was that the project manager was drowning in the various status reports that had to be generated, and it was pretty clear that most of management would not have the bandwidth to review more than a couple in any detail.
However, the subject of this post is really the accuracy of the status report. Early on, when I was a novice project manager with a few months of experience, I would work with the leads to generate the status report; the problem was the level of maturity of everyone involved. Most people tend to see issues in a status report as something that reflects badly on them, so initially the report would contain the issue, but with a sugar coating about what the team was doing. The lesson came one day from a senior manager who had a discussion with me. His feedback was that the status report was supposed to report issues as they were, along with what the team could do to overcome them, not a sugar-coated version. Issues needed to be represented accurately, including those that could pose red risks to the project and needed immediate attention (whether from within the team or from people outside it, such as an external team on which there was a dependency).
This can get tricky. I remember the first time I generated a status report with a red item: I got called into a discussion with the development and testing leads and my boss, who were not very happy that an issue was listed in red. The expectation was that any red issue would be handled so that it was no longer red, but I held my ground. What we did agree on was that the day before my status report went out, or sometimes on the same day, I would send a quick communication if I saw a red item so that we could discuss it. That did not mean I would remove it, unless I was convinced that my assessment was unfair and the item was not actually red. This seemed to work for this team from then on.
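To make the idea a little more concrete, here is a minimal Python sketch of a status log that flags red items, so that the pre-report heads-up described above becomes a simple check rather than a memory exercise; the field names, statuses, and sample entries are made up for illustration and not taken from any real tool.

```python
# Minimal sketch of a status log with red/amber/green items.
# Field names, statuses, and sample data are illustrative only.
from dataclasses import dataclass

@dataclass
class StatusItem:
    title: str
    status: str        # "green", "amber" or "red"
    owner: str
    note: str = ""

def items_needing_heads_up(items):
    """Return the red items that should be discussed before the report goes out."""
    return [item for item in items if item.status == "red"]

log = [
    StatusItem("Feature X integration", "green", "dev lead"),
    StatusItem("External component dependency", "red", "PM",
               "Component team slipped; release date at risk"),
]

for item in items_needing_heads_up(log):
    print(f"Heads-up needed: {item.title} ({item.owner}) - {item.note}")
```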


Thursday, April 11, 2019

Ensuring the major features are delivered to testing early

Sometimes when I am writing these posts and review the content once I am done, it seems like I am writing about the most obvious of topics. But you would not believe the number of projects where there has been discord between team members, with the QE team complaining about features with a huge testing impact being delivered late; a significant number of end-of-project review meetings discuss how to ensure that major features are delivered early enough in the cycle to be shaken out as thoroughly as possible, well before the final deadlines.
What does this mean? In most software project cycles, some features are more substantial than others. It need not be a user-facing dialog or screen; it could be some kind of engine that works in the background but has a huge impact on the product (for example, in accounting software it could be the tax calculation code, or in a Photoshop-like application it could be the graphics engine), or it could be a brand new feature that is supposed to be the selling point of the new version.
In such cases, the future of the product depends on making sure that these significant features, engines, or pieces of code are thoroughly shaken out and tested, with major and medium-level defects found and fixed well in advance, so that these defects are not left for the last parts of the cycle (unfortunately, in many software cycles, even with the best of intentions, if not the best of planning, these features can drag right till the end).
There is an inherent problem in all this. When you have a new feature, a new engine, or anything else that is new, there is a good chance that it will contain more defects than a feature that has existed for a while and has already been tested extensively. Some of these defects may be severe enough that the product cannot be released until they have been found and fixed.
Another problem is that for new features, even with well-written requirements and test cases, there is the possibility of disagreement between the development and QE teams about a specific workflow, which could be something as minor as the exact wording of an error message or the situation in which it appears. Such disagreements can be resolved easily enough by the Product Manager, but all of this takes time and contributes to potential delays in the actual completion of the feature.
Further, such major changes have a higher impact on the localization and documentation aspects of the product; until the feature is fully ready and all medium and major defects have been found and fixed, these aspects cannot be fully wrapped up, and too much delay will have an impact on the overall schedule of the project.
None of this means that it will be easy to deliver these major features early; there may be schedule or dependency issues that delay the feature. But the planning should aim to deliver the feature as early as possible, and if it can be broken into parts that can still be tested to a reasonable level of confidence, one should target such a plan. Don't ignore this issue.


Wednesday, April 10, 2019

Costs of taking last minute defect fixes

You know what I am talking about; I even hinted in the last couple of posts at the dangers and problems involved in this situation. It is a genuine dilemma: no matter what you do, there is no clear right answer. Here are a couple of cases:
- You are a week away from the date when the cycle of testing and fixing stops and the product moves into wrapping up development activities and then into the release processes. By this time the testing team has wrapped up the major test cases and is carrying out the last stage of testing, with the hope that no major defect pops up. And would you believe it, a major defect does indeed emerge; a retest confirms that it is reproducible, and the defect review committee looks at it but, at this late stage, wants details of the proposed fix and the code changes, wants the code changes reviewed by multiple people, and wants the change delivered in a private build so that it can be tested thoroughly before being integrated into the main branch. Even with all this, it can seem dicey, since a major change has the potential to destabilize the entire system and code base.
Such a change, had it come just a few weeks earlier, would have been implemented easily enough.
- Now we get to the critical milestone timeline. Just a day is left before the testing and defect-fixing stage is wrapped up, and then you get such a defect. Everybody remembers Murphy's Law (if anything can go wrong, it will) at this stage, and the possibility of deferring the defect, or pushing it into the release notes to be fixed in the next release or a dot release, is actively considered. However, not every defect can be deferred: some defects can cripple the product, or at least make a particular workflow seem crippled, with the potential for a section of users to give the product a negative rating or to complain to product support and in user forums. So you have to accept that even at this late stage, some defects will still need to be fixed. You have to go through the same process you would have gone through had the defect been found a week earlier, but you need to put more resources on the review and try to speed it up. Further, if an internal milestone is impacted, you try to work out whether you can move that milestone without affecting the product release date (this is not a single-person decision; it needs to move through a few layers of management for approval, which is easier to obtain if your team has a good reputation). And you still have to work out whether there is an impact on the documentation and localization teams, and if so, how much their schedules will be affected.
Finally, you need to get a proper review done of whether there was a way such a defect could have been found earlier, so that you can hopefully avoid this kind of late-stage panic the next time.
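The gating described above can be written down as a simple checklist so that nothing gets skipped in the rush. Here is a minimal Python sketch of such a check; the gate names are purely illustrative and not tied to any particular team's process.

```python
# Minimal sketch of a late-fix gate checklist: before a last-minute fix
# is accepted, every gate should be satisfied. Gate names are illustrative.
LATE_FIX_GATES = [
    "proposed fix and code changes documented",
    "code changes reviewed by multiple reviewers",
    "fix verified in a private build before integration",
    "impact on documentation and localization assessed",
    "milestone / release-date impact approved by management",
]

def unmet_gates(completed):
    """Return the gates that have not yet been satisfied."""
    return [gate for gate in LATE_FIX_GATES if gate not in completed]

done = {
    "proposed fix and code changes documented",
    "code changes reviewed by multiple reviewers",
}
for gate in unmet_gates(done):
    print("Still pending:", gate)
```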


Sunday, April 7, 2019

Avoiding ascribing blame for last minute defects without a review

As a software development team reaches the last stages of a project, tension levels in the team can change drastically, mostly upwards. There is an anticipation that something may go wrong, something that could change the milestones and deadlines. When the team reaches the final days before the completion of the development and testing stage, every day of testing brings fresh anticipation, with the leads and managers hoping that the testing is thorough, yet that no major defect comes through that could impact the deadlines.
Any major or high-severity defect that surfaces near the end deadline has a potentially severe impact. The risk of not fixing it is that you release a buggy product, but any fix has the potential to cause an undesired change in functionality or introduce another defect, something that may not be caught easily. With deadlines looming, and unless more time is given, code reviews and impact testing can try to provide confidence that the fix has no adverse effects, but there is always a risk.
What I have seen is that this tension causes people to lose their composure when things start going wrong. For example, there was a case where a young tester found a severe defect almost at the end of the cycle, and there was no getting around the impact. A fix had to be made, its impact evaluated, multiple code review cycles run, and multiple testers used to check the affected areas; in the end, the deadlines were pushed out by a couple of days. One of the senior managers was very irritated by this and dressed down the QE lead over why the defect had not been caught earlier, almost blaming the QE team for not doing its job thoroughly.
Once the release was done, a review team went through the various development and testing documents and realized that there had been a mix-up right from the start, in the developer design documents that the QE team had in turn used to build their test cases. It was a lucky ad hoc test that found the defect. As a by-product of this review, the senior manager was advised that this kind of blaming does not help and can end up discouraging team members who were only doing their jobs, and doing them properly.


Saturday, April 6, 2019

Defining single point responsibilities for decision making during a software cycle

It seems like a simple management issue, having single points of responsibility for different functions, but you would be amazed at the number of teams that stumble on this when they are at a critical point in their schedule. Consider a team coming to the point where it has to declare that it is now clear of all major discovered defects (no team can find and fix every known defect; it is an impossible task, and the effort involved in trying to detect all bugs starts increasing exponentially once you reach a certain point). At this point, many teams start ad hoc testing, others start the process of releasing the product to the consumer, and so on. It is a major milestone for the product.
But who takes the call that the team is clear of all major defects? The key word here is 'major'. As long as testing is going on, issues will keep coming up, and they have to be dealt with. Depending on who looks at a defect, the classification of whether it is major or not can be handled differently, even with the best defect classification criteria in place.
I remember an issue from a couple of decades back. Almost at the last stage, when the team was ready to close down defect finding and fixing, a defect came up. It was interesting because it was serious, but it affected a small workflow that many considered non-critical, and some team members were fine with deferring it to a later dot release (the team was releasing periodic dot releases, so such defects could go into one of them).
At that point, during the day, we realized we were going around in circles trying to figure out whether we should fix it and take another build (with the subsequent testing of that build), or defer it and take it up later. There were strong opinions on both sides among the managers and leads in the team. We realized that we had never worked out an appropriate decision-making process for cases like this, and suddenly handing the decision to one person could have caused tension within the team. Ultimately we had to set up a meeting of the senior leaders of the team to thrash out a decision, taking into account the costs and the impact of both options.
The lesson we took from this was that we needed to better define the decision makers for specific situations. In this case, for the next time, we made the testing manager the decision maker on whether a last-minute defect was of sufficient severity to need fixing, with the understanding that if the QA manager did recommend fixing such a defect, they should also be able to justify its severity later.


Software product localization - there may be base product changes

People or teams who have experience releasing software products in multiple languages have typically gone through a lot of learning about how the nuances of different languages can force changes in the base-language product (in our case, as in most cases, the base language is English, and the product can be released in many other languages; for larger software products such as operating systems, MS Office, or Photoshop, that can be a very large number of languages).
For a team that has so far been releasing its product in one base language and is now trying to release it in other languages, however, this can be a fairly complex project. In simplistic terms, it is about making sure that all the strings used in the product (whether text on screens, on dialogs, in error messages, etc.) can be harvested, sent for translation, and then reincorporated into the product depending on the language in which it is being released.
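As a minimal illustration of this harvest, translate, and reincorporate idea, here is a sketch using Python's standard gettext module; the domain name "myapp" and the "locales" directory are assumptions for the example, and real products will use whatever externalization mechanism their platform provides.

```python
# Minimal sketch of string externalization with gettext.
# Strings wrapped in _() can be harvested (e.g. with xgettext),
# sent for translation, and loaded back per language at runtime.
# "myapp" and "locales" are assumed names for this example.
import gettext

translation = gettext.translation(
    "myapp", localedir="locales", languages=["de"], fallback=True
)
_ = translation.gettext

# The English source string stays in the code; the German (or Russian,
# Arabic, ...) text comes from the compiled catalog if one is present.
print(_("The file could not be saved because the disk is full."))
```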
Starting from this simple concept, things get more complicated as you actually execute the project. There are additional schedule requirements; there is a lot more work for the developers, since testing a product for localization reveals many changes that are required; there is a need for external people who can test the product in the different languages (both the language itself and the functionality of the various parts of the product under each language); and many other changes need to be planned (this post is not meant to be a full description of localizing a product for the first time; that is a massive endeavor requiring a lot of explanation). As an example, a short error message may turn out to be much longer in a language such as Russian or German, or may read from right to left in Arabic or Hebrew, and may not display properly as a result. Either the message needs to be rewritten or the error message box needs to be resized, which also has implications for the help manuals, which may need to be modified.
Ideally, a team planning to localize its product for the first time should draw on the learning that other teams and products have gained over their cycles, so it should either hire some people with the required development and testing experience, or at least have thorough discussions with teams that have done this before. Localizing a product for the first time can be done right, but it is not something to attempt without adequate preparation in terms of schedule and resources. Even once you have done that level of planning, you will still face challenges, but those should be fixable.


Wednesday, April 3, 2019

Taking the time to define and design metrics

I recall my initial years in software products, when there was less focus on metrics. In some projects the extent of it was accounting for defects and handling them, with the daily defect count kept on a whiteboard, and that was about it. Over time, however, this has changed. Software development organizations have come to realize that there is a lot of optimization that can be done, for example in these areas:
1. Defect accounting - how many defects are raised by each team and team member, how many of those defects end up being fixed versus withdrawn, how many people raise defects that are critical versus trivial, and so on (a small accounting sketch appears after this list).
2. Coding work and efficiency, code review records, number of lines of code written, and so on.
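As an illustration of the defect accounting mentioned in the first item, here is a minimal Python sketch that summarizes defects per reporter by outcome and severity; the record fields, names, and values are made up for the example.

```python
# Minimal sketch of defect accounting: counts per reporter, split by
# outcome (fixed vs. withdrawn) and severity. Sample data only.
from collections import Counter

defects = [
    {"reported_by": "asha", "severity": "critical", "outcome": "fixed"},
    {"reported_by": "asha", "severity": "trivial",  "outcome": "withdrawn"},
    {"reported_by": "ravi", "severity": "major",    "outcome": "fixed"},
    {"reported_by": "ravi", "severity": "critical", "outcome": "fixed"},
]

by_outcome = Counter((d["reported_by"], d["outcome"]) for d in defects)
by_severity = Counter((d["reported_by"], d["severity"]) for d in defects)

for (person, outcome), count in sorted(by_outcome.items()):
    print(f"{person}: {count} {outcome}")
for (person, severity), count in sorted(by_severity.items()):
    print(f"{person}: {count} {severity}")
```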
You get the idea: there are a number of ways in which organizations try to gather information about the processes within a software cycle, so that this information can be used to drive optimization as well as feed into employee appraisals. This kind of data provides a quantitative component of the overall appraisal discussion and, to some extent, gives the manager a basis for comparison between employees.
However, such information and metrics are not easy to come by and cannot be produced on the fly. Trying to create metrics while the project is ongoing, or expecting people to do it on top of their regular jobs, leads to sub-standard metrics or even wrong data, with not enough effort put into screening the resulting data to ensure it is as accurate as it can be.
Typically, a good starting point for ongoing project cycles is to hold reviews at regular intervals so that the team can contribute ideas about which metrics would be useful, and why. Another reference point is talking to other teams to see what metrics they have found useful. And during the starting period of a project, while the discussion stage is ongoing, there should be a small team or a couple of people who can work out the metrics that need to be collected during the cycle.
There may be a metrics engine or some other tool in use in the organization, and there may be a process for adding new metrics to that engine, or for enabling existing metrics for a new project; the effort and coordination for that also needs to be planned.
The basic point of this post: get metrics for your project, and design for them rather than treating them as an afterthought.


Sunday, March 31, 2019

Regular sessions with Product Management and Customers

One of the most common experiences I have had while interacting with people on product development teams, especially those who have been through two or three development cycles from start to end, is that they develop a feeling for what the features in the product should be, how the workflows should work, and so on. It can get tricky when they feel strongly about this, since they might incorporate their own preferences into the workflow they are implementing, and it can take some effort by the workflow designer or the product manager to get the design back to what was intended (this may not happen very often, but it is something to watch out for).
But when you get team members onto the beta release program, or have them monitor the user forums, or attend sessions where actual product users meet the development team, it can get interesting. Even when team members review feedback from the prerelease (beta) program, they can seem surprised at the type of defects or feature requests that come through. I have seen cases where team members almost dismiss these as defects worth deferring or as outside the general workflow, and they have to be reminded that these come from actual users, the ones who actually pay for and use the software.
The more perceptive team members welcome such interactions, since they give a great idea of how customers (or at least a section of them) actually use the product. The more they take part in these interactions, the more connected they are with how users actually work with the software; this in turn benefits the product, since they become more attuned to user requirements and, in fact, will actively hunt for them.
At the same time, it is important that all members of the team are exposed to such interactions. They help team members understand what users feel, and that their own perception of how a feature should behave, or how severe a defect is, can differ drastically from the users'; such interactions help reduce these differences and give a better understanding of what customers consider important in a feature. The next time a new feature is being designed, this makes the process much smoother and actually helps the product manager; for teams that follow the Scrum development methodology, this kind of customer-centric perception from the team members is absolutely essential and helps drive the process of feature delivery.
One example of this was a feature requirement detailed by a customer to the product management team, who wrote feature specifications for the development team. However, the developer was a senior developer with his own ideas of what the ideal feature should look like, and he proceeded to tweak the specification. It took a fair amount of rework to ensure that the feature was finally done as per the product management and customer requirement.


Sunday, March 24, 2019

Dedicated team for previous version releases

For software products that have had multiple releases over the years, there is a ticklish question about the resources that need to be dedicated to previously released product versions. This is even more critical for widely used software products such as Microsoft Office, Photoshop, and Acrobat. Organizations have stated policies about providing support for previous versions, but there is a cost associated with doing so. You need people who can do defect resolution and testing, and people who can run the entire build infrastructure, including releasing the software fix. This can add up to a pretty packet in terms of resources, which is especially painful during periods when there are no major fixes due.
Deployment on these previous-version support teams can also lead to tricky morale issues, given that people assigned to them may feel they have been put on a team that is less important than the one working on current features. To avoid such morale issues, teams can rotate people across these assignments, but that comes with its own problem of people not being able to develop the expertise that comes only with experience on the team.
And there is a problem with not having these previous-version support teams: for some organizations and products, that option simply does not exist; a full-fledged support team is required. Consider somebody using a previous version of MS Office or Adobe Acrobat when, suddenly, a security researcher announces a critical zero-day or similarly severe defect in the product that could allow an attacker to exploit the hole. These events can be a public relations disaster, and such blowback cannot be countered by public relations alone; there needs to be a team that can do quick, full-scale research and produce a fix, so that the organization develops a reputation for responding quickly. And it is not just a fix in a previous version: the same problem may exist in the version currently under development, and that team needs to take the research done by the previous-version team and incorporate the fix in the version under development.
A lot of teams I know slightly under-budget the team working on previous-version defects; if a problem requires more effort, additional people are deployed to work on the fix on a temporary basis (even though that has some impact on the current cycle, there is no real alternative). People assigned to such teams are typically not the most ambitious or the highest-rated members of the team. Further, the Product Management function needs to be involved on both sides, since much of the feedback in terms of defects or suggestions actually comes from customers or end users, and some of it may have value for the next version to be released.


Wednesday, March 20, 2019

Inter team - Pushing for your bug fixes

When you work in a somewhat larger software development organization, you will find numerous cases where teams depend on external teams for defect fixes. A simple example can explain. Say there are multiple teams that need a function for encoding and decoding music; there are many different audio formats, some with free solutions and others that are paid (and even among the paid solutions, some are very cheap while others, for specific audio formats, are very expensive). To complicate things further, each external solution has its own set of legal formalities and requirements which may or may not be easy for the organization to follow (some open source solutions are almost untouchable for typical software organizations because of their own stringent requirements, such as insisting that any software that uses them must itself be open source).
There can be numerous examples like this; we used to have a simple XML parser that almost every product needed, and as a result there was one team mandated to write such a parser and own the solution. Net net, where multiple teams need a common functionality, it makes sense for a central team to create and own it, update it as needed, and provide the required updates to all the teams that depend on it.
However, this dependency on a central team can be tricky. With every organization and every team working with limited resources, the question of priority comes in. Teams that are more important and critical to the organization realistically have more say in the release timeline of central components and, more importantly, in which bug fixes go into them.
For a team that ranks somewhat lower in priority, it can be a struggle to get the component with your desired bug fixes on your schedule, and realistically no amount of hollering or screaming is going to change that basic truth. However, you still need those bug fixes, so what do you do? Writing your own code to replace the component is not a simple solution: it may not be allowed, the resources or other costs may not be available, or your team may not even have the capability. Another option is to align your schedule with some of the higher-priority teams; at least you would then get a rock-solid component with some of the high-priority bug fixes in it. If that is not feasible, then the remaining method is to ensure your communication is top notch. The relevant people in your team (both management and technical) should be part of any mailing lists or discussion groups that cover the component, its features, and its defects. Similarly, you need to set up regular check-in meetings with the component team to ensure that your relevant defects are passed on with the required priority and severity, and you need to communicate regularly with the other team to keep your defects on their radar (including with their product management function, which decides on features and defect fixes). All of these measures help ensure that your required defects or features get highlighted; whether they make it in is still not guaranteed. It does help if you can get customer input about the defects or features, since that raises their importance.



Wednesday, March 6, 2019

Not every defect should be fixed

During the course of a software development project, whether an updated version of a product or a one-off project, the key currencies are primarily features and defects. Features and defects are what the team spends most of its time on during the development cycle, and at some of the more intense points of the cycle the team will be doing mostly defect finding, fixing, releasing fixes, and re-testing; in a Waterfall cycle, once a feature is released, it is bug fixing and more bug fixing.
These are busy times for the software team. In a number of development cycles, the number of defects in the product is such that it seems hard to believe they can all be fixed in the given time. This cycle of defect fixing primarily involves the developer and the tester. Some defects are simple: the feature as defined is not working and needs to be fixed by the developer. In other cases, though, there is ambiguity about how the feature works and whether it matches the definition; the feature definition may not cover all the cases, or there may be disagreement about how the feature was supposed to work versus how it was written.
The biggest problem is when such issues get resolved in a direction that was never intended, something the product manager had not planned for. It does not necessarily happen this way, but one has to prepare for the eventuality, or rather define a process so that it does not happen. The actual process needs to be defined by each team, since what works for one team may not work for another. For example, I once worked with a team that required all defects to be triaged by a bug committee, which decided whether each defect needed to be fixed and, if so, what the proper approach to fixing it would be (though the actual fix was something the developer and tester could decide). The reason this may not work for all teams is the sheer volume of defects that can come in, overwhelming the defect review committee and creating a backlog. Other teams may find such a process too heavy, preferring to trust the developers and testers not to make suspect fixes on their own, and to check with a defect committee or with the Product Manager before actually making a feature change or refinement.
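To make that routing concrete, here is a minimal Python sketch of one possible triage rule; the fields and categories are illustrative only, not a prescription. Straightforward "feature not working as defined" defects stay with the developer and tester, while anything that changes or reinterprets the defined behaviour is routed to the Product Manager or the defect committee.

```python
# Minimal sketch of a triage routing rule for incoming defects.
# Fields and categories are illustrative; every team will have its own
# definition of what needs committee or PM review.
from dataclasses import dataclass

@dataclass
class Defect:
    summary: str
    spec_is_clear: bool          # is the defined behaviour unambiguous for this case?
    fix_changes_behaviour: bool  # would the fix alter the defined workflow?

def route(defect: Defect) -> str:
    """Return who should decide on this defect under the sketched rule."""
    if not defect.spec_is_clear or defect.fix_changes_behaviour:
        return "Product Manager / defect committee"
    return "Developer and tester"

print(route(Defect("Save button does nothing", True, False)))
print(route(Defect("Error wording unclear when disk is full", False, False)))
```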
Either way, it is essential that this be discussed and decided with management and the team before proceeding; otherwise there is a chance that feature changes will happen solely at the developer and tester level.

