Wednesday, December 4, 2019

Mastering Risk Management in Project Leadership: A Practical Guide for Project Managers

In any comprehensive course on project management, one theme repeatedly emerges as central to project success: effective risk management. It's not simply a best practice—it is a core discipline that every competent project or program manager must master. Many seasoned professionals even argue that once a project is underway, risk management becomes the most critical and continuous area of focus.

Despite its importance, risk management often gets sidelined in the hustle of project execution. A large part of this is due to its subjective nature—risk isn’t always visible or easily quantifiable. However, subjective does not mean intangible. With the right processes and mindset, project managers can consistently identify, assess, and mitigate risks in a structured way.

Based on my own experience leading and mentoring project teams, I believe that there are two fundamental pillars of effective risk management:

1. Recognizing Common, Known Risk Areas

Every organization that operates at a mature level has a set of known risk factors that tend to repeat across different projects. These risks are often related to:

  • Schedule delays

  • Team attrition or sudden personnel transfers

  • Feature creep or uncontrolled scope changes

  • Budget constraints

  • Vendor reliability

These types of risks are considered "known knowns"—they're the usual suspects. A proactive project manager should have access to historical data or a shared risk register that documents past risks, their impact, and how they were mitigated.

A best practice here is to regularly review and update this organizational risk repository. This enables the team to stay ahead of predictable problems. For instance, if historical data shows a 20% increase in scope-related delays during Q4 due to end-of-year product push, your project schedule should already account for this.

Project managers must periodically assess these known risk areas throughout the lifecycle of the project. Risk logs should be living documents, not static checklists filed away after kickoff. If a known risk manifests because it was ignored or underestimated, the responsibility lies squarely with the project manager.

However, it is not uncommon for even experienced professionals to get caught up in daily operations, firefighting deliverables, and managing stakeholders. In doing so, they lose the mental bandwidth required to continuously review and assess known risk factors.

Avoiding this pitfall means embedding risk review into your routine processes. This could be as simple as adding a five-minute discussion point in weekly status meetings or setting aside 30 minutes each week to review the risk log and evaluate current triggers.

2. Navigating the Unknown: Identifying Emerging Risks

The second category of risk is much harder to pin down: the unknowns. These are risks that aren’t documented in any database. They haven’t occurred before, or they manifest in new, unpredictable ways. But make no mistake—they're just as real.

Consider a real-world example: your competitor suddenly launches a disruptive update to their product, forcing your team to recalibrate features that were in development. This, in turn, impacts timelines, resource allocations, internal communications, and possibly even the entire release strategy.

While you can’t predict every market move, you can put systems in place to surface emerging risks early. This involves:

  • Regular sync-ups with cross-functional leads and product managers

  • Encouraging a culture of transparency and early escalation

  • Tracking subtle signals from the field, such as customer support feedback, developer bottlenecks, or sales sentiment shifts

  • Reviewing change requests not just for technical feasibility but for strategic alignment

The key here is visibility. You can only mitigate what you can see, and the earlier the better. Every change request, every team concern, and every product pivot should be reviewed with a "what could go wrong?" lens.

To manage emerging risks effectively, project managers should use a hybrid approach combining traditional tools like a RAID log (Risks, Assumptions, Issues, and Dependencies) with more adaptive practices like lightweight agile retrospectives and real-time issue tracking platforms.

Building a Culture of Risk Ownership

Project risk management should never be a one-person responsibility. An effective project manager builds a risk-aware culture across the team. This means:

  • Encouraging team members to report potential risks without fear

  • Rewarding early detection of issues, even if they don’t materialize

  • Assigning clear ownership of risk items

  • Embedding risk impact discussions into change request reviews

By normalizing risk conversations, you reduce the stigma around raising concerns. This ensures that your team becomes an early warning system rather than a passive set of executors.

Integrating Risk Management into Daily Practice

Effective risk management doesn’t happen in isolation. It must be integrated into everyday project management activities. Here are a few best practices:

  • Risk Workshops: Conduct short risk brainstorming sessions at the start of each major phase.

  • Risk Review Cadence: Build a rhythm of reviewing the risk register weekly or biweekly.

  • Trigger-Based Tracking: Define the "early indicators" that might suggest a risk is developing.

  • Risk Scoring: Use a simple matrix to score risks based on probability and impact.

  • Scenario Planning: Consider “what-if” exercises to prepare the team for critical disruptions.

Over time, these habits not only reduce the number of surprises but also equip your team to respond more calmly and effectively when things do go sideways.
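The "Risk Scoring" practice above can be sketched in a few lines. This is a minimal probability-times-impact model; the 1-5 scales and the band thresholds are assumptions you should tune to your own organization's risk appetite, not a standard.

```python
# Minimal risk-scoring sketch: probability x impact on 1-5 scales.
# The band thresholds below are illustrative assumptions, not a standard.
def score_risk(probability, impact):
    """Return (score, band) for a risk, given 1-5 probability and impact."""
    score = probability * impact
    if score >= 15:
        band = "high"
    elif score >= 8:
        band = "medium"
    else:
        band = "low"
    return score, band

# Example: a likely, damaging risk lands in the "high" band.
print(score_risk(4, 4))   # (16, 'high')
```

Even a crude matrix like this is useful because it forces the team to compare risks on the same scale during the weekly review.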

Measuring Risk Management Success

One of the challenges in risk management is measuring its effectiveness. Unlike deliverables or velocity, risk mitigation doesn’t always have immediate, visible results. Still, you can track:

  • Number of risks logged and actively monitored

  • Percentage of risks mitigated before impact

  • Stakeholder satisfaction during crisis periods

  • Response time to emerging issues

You can also gather qualitative feedback post-project to evaluate how prepared the team felt and whether contingency plans were effective.
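One of the metrics above, the percentage of risks mitigated before impact, is easy to compute from even a simple risk log. A sketch, assuming each log entry records whether mitigation landed in time (the field name is illustrative):

```python
# Sketch: "% of risks mitigated before impact" from a simple risk log.
# The log structure and field name are assumptions for illustration.
risk_log = [
    {"id": "R1", "mitigated_before_impact": True},
    {"id": "R2", "mitigated_before_impact": True},
    {"id": "R3", "mitigated_before_impact": False},
]

def mitigation_rate(log):
    """Percentage of logged risks that were mitigated before they hit."""
    if not log:
        return 0.0
    hits = sum(1 for r in log if r["mitigated_before_impact"])
    return 100.0 * hits / len(log)

print(round(mitigation_rate(risk_log), 1))   # 66.7
```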

Common Pitfalls to Avoid

  1. Treating Risk Management as a Phase: Risk isn’t just for kickoff. It’s a continuous, adaptive process.

  2. Ignoring Soft Signals: Risks often start as subtle concerns before becoming showstoppers.

  3. Overengineering the Process: Keep tools and logs simple. Focus on actionability, not bureaucracy.

  4. Shifting Responsibility: Everyone owns risk, but the project manager is accountable for visibility and response.

  5. Not Updating the Plan: A risk register is a live document. If your plan never changes, you're likely missing real-time shifts.

Final Thoughts: Risk Is Inevitable, Unpreparedness Is Not

Every project, regardless of size or complexity, will encounter risks. The difference between successful and failed initiatives often lies in how well those risks are understood, communicated, and managed.

Project managers must resist the temptation to view risk management as optional or peripheral. It is, in fact, one of the most strategic capabilities you can develop as a leader. Done well, it not only protects timelines and budgets—it builds trust, boosts team morale, and enhances your reputation as a calm, reliable, and forward-thinking project professional.

So, the next time you lead a project, remember: risk isn’t the enemy. It’s a signpost. And how you respond to it will determine not just the outcome of your current initiative but the trajectory of your career.

You may not be able to follow everything listed above :-), but you should still evaluate what works best for you. And if something else works well for you, please share it in the comments below.




Tuesday, August 27, 2019

Tackling Feature Creep: Lessons in Effective Product Management and Project Delivery

When managing software projects, success doesn’t depend solely on having skilled individuals in key roles. It also hinges on how teams navigate scope, requirements, and real-time adjustments. Our experience with one such project showed us just how important structured planning and boundary-setting are—particularly when it comes to managing scope expansion, also known as feature creep.

We had a great Product Manager for one of our flagship software initiatives. She was highly knowledgeable, had strong working relationships with the product support team, and direct lines of communication with several of our enterprise clients. Her ability to gather customer feedback and translate it into actionable requirements made her an invaluable part of the project.

The design team appreciated how she worked with them to evolve high-level ideas into detailed specifications, facilitating the creation of high-level design documents (HLDs) that were both comprehensive and realistic. Moreover, she remained actively involved throughout the design and development phases, consistently available for clarifications, reviews, and feedback. Her dedication earned the trust of everyone on the team.

Yet, despite all these strengths, we continually ran into a frustrating issue: teams consistently found themselves rushing to meet final feature deadlines. On multiple occasions, developers worked weekends and late nights in a last-minute sprint. Remarkably, we never missed our deadlines by more than a day—and we always met the quality benchmarks. But the strain on the team was undeniable.

During project retrospectives, team members flagged this pattern, asking why it kept recurring and why we couldn't better plan for it. They pointed out that while commitment and hard work were admirable, this recurring last-minute push was unsustainable. Something needed to change.


Identifying the Root Cause of Project Pressure

To get to the bottom of the issue, we launched a structured investigation. There was always a chance that we had flawed time or effort estimation processes. Alternatively, some feared that certain developers might not have been contributing their fair share.

Two of our most experienced leads were tasked with reviewing the project documentation, HLDs, effort tracking sheets, and defect metrics. Their goal: identify where and why our estimations consistently fell short.

What we found was surprising—but also enlightening. Time spent on core tasks—requirement preparation, coding, testing, and documentation—was generally in line with projections. In a few instances, certain segments had a 20% overrun, but there was no clear pattern linked to specific individuals or phases.

The real issue? Feature creep.


Understanding Feature Creep in Project Environments

In project management, feature creep refers to the uncontrolled expansion of product features beyond the original scope. It usually happens incrementally—one small change here, one improvement there—until the cumulative impact becomes significant.

In our case, this occurred subtly. As HLDs were being prepared and development moved forward, suggested enhancements came in—some from the development team itself, and many from the Product Manager. These were almost always well-intentioned. They improved the product, addressed edge cases, or reflected late-stage customer feedback.

Because these changes seemed “minor” and “beneficial,” there was a tendency to implement them without formal impact analysis or schedule adjustment. No one wanted to push back. After all, we were building something better for the customer.

But over time, these small changes added up. They chipped away at buffers, consumed developer focus, and led to crunches near the end of each development cycle.


Changing the Process: Structuring Scope Management

Once we identified feature creep as a recurring issue, we knew we had to act. Continually burning out the team wasn’t an option. We needed to instill a discipline around how post-freeze changes were handled.

Our solution was simple but effective: after the design freeze, any new requirement—regardless of size—would be classified as a “feature enhancement.” These enhancements were treated like change requests or defects and entered into a formal review and approval process.

We set up a Feature Enhancement Review Board composed of tech leads, QA, and product representatives. They met weekly to review all proposed enhancements. Only after careful evaluation of the effort, risk, and impact on schedule would a change be approved.


Outcomes of the New Approach

This change immediately brought several benefits:

  1. Clarity and Visibility: Everyone could now see what was being added post-freeze and why.

  2. Better Decision-Making: We were able to weigh the customer benefit of a change against its impact on delivery timelines.

  3. Improved Accountability: Product suggestions weren’t automatically implemented; they were scrutinized just like technical defects.

  4. Informed Resource Planning: Teams could plan capacity with fewer surprises.

Perhaps most importantly, this new framework ensured that the final sprint before release wasn’t a chaotic, high-stress period. Developers could plan their time more predictably, and team morale improved as they regained a sense of control over their workloads.


The Role of the Product Manager: Balancing Value and Discipline

This experience also reshaped how we viewed the role of our Product Manager. Her instincts were always customer-first and value-driven—but even the best intentions can have unintended consequences.

By including her in the Feature Enhancement Review Board, we preserved her vital input while also encouraging a more strategic approach. Instead of recommending enhancements during active development, she began to note them for future releases unless the business case was strong enough to warrant immediate inclusion.

This helped her maintain her customer advocacy while contributing to better team performance and smoother deliveries.


Lessons for Project and Product Leaders

Every project faces the temptation to “just add one more thing.” But without guardrails, those additions become silent killers of time, focus, and quality. Our experience taught us:

  • Feature creep is often a process problem, not a people problem.

  • Good documentation and post-mortems are key to surfacing hidden patterns.

  • Formalizing how changes are proposed and reviewed encourages better planning.

  • Empowering the product team with structure—not restrictions—leads to stronger results.

Ultimately, the discipline of saying “not now” is just as important as the innovation of saying “what if?”


Conclusion: Managing Growth Without Losing Control

Software development is a dynamic process. Customer needs evolve, ideas improve, and developers discover better ways to build. But growth must be managed.

Feature creep may not always be obvious. It can masquerade as helpful suggestions, customer-centric improvements, or low-effort tweaks. But if not managed carefully, it erodes deadlines, impacts quality, and drains team energy.

Through formal tracking, cross-functional review, and a shared understanding of priorities, we transformed a recurring delivery issue into a point of strength. Our teams now deliver with greater confidence, and our products still evolve—with intention, not chaos.


Tuesday, August 20, 2019

Don't Hard-Code URLs in Software or Documentation: Use Smart Redirects Instead

Introduction

At first glance, a broken link may not seem like a major issue. But as we discovered firsthand, something as small as a non-functioning URL can highlight a deeper flaw in your development and documentation process. In the early versions of our software, we included direct, hard-coded URLs to external resources in our documentation and help pages. It seemed like a harmless shortcut—until we encountered a real-world consequence that made us completely rethink our approach.

The Problem Begins: A 404 That Uncovered a Systemic Flaw

A year after release, a customer reported a minor defect. One of the URLs in a help page was returning a 404 error. On the surface, this was a low-priority issue. But when we began reviewing it, we quickly saw that it was just the tip of the iceberg. That broken link pointed to an external help page for a third-party component we were using, and the organization behind that component had updated their site structure.

The result? The hard-coded URL we had embedded no longer worked.

This wasn't an isolated case—it exposed a critical weakness in our software design and documentation process. Our system relied on URLs that could change at any time, and we had no way to update them post-release.

Why Hard-Coding URLs Is a Bad Idea

While it might seem convenient to insert URLs directly into your software, documentation, or help files, doing so creates long-term maintenance and reliability issues. Here are just a few scenarios where hard-coded URLs can cause trouble:

1. External Websites Can Change

As with our initial issue, the structure of external websites is out of your control. If you're linking to third-party documentation or tools, there’s no guarantee those pages will remain at the same location. A restructuring, rebranding, or migration can instantly break all your references.

2. Internal Systems Evolve

Even internally, hard-coded links can be fragile. We once updated our internal Help System by moving to a new content management platform. This change altered our entire URL scheme. All previously working links were rendered useless, and fixing them manually would have required hours of work.

3. Page and Content Changes

Sometimes it’s more efficient to update where a link points rather than rewrite and republish several help pages. But when URLs are embedded directly in software or documentation, updating them becomes complex and error-prone.

4. Localization and Version Control Challenges

If you localize your documentation or maintain multiple versions of your product, hard-coded URLs complicate maintenance. Each version may have different content or links, leading to errors, inconsistencies, and duplicate effort.

The Better Solution: URL Redirection

To address this issue, we adopted a more robust strategy: use redirect URLs instead of hard-coded URLs. A redirect URL acts like a middle layer. Instead of pointing directly to the final destination, you point to a redirect link hosted on your own internal server. That redirect, in turn, forwards the user to the correct destination.

This approach gives you the flexibility to change the final target anytime, without needing to modify the software or re-release documentation.
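To make the middle-layer idea concrete, here is a minimal sketch of a redirect service using only the Python standard library. The aliases and destination URLs are illustrative assumptions; a real deployment would load them from your version-controlled redirect map and run behind your internal web server.

```python
# Minimal redirect-service sketch. Aliases and destinations are
# illustrative; load them from your version-controlled map in practice.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {
    "/redirects/video_help": "https://example.com/help/video",
    "/redirects/component_docs_v2": "https://example.com/docs/component/v2",
}

def resolve(path):
    """Return the destination URL for a redirect alias, or None."""
    return REDIRECTS.get(path)

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = resolve(self.path)
        if target:
            self.send_response(302)                 # temporary redirect
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "Unknown redirect alias")

# To run the service:
# HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

Updating a destination is now a one-line change to the `REDIRECTS` table (or its backing file), with no product patch required.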

Benefits of Using Redirect URLs

Implementing redirect URLs offers several advantages:

  • Flexibility: You can update the destination at any time without touching the software.

  • Centralized Control: All links can be tracked and managed from one place.

  • Reduced Defects: Fixing broken links no longer requires product patches.

  • Version Independence: You can change targets based on product versions or locales.

  • Long-Term Reliability: Even if external content moves, you remain in control of redirection.

Best Practices for Redirect Management

Using redirects effectively requires a structured approach. Here's what worked for us:

1. Create a Redirect Map

Maintain a detailed file that records every redirect URL, its usage, and the current destination. For each entry, include:

  • Redirect URL

  • Final destination

  • Usage context (help file, tooltip, etc.)

  • Requestor or owner

  • Date created or last modified

  • Comments or purpose notes

This file should be version-controlled in your source code management system, just like your software code.
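One convenient shape for such a version-controlled map is a plain CSV file. A sketch, with assumed column names matching the fields listed above; adapt the layout to your own process:

```python
# Sketch: load a version-controlled redirect map from CSV.
# Column names and entries are illustrative assumptions.
import csv
import io

REDIRECT_MAP_CSV = """alias,destination,context,owner,modified
/redirects/video_help,https://example.com/help/video,help file,docs-team,2019-08-01
/redirects/component_docs_v2,https://example.com/docs/component/v2,tooltip,dev-team,2019-08-10
"""

def load_redirects(csv_text):
    """Map each redirect alias to its current destination."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["alias"]: row["destination"] for row in reader}

redirects = load_redirects(REDIRECT_MAP_CSV)
```

Because the file is plain text under source control, every destination change shows up in the commit history, which doubles as the audit trail described below.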

2. Implement Change Tracking

Whenever a change is made to a redirect, log the change via a formal process—ideally as a tracked defect or feature request. This creates an audit trail, which helps during troubleshooting or reviews.

3. Host Redirects Internally

Use your internal web server or infrastructure for managing redirects. Avoid relying on external services for redirection unless you control them.

4. Use Meaningful Redirect Aliases

Instead of using random strings, use human-readable aliases for redirect URLs. This makes them easier to understand and manage. For example:

  • /redirects/video_help instead of /redirects/abc123

  • /redirects/component_docs_v2 instead of /redirects/xyz456

5. Test Regularly

Set up automated or scheduled testing to validate that all redirects are still functioning and pointing to valid destinations.
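A scheduled check can be as simple as fetching each redirect URL and recording the result. A hedged sketch using only the standard library; in practice you would iterate over your full redirect map and alert on failures:

```python
# Sketch of a scheduled redirect check: fetch each alias URL and
# report the HTTP status or the error encountered.
import urllib.request

def check_redirect(alias_url, timeout=10):
    """Return (alias_url, status), where status is an HTTP code or an error string."""
    try:
        with urllib.request.urlopen(alias_url, timeout=timeout) as resp:
            return alias_url, resp.status
    except Exception as exc:
        return alias_url, f"error: {exc}"

# Example loop over a redirect map loaded elsewhere:
# for url in redirects:
#     print(check_redirect(url))
```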

Addressing Redirects Across Software Releases

What happens if a redirect breaks, or the target content changes after a software version is released?

By decoupling the hard-coded URL from the final destination, you’ve already protected yourself from most issues. All you need to do is update the redirect. You don’t need to patch the product.

However, for older versions or those with strict support policies, evaluate whether fixing the redirect aligns with your support model. For example, if a security bulletin is posted for a legacy product still used by clients, you can simply redirect to the latest info—even if the original software is years old.

Communication Strategy for Customers

If a redirect breaks or a customer reports an issue, your team can:

  • Quickly confirm the problem

  • Update the destination in the redirect

  • Inform the customer that it’s fixed—often within hours

This builds customer trust. You’re not just fixing issues—you’re responding fast and showing that your development process is future-proof.

You can also use redirects to track user behavior by analyzing which URLs are most clicked. This helps prioritize updates and shows what users care about.

Final Thoughts

Adopting a redirect policy may feel like extra effort at first. It requires planning, documentation, and an internal process for tracking links. But the long-term benefits far outweigh the cost. Once you’ve had to deal with the hassle of fixing a hard-coded URL in released software, you’ll understand just how valuable redirect flexibility can be.

This approach provides future-proofing, minimizes disruption, and improves your ability to respond to change quickly.

Don’t wait until a customer finds a broken link. Plan ahead. Build smart. And never hard-code a URL again.


Thursday, August 15, 2019

Keeping Up with Security Fixes and Patches in Software Development

Introduction

Every other day, headlines scream about another security breach. Hackers have stolen credit card data, passwords, or even social security numbers. These stories might seem distant, but for the organizations affected, the damage is real and often severe. The consequences range from customer data loss and reputation damage to layoffs and crashing stock prices. While billion-dollar companies might survive such shocks with minimal tremors, smaller or mid-sized businesses can face lasting consequences.

You might feel immune to such threats. Perhaps your project has never faced a major breach. Maybe you're not even on a hacker's radar. But security risks aren’t always about direct attacks. Sometimes, vulnerabilities lie hidden in third-party components or outdated libraries quietly integrated into your software—a ticking time bomb waiting to be exploited.

How Hidden Security Flaws Enter Your Project

Most modern software projects rely on a variety of external components. These include libraries, plugins, media decoders, frameworks, and even code snippets. It’s neither feasible nor efficient to write everything from scratch. Developers use these components to accelerate development, reduce costs, and integrate complex functionalities quickly.

A great example? Media decoders. Handling all image, audio, and video formats from scratch would be a massive undertaking. Instead, developers include libraries or use built-in OS-level capabilities. While convenient, these additions come with their own risks. Once an external component becomes part of your application, so does any vulnerability it carries.

The Real Risk of Inaction

Here’s the problem: if a flaw is found in a component you've used and the fix hasn't been applied (or your users haven’t updated yet), the vulnerability persists. Tools and scripts to exploit such holes are widely available, making it easy for even low-skill attackers to cause harm. And if a breach occurs due to an issue in your distributed software—even if the root cause is third-party—your customers will hold you responsible.

A Simple Example

Imagine your software includes a third-party component for parsing image formats. A security researcher finds a buffer overflow flaw in that component. The maintainers release a fix. But if you don’t integrate that fix, repackage your software, and distribute it promptly, users remain vulnerable. If someone launches an attack using a specially crafted image, the consequences could range from crashing the application to complete system compromise.

How to Stay Ahead of the Threat

You can’t eliminate risk entirely, but there are several effective strategies to manage it:

1. Component Inventory and Exposure Matrix

Maintain a detailed inventory of all third-party components used in your software. For each component:

  • Record its version.

  • Note its criticality to the application.

  • Evaluate whether it could be exposed in ways that attackers might exploit (e.g., input parsing, network interfaces).

This matrix should be accessible and updated regularly.
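In its simplest form, the matrix can live as structured data next to your build files. A sketch with entirely illustrative component entries; the point is that once the inventory is machine-readable, questions like "which critical components are externally exposed?" become one-liners:

```python
# Sketch of a component exposure matrix. All entries are illustrative.
components = [
    {"name": "image-parser", "version": "2.3.1", "critical": True,
     "exposure": ["input parsing"]},
    {"name": "http-client", "version": "1.9.0", "critical": True,
     "exposure": ["network interface"]},
    {"name": "logging-lib", "version": "4.0.2", "critical": False,
     "exposure": []},
]

def high_risk(components):
    """Components that are both critical and exposed to external input."""
    return [c["name"] for c in components if c["critical"] and c["exposure"]]

print(high_risk(components))   # ['image-parser', 'http-client']
```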

2. Monitor Security Feeds and Vulnerability Alerts

Use tools or subscribe to feeds that alert you to vulnerabilities in the libraries and frameworks you use. Public vulnerability databases and vendor security bulletins offer real-time tracking of reported issues.

Assign a team member the responsibility of monitoring these sources and flagging any issues relevant to your project.

3. Establish Response Protocols

Define a pre-approved plan to respond to discovered vulnerabilities:

  • How critical is the flaw?

  • Does it require immediate action or can it wait for the next release?

  • Who investigates and verifies?

  • Who tests the patch and deploys the update?

Having a pre-determined strategy ensures a calm and measured response when problems arise.

4. Handle Legacy Releases Thoughtfully

This is a bit tricky. What happens when a vulnerability is found in an older release—say, a version two iterations back? You need to evaluate:

  • Do you still officially support that version?

  • What is the severity of the flaw?

  • What effort would be required to fix it?

If the flaw is minor and the release is obsolete, you might decide not to fix it. However, if many customers still use that version, and the vulnerability could cause significant harm, a patch or workaround might be necessary.

5. Define a Clear Communication Strategy

When a vulnerability is discovered, communication is key. Your customers need to:

  • Know that you are aware of the problem.

  • Understand the impact (or lack thereof).

  • Receive clear guidance on what to do next.

Sending timely updates, publishing knowledge base articles, and even issuing patches proactively builds trust and positions your organization as responsible and customer-focused.

Automation Helps, But Can’t Replace Strategy

Tools like Dependency-Check, npm audit, or automated scanners are excellent. They notify you when outdated or vulnerable packages are present. However, these tools only work if you integrate them into your build process and actually respond to the alerts. Technology can assist, but without policies and accountability, vulnerabilities will still slip through.

Best Practices Recap

  • Maintain an inventory of all external components.

  • Rate the risk level of each component.

  • Assign a team member to monitor vulnerability disclosures.

  • Define an internal process to assess and respond to each risk.

  • Decide how long older versions are supported and what patch policy applies.

  • Communicate clearly with customers when a flaw is identified.

  • Automate scanning wherever possible, but maintain manual oversight.

The Bigger Picture: Why This Matters

Security flaws impact more than just your application. They affect trust.

  • If a customer discovers a vulnerability before you do, their confidence is shaken.

  • If attackers exploit the flaw, the damage can go beyond your software to your brand.

  • If news of the breach spreads, legal, financial, and reputational harm could follow.

Being proactive about vulnerabilities isn’t just about code. It’s about credibility.

Conclusion

Security isn’t a one-time task; it’s a continuous process. With the speed at which threats evolve and the increasing use of third-party code, staying updated with security fixes and patches is more important than ever. By implementing structured processes, assigning clear responsibilities, and maintaining a strong communication line with your users, you significantly reduce your risk.

Treat security as a core feature of your software, not an afterthought. Because when trust is broken, no patch can fully fix it.


Tuesday, August 13, 2019

Avoiding Pitfalls in External Partnerships: Lessons in Prototyping and Feasibility

In today’s rapidly evolving technology landscape, partnerships between organizations are not only common but necessary. They help companies expand their capabilities, improve their market presence, and share resources. However, not all collaborations go smoothly. Some run into critical issues that could have been avoided with better planning and foresight. This article recounts one such real-world experience and offers actionable strategies to avoid similar setbacks in the future.


The Background: A High-Stakes Deal with Promise

A few years ago, our organization entered into a promising partnership with a larger mobile-based company. The idea was simple and powerful: we would provide a customized version of our software, which would act as a gateway to their platform. The potential of such a collaboration was immense. Not only would this help establish credibility in the market, but it would also provide a platform for future deals and partnerships.

The marketing and sales teams from both sides worked tirelessly to iron out the details. Our technical team contributed as needed, and before long, the agreement was sealed. There were celebrations, back-pats, and high hopes. As with many such collaborations, the schedule was tight—but then again, what enterprise deal isn’t?


Early Red Flags: Complex Designs and Resource Constraints

The issues began to surface almost immediately after the initial excitement wore off. As the formal design phase kicked off, we noticed that the design requirements were more complex than initially anticipated. It wasn't just about repackaging our existing solution; the customizations required deep architectural changes.

Compounding the issue was a lack of technical resources. The initial resource commitment, based on the early scope discussions, was grossly inadequate once the actual requirements came to light. Despite increasing the team size, it became evident that we were not going to meet the aggressive timeline.


The Critical Decision: Break the Deal or Deliver an Incomplete Product?

A series of urgent meetings followed. Executives, project managers, architects—everyone was pulled in to evaluate options. After weighing the pros and cons, a decision was made: rather than delivering an incomplete and potentially damaging solution, it would be better to step back and break the deal.

It was not an easy decision. There were risks to our reputation and fears about the impact on future collaborations. But ultimately, preserving the long-term relationship and ensuring product integrity took priority.


Post-Mortem: Where Did It Go Wrong?

With the dust settled, it became clear that a post-mortem analysis was essential. And the results were illuminating. The most glaring issue? A lack of deep technical engagement before the deal was signed. While our marketing and sales teams did a commendable job sealing the deal, there was limited interaction with the actual product owners and technical leads.

We had entered the agreement without fully understanding the feasibility of the required customizations. The timelines and resource commitments were made based on superficial knowledge of the technical scope.


Key Takeaways and Strategic Fixes

1. Mandatory Technical Feasibility Review

Every future collaboration would now require a dedicated phase for technical feasibility. This means having the engineering team review the requirements in detail and provide input on timelines, risks, and resource needs.

2. Build Prototypes for Large Deals

For major contracts, we instituted a policy of building quick prototypes. This not only helps validate technical assumptions but also acts as a proof-of-concept to show the partner what’s possible. A working model beats a thousand PowerPoint slides.

3. Cross-Functional Planning

Deals are no longer closed without joint sessions involving marketing, sales, engineering, and product management. Everyone must sign off before moving forward.

4. Realistic Resource Commitment

No more best-case assumptions. Project plans now factor in possible delays, developer onboarding, testing cycles, and quality assurance.

5. Transparent Partner Communication

We now maintain complete transparency with our partners. If something can’t be done in a specific timeline, we communicate it clearly. Most clients appreciate honesty over surprises.


The Bigger Picture: Why Prototyping Matters

In situations where rapid development and deployment cycles are the norm, prototyping plays a crucial role. It allows teams to:

  • Identify integration issues early

  • Get real-time feedback from stakeholders

  • Validate assumptions and reduce rework

  • Improve team alignment

By implementing quick and iterative prototypes, development teams can reduce time-to-market and improve product quality.


Conclusion: A Lesson Worth Remembering

While our failed partnership was a difficult experience, it became one of our most valuable lessons. It reshaped how we evaluate deals, plan timelines, and collaborate across teams. Most importantly, it taught us the importance of upfront technical validation and the role of prototyping in making smarter business decisions.

In a world where speed often trumps caution, taking the time to do a proper feasibility check can make all the difference.


📺 YouTube Resources to Learn More:

Software Development Partner: 10 Considerations Before Hiring




Establishing a Technology Partner Program 







Disclaimer: This article is for informational purposes only and reflects a personal experience. Every partnership is unique and should be evaluated based on its specific context.


Friday, August 9, 2019

The Importance of Code Walkthroughs and Reviews in Software Development

In the world of software engineering, the value of structured review processes—like walkthroughs, code reviews, and requirement validations—is a topic that comes up often in academic settings. Students are taught that peer reviews, design validations, and test plan evaluations are essential components of high-quality development. But when real-world project pressures begin to mount, these structured activities are often the first to be cut or minimized.

Why? Often, project managers push to reduce perceived overhead to meet aggressive deadlines. The result is a project that may hit timeline goals but suffers from bugs, misaligned features, or unstable architecture down the line. Let’s dive deeper into various review types and examine why they matter at every stage of software development.


✅ Requirements and Design Review

The earliest review point in any software project occurs during requirements gathering and design planning. Here's why these are critical:

  • Requirements Review: Ensures that functional and non-functional requirements are complete, unambiguous, and agreed upon by all stakeholders. Overlooking this step can lead to costly changes later.

  • Design Review: Allows experienced architects and developers to scrutinize the proposed architecture. Questions like "Is this scalable?", "Does this integrate well with our existing modules?", or "Can this be simplified?" are raised.

Real Impact: In several projects I’ve overseen, design reviews led to architectural simplifications, which made implementation easier and performance stronger.


🧪 Test Plans and Test Cases Review

Testing is your quality gate. But what ensures the quality of the test cases themselves?

  • Test Plan Review: Ensures that testing objectives align with product requirements. Missing out on corner cases or performance scenarios can result in critical defects reaching production.

  • Test Case Review: Detailed test cases should be reviewed by both developers and testers. Developers understand the logic deeply and can point out missing validation steps.

Developer Involvement is Key: Developers might know hidden limitations or design shortcuts, and their involvement helps testers create more realistic scenarios.


🔍 Code Walkthroughs

A code walkthrough isn’t about blaming—it’s about understanding and improving.

  • Purpose: Typically done for complex or high-impact sections of the codebase.

  • Timing: Often scheduled at the end of a sprint or right before major merges.

Benefits:

  • Improves code readability and maintainability.

  • Detects logical errors or performance bottlenecks early.

  • Encourages knowledge sharing between team members.

Case Study: In one situation, a critical module suffered from repeated defects. Post-implementation code walkthroughs revealed poor exception handling and lack of logging, which were then corrected.


🐞 Defect Review

Not every reported defect should be fixed immediately. That’s where a structured defect review process can help.

  • Defect Committee Review: Validates whether the defect is real, reproducible, and impactful. Some reported issues might stem from user misunderstanding or edge cases that don't warrant immediate attention.

Key Benefits:

  • Prevents unnecessary fixes.

  • Helps in prioritizing high-severity issues.

  • Balances developer workload.

Efficiency Tip: Record defect metrics like how many defects were rejected or deferred. This helps refine QA processes.
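As a rough illustration of the tip above, here is a minimal Python sketch that summarizes defect-review outcomes. The record shape and `status` values are hypothetical, not taken from any particular bug tracker:

```python
from collections import Counter

def triage_metrics(defects):
    """Summarize defect-review outcomes (counts and rates per status)
    to help refine the QA process over time."""
    counts = Counter(d["status"] for d in defects)
    total = sum(counts.values())
    return {status: {"count": n, "rate": round(n / total, 2)}
            for status, n in counts.items()}

# Example: 10 reviewed defects with illustrative outcomes
defects = ([{"status": "fixed"}] * 6
           + [{"status": "rejected"}] * 3
           + [{"status": "deferred"}] * 1)
metrics = triage_metrics(defects)
print(metrics["rejected"]["rate"])  # 0.3
```

Tracking rejection and deferral rates release over release is what turns a defect committee from a gatekeeper into a feedback loop for the QA process.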


🔧 Defect Fix Review

Sometimes, fixing one bug introduces two more. This is especially true for legacy systems or tightly coupled codebases.

  • Fix Review: Especially critical when touching core modules or integrating new components.

  • Overlap with Walkthroughs: These reviews often double as code walkthroughs for patches.

Why It Matters: A seemingly simple null check might affect validation rules elsewhere. Peer reviews catch these issues before they go live.


📊 Are Reviews Time-Consuming?

Many teams worry about the overhead. But it’s important to compare short-term time cost with long-term stability and reduced defect rates.

  • A one-hour review might prevent days of debugging.

  • Improved code quality leads to better team morale and reduced burnout.

Pro Tip: Use lightweight tools like GitHub PR reviews, automated style checkers, and static analysis tools to enhance the review process without overburdening the team.
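To give a flavor of how lightweight such automation can be, here is a hedged Python sketch using the standard-library `ast` module to flag functions without docstrings. A real setup would lean on established linters and CI hooks; this is just the kind of cheap static check that can run before any human looks at the code:

```python
import ast

def missing_docstrings(source: str) -> list:
    """Flag functions that lack a docstring -- a tiny static check
    of the kind that can run automatically before human review."""
    tree = ast.parse(source)
    return [node.name
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and ast.get_docstring(node) is None]

code = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''
print(missing_docstrings(code))  # ['undocumented']
```

Checks like this free the human reviewers to spend their hour on design and logic rather than style nits.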


🚀 Final Thoughts

Reviews may feel like slowdowns in the high-speed world of software releases. But in reality, they serve as powerful guardrails. Incorporating them consistently across your SDLC (Software Development Life Cycle) reduces risk, improves communication, and leads to better software products.

Whether you are a startup racing to launch your MVP or an enterprise handling millions of transactions, structured code walkthroughs and reviews can be the difference between success and disaster.

Don't skip them. Plan for them. Respect them.


📚 Further Learning and References

🎥 YouTube Videos Explaining the Concept


Code review best practices



Code Review Tips (How I Review Code as a Staff Software Engineer)



Code Review, Walkthrough and Code Inspection





Wednesday, August 7, 2019

Coordination with External Teams – Why Regular Meetings Matter

Sometimes, when I review the posts I write, I wonder—why even bother documenting something so obvious? Surely everyone already knows this, right? But then real-world experience kicks in. Time and again, I come across situations where professionals, even experienced ones, fall into issues that were already covered in one of these posts. That’s when I realize the importance of capturing even the seemingly obvious practices.

The goal of this post isn’t to restate the basics but to help individuals reflect on their processes. If you're doing something better than what’s mentioned here, I would genuinely appreciate it if you shared it in the comments. These insights help all of us grow.


📌 The Reality of External Coordination

For any team—especially those working on product development—it is inevitable that you will need to work with external parties. These could be:

  • Internal teams within your organization that depend on your deliverables or supply essential components.

  • External vendors or partners—third-party developers, marketing agencies, manufacturers, etc.

Let me give you an example. Our marketing team once struck a deal with a phone manufacturer to preload our app on their devices. At first glance, this seemed straightforward—just give them the APK and you’re done. But the reality? Far more complex.

We had to integrate special tracking parameters to monitor usage statistics:

  • How often the app was used if preloaded

  • How it compared to installs from other sources

This required not just technical changes, but intense coordination. And it’s one of the many examples where assuming things will “just work” can lead to missed deadlines or poorly tracked deliverables.


🛠️ Challenges in Cross-Organization Coordination

When you're dealing with external teams, one big mistake is assuming their work culture and structure mirrors yours. This assumption can be costly.

You need to:

  • Clarify deliverables

  • Map roles and responsibilities

  • Track timelines accurately

  • Define escalation paths

Communication gaps, time zone issues, different management styles—these can all derail a project if not actively managed.


✅ Best Practices for Effective External Coordination

Here are some core practices to adopt when managing collaborations with teams outside your organization:

1. Define Clear Responsibilities

Start by identifying stakeholders on both sides:

  • Who owns which part of the work?

  • Who is the decision-maker?

  • Who handles testing, approvals, or rollbacks?

Have a contact matrix or ownership chart. Ensure it's documented and shared.

2. Establish Clear Communication Channels

Create dedicated channels for formal communication:

  • Email threads with clear subject lines

  • Slack or Teams channels for informal queries

  • Project management tools (like Jira or Trello) to track progress

Avoid mixing multiple discussions in a single thread—it leads to confusion.

3. Set Regular Meetings

Regular sync-ups are crucial. These meetings help:

  • Resolve roadblocks early

  • Ensure accountability

  • Track action items and outcomes

Depending on the project phase, these could be:

  • Weekly status meetings

  • Daily standups (during integration or release phase)

  • Ad hoc calls for urgent issues

4. Phase-Wise Role Adaptation

In the early stages, marketing, legal, and business development people might be heavily involved. As you transition into development, QA and release engineers take over. Ensure that:

  • The right people are in meetings

  • Transitions are smooth

5. Track Deliverables and Dependencies

Have a shared tracker (Excel, Notion, Jira, etc.) that both teams update. Include:

  • Milestones

  • Deadlines

  • Blockers

  • Review comments

Maintain visibility. Transparency prevents finger-pointing.
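To make the idea concrete, here is a minimal sketch of the kind of structure such a shared tracker captures. The field names and sample entries are illustrative only, loosely based on the preload example earlier in this post:

```python
from dataclasses import dataclass, field

@dataclass
class Deliverable:
    name: str
    owner_team: str          # which side owns this item
    due: str                 # e.g. "2019-08-30"
    blockers: list = field(default_factory=list)
    done: bool = False

def open_blockers(items):
    """List every unresolved blocker across the shared tracker."""
    return [(d.name, b) for d in items if not d.done for b in d.blockers]

tracker = [
    Deliverable("APK with tracking params", "our team", "2019-08-15",
                blockers=["analytics SDK approval"]),
    Deliverable("Preload test devices", "partner", "2019-08-20"),
]
print(open_blockers(tracker))
# [('APK with tracking params', 'analytics SDK approval')]
```

Whether this lives in Jira, Notion, or a spreadsheet matters less than both teams updating the same copy; the structure is the point.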

6. Issue Management and Escalations

Not all issues can be resolved at the same level. Define:

  • What constitutes a blocker

  • Who gets informed

  • Expected resolution times

Escalation should be a process, not a panic button.

7. Define Acceptance Criteria Early

To avoid disputes, both parties must agree on what “done” means. Define:

  • Functionality expectations

  • Performance benchmarks

  • Test coverage

  • User acceptance testing (UAT) criteria


💡 Tailor Your Process, But Keep the Structure

While the steps above are generic, the application of each depends on:

  • Team maturity

  • Nature of the partnership

  • Project complexity

A lightweight integration project with an external CMS vendor may not need a full-blown steering committee. But a core integration with a payments processor? That absolutely needs structured touchpoints.

Create templates for:

  • Kickoff checklists

  • Weekly status updates

  • Risk registers

  • Communication protocols

These documents become lifesavers during escalations.


🚫 What Happens When You Don’t Coordinate?

Let’s revisit the pre-installation app example. Suppose we had:

  • Skipped UAT

  • Failed to add tracking parameters

  • Assumed marketing had done the heavy lifting

The result? A product on millions of devices with:

  • No user insights

  • No uninstall metrics

  • No feature usage stats

In a data-driven world, this is a disaster. And entirely avoidable.


📝 Wrap-Up: Coordination Is Not Optional

Working with external teams—be they partners, clients, or vendors—is inevitable. How you manage that collaboration defines whether your project succeeds or drags into chaos.

So don’t assume. Don’t delay. Build coordination into the DNA of your process:

  • Communicate clearly

  • Document rigorously

  • Meet regularly

When done well, coordination becomes invisible—just like the best-run projects.



🎥 YouTube Video on Cross-Team Coordination


Challenges of Working With an External Design Team





Tuesday, August 6, 2019

Giving time for the testing effort

The testing process is one of the most fundamental parts of a software project. Any software that is built (or modified) will have defects in it. Even the most skilled and confident developers will admit that defects creep in as they write their code; in fact, the best of them participate in the testing effort, working closely with the testing team so that the testers fully understand what has been done and can tease out as many defects as possible. So it is well understood that testing is needed to deliver a high-quality product to the end customer, and the testing process tries to ensure that most of the high-severity defects are found and fixed along the way.
The challenges come in ensuring that there are enough resources for testing and that enough time is set aside for it. There can be a lot of pushback on this front from project managers and others in the management team, since the development and testing schedules take up a significant portion of the overall project cycle. In my experience, test team leaders face a fair amount of pressure to pull in their estimates and finish that part of the project early. Talk to testers and their common refrain is that management, in a majority of cases, does not include people from a testing background, does not really understand the work they do, and hence pressures them a lot.
So what is the way out to ensure that the testing estimates are accepted? There may be some rounds of discussion, and some estimates may be refined (usually reduced, though occasionally the process of estimation and discussion pushes an estimate upwards). Like many other issues that come up during estimation and planning, the answer combines some rigor with some rough estimation.
How does a rough estimate come about? Experienced testing leaders who look through the requirements (at a top level, since detailed requirements may not exist yet, but summary requirements would) can usually give a fairly good rough estimate of the testing effort required, which can then be broken down into the number of people needed for the desired schedule.
Another way is to look at similar projects (many projects in large organizations are similar, which gives a good idea of the testing effort for a new project, at least as a point of comparison).
The rigor comes from preparing detailed testing estimates: taking the different requirements and breaking them down into test plans (more detailed test cases may not be possible given the state of requirements at this point). A rigorous review of these plans yields a solid consolidated testing requirement, both for the effort estimation and for later, when these test plans form the basis for the more detailed test cases.
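The rough-estimate side of this can be sketched in a few lines of Python. The per-requirement figure and the review buffer below are hypothetical placeholders, not recommended values; the point is the shape of the calculation, not the numbers:

```python
import math

def rough_test_estimate(num_requirements, hours_per_requirement,
                        review_buffer=0.15):
    """Very rough testing-effort estimate: scale a historical
    per-requirement figure and add a buffer for review and rework."""
    base = num_requirements * hours_per_requirement
    return round(base * (1 + review_buffer), 1)

def people_needed(total_hours, schedule_weeks, hours_per_week=40):
    """Break the total effort down into head-count for a desired schedule."""
    return math.ceil(total_hours / (schedule_weeks * hours_per_week))

total = rough_test_estimate(50, 8)   # 50 requirements, ~8h each historically
print(total)                         # 460.0
print(people_needed(total, 4))       # 3 testers for a 4-week window
```

The historical hours-per-requirement figure is exactly what the "similar projects" comparison above supplies; without it, the sketch is just arithmetic on guesses.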


Wednesday, July 10, 2019

Interaction between product manager and usability expert

The product manager plays a role throughout the product development or project execution cycle. The product manager delivers requirements, discusses them with the feature teams, collaborates and provides clarifications during the design phase, and also plays a key role during the development and testing cycles, defining what the flow for a feature should be in case there is a lack of clarity among the development team (there will typically be some small part of the workflow, screens, or UI that was not well detailed during the requirements or design process and needs the product manager's input). In addition, most product managers I know do extensive testing of the product, primarily of new or modified features, and also spend time in the beta programs, discussing specific features with beta users, providing clarifications, and passing on the more severe defects.
The usability expert does not play as extensive a role throughout the cycle, but in the initial phases the usability expert's inputs are critical. I remember a particular cycle in which we were doing a comprehensive redesign of the product, based on a summary of user issues and requests over the past few versions, and also because the product UI looked dated and needed to seem better and fresher (somewhat nebulous concepts, but you would not believe how well they sound when you pitch the idea to senior management). In such a case, the flow of ideas between the product manager and the usability expert started well before the requirements phase; in fact, it could start before the previous version was done and out of the gate.
The usability expert and the product manager have a set of inputs that help them as they start their process, and for larger products, the number of screens that they have could be considerable, so they do need to prioritize. These inputs would be -
- Complaints and suggestions by customers and on the user forums (especially if these get mentioned a lot),
- Inputs from the usability expert and the product manager themselves (if you show product screens to a usability expert, you can be sure that they will have their opinions on the workflow plus and minuses of certain screens, and the product manager typically has a list of peeves about some screens in the product),
- Technical changes that require a modification to an existing screen or make an improvement possible. The components used for screen design may have gone through changes that mean the screen needs modification, or a workflow that was desired earlier was not technologically possible but is possible now.
- And there could be some other inputs that also lead to screen or UI modifications
The process is somewhat cyclical, with the expert typically laying down a new desired workflow, which is commented on by the product manager and sometimes by the product team; based on these discussions, a new iteration is made. Because this may need to be done over many UI screens or workflows, the usability expert may work screen by screen rather than over several screens at the same time, ensuring that different product teams can get started. This is where the product manager can prod and work with the usability expert, at least detailing preliminary requirements that the usability expert can later flesh out fully. It can be a challenge for the project manager to handle this kind of scheduling, but cooperation with the product manager can help make it smoother.


Thursday, July 4, 2019

Presentation - who should do the presentations ...

In previous posts, I have talked about the kind of data, graphs, and slides one should use in a presentation, especially when presenting to people in more senior positions. One has to be careful about what to present: give a top-level summary without overkill on data, yet keep backup data and graphs for the queries that might come (having the data at your fingertips always comes off well and goes a long way in generating a positive impression).
The next important question is about who should do the presentation. And for a question such as this, there is no correct answer. It really depends on a number of circumstances, depends on the members of the team, and so on. Here are some points to ponder over:
- Importance of the presentation: Sometimes the presentation is really significant; for example, when a new project is being launched and the kickoff means that senior executives will be present. In such a case, one really needs to put the best foot forward, and there is no question of trying out different members of the team to give them presentation experience: the presenter needs to be the best person for the job. On the other hand, if this is a regular meeting (many such meetings are standard ones where not many changes are expected but which are part of the regular schedule), one can have different team members present the whole thing, or break it up into parts handled by different team members. There is no real problem in starting the meeting by introducing the team members and explaining who will be presenting which parts.
- Inclination: In every team, there will be people who are interested in doing such presentations because it gets them noticed and known by people outside the team especially if they come across as confident and knowledgeable. On the opposite side, there will be people in the team who are really not interested in doing presentations, and this is not something that one can force somebody into.
- Specific ability: Sometimes there is a need to fit a specific ability to the situation. There could be a team member who is very good with data, able to understand the different data points as well as analyses and permutations of the data (very useful when the review meeting goes into detail on coding data or defect analysis); on the other hand, when the meeting is about a project starting, its options and variables, and customer inputs, you need somebody who is clearer about the requirements, the options, and how the customers think. Everybody will know some details, but there are always specific team members who are more fluent in different parts of the project, and one should always try to match these abilities, unless it is a really routine meeting.


Tuesday, July 2, 2019

Focusing on the usability and ease of reading

Recently, I was driving past a gas station next to the highway, one that I had passed earlier, and this time saw a new board announcing some new eating options. Given the speed at which I was driving, I would normally have been able to read the signs on the board, but the lettering was in a fancy script that I could not read (or rather, it would have taken more time to read than the time I had while driving past). I asked the other people in the car whether they were able to read the names of the outlets on the sign, and none of them could in the short time we had while passing by. This was not true of other signboards that were in plain, simple script rather than some fancy script.
While reliving this experience, my time in the IT industry came to the fore, along with all the discussions I have had with usability experts. If the gas station signboard had been designed after consulting a usability expert, they would have recognized the use case and had the board written in a way that people could read while passing by, and maybe be attracted enough to stop before they passed the place.
And this is what usability is all about. When designing any new user screens, or redesigning an existing screen or user-facing UI, it is always important to look at how this will look to the users. When the design is done by the development and testing teams along with the product manager, it should be reviewed with a usability expert. It is important to emphasize this point because there have been so many cases where people who have worked on the product for a long time feel that they know what the customer wants and resist what a usability expert recommends (I know of specific examples where the usability expert recommended changes to the workflow or the screen, and the development team could not appreciate the changes or was very resistant to them).
One way to make sure the development team understands the need for usability is to get team members looking at user forums and defects logged by customers, as well as actively following beta programs and interacting with users; this can quickly change their perspective on what is important for the product.


Tuesday, June 11, 2019

Presentations - what data to present and how (contd..)

In the previous post (Data and graphs in a presentation), we talked about some of the data elements and graphs that would be shown in a presentation; the level and detail to which these need to be shown depends on the audience of the presentation.
In this post, we talk about something that needs careful planning while making a presentation. When you have information and can produce great graphs, there is a tendency to overdo it by presenting too much data in graph form. This is especially problematic when you are presenting to people senior to you, who really do not have the time or inclination to go through multiple graphs presenting similar information. For example, if you are presenting the current status of your project, especially the development phase, the maximum focus will be on defects, and an incredible amount of data is generated during this phase. For you and your colleagues working day in and day out on these defects, a lot of that data may seem relevant. But if you present too many graphs, even well-packaged ones, it is still overkill. I have seen a case where the audience soon started saying, "Next, next" as soon as they saw another graph.
Such a reaction from an audience means that you have practically lost them. Focus on the key data you need to present (and I am not trying to tell you what this key data is): talk to your colleagues, look at presentations made by other teams, talk to somebody more senior who would have attended such presentations, and so on. Make sure you have done this homework. In one presentation, I saw around 15 different graphs on defects and defect resolution; that was way too much.
Try to finalize a small number of graphs for your presentation; it is fine to keep more detailed graphs in a separate deck or an appendix. There is a small chance that somebody will ask for more data or get curious about another metric, and having that graph handy shows that you are well prepared (at the same time, don't go overboard with dozens and dozens of spare graphs; those don't give a good impression either). In fact, during the presentation you can mention the key data points for which you are presenting graphs and note that additional graphs are available if somebody wants more data (these should typically be graphs your team is generating anyway to track defect, coding, or other metrics).


Thursday, June 6, 2019

Presentations - what data to present and how

This topic could actually fill a full book, since it depends on the type of presentation, the target audience, and so on. For example, if you are presenting to senior management, you would keep to bullet points and graphs of data along with conclusions (and keep all the detailed information at your fingertips, since you never know who could ask what question about which part of the presentation). If you are presenting to colleagues and team members, a high-level summary may still be presented, but much more data analysis needs to be discussed, along with its shortcomings; in some cases, follow-up meetings with select members of the audience may need to be set up as well. When people ask questions, the questions may be more exact about specific points of the data or the analysis, and it helps to have all the information required at your fingertips.
However, when planning the presentation, the kind of data to present and the graphs that may be required all need to be figured out and finalized before the data is presented. There are several ways to approach this initial design.
- Design what is the information you need to present, which in turn drives the data elements you need to have as part of your graphs. For example, if you are presenting on the current status of an ongoing project, one important data point would be the number of defects that are being found and fixed over a period of time. There may be ways of presenting this data in terms of the actual graph, including contrasting with similar data from previous versions, but you have an idea of the data points that need to be there in the graphs.
- Discussions with fellow presenters. In our case, a presentation was made on behalf of the team, so the fellow presenters were colleagues (I was a project manager, so this involved other project managers and the heads of the development and testing teams) with whom you could have extensive discussions on the kind of information or data points to present, along with the level of detail.
- In a number of cases (at least in mine), my boss was ultimately the person responsible for the team, so even though we made the presentation, the boss held a fair degree of responsibility. You can be sure that if your presentation made some boo-boo, there would be some (or many) uncomfortable words with the boss and a loss in the amount of trust given to you.

Once you have settled on the data to show as well as the graphs and data analysis, there should be at least a couple of practice runs of the presentation with the team and the boss. You would not believe how a very confident team, quite happy with its presentation, can be shaken by some of the (genuine) questions that force a modification of the presentation, whether in the graphs or the talking points.
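To make the defect trend idea above concrete, here is a minimal sketch of preparing the found-vs-fixed data points for such a graph. The defect records, dates, and the choice of weekly buckets are all hypothetical, purely for illustration:

```python
from collections import Counter
from datetime import date

# Hypothetical defect records: (id, date_found, date_fixed or None if still open)
defects = [
    (1, date(2019, 4, 1), date(2019, 4, 3)),
    (2, date(2019, 4, 2), None),
    (3, date(2019, 4, 2), date(2019, 4, 8)),
    (4, date(2019, 4, 9), None),
]

def weekly_counts(defects):
    """Count defects found and fixed per ISO week: the data points for the graph."""
    found = Counter(d[1].isocalendar()[1] for d in defects)
    fixed = Counter(d[2].isocalendar()[1] for d in defects if d[2])
    weeks = sorted(set(found) | set(fixed))
    return [(w, found.get(w, 0), fixed.get(w, 0)) for w in weeks]

for week, n_found, n_fixed in weekly_counts(defects):
    print(f"week {week}: found={n_found} fixed={n_fixed}")
```

The same tuples could then feed any charting tool; contrasting against a previous version is just a second series computed the same way.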


Thursday, May 30, 2019

Ensuring you are kept in the loop for communication

Recently I got an email from another program manager; she was somewhat junior to me in terms of actual title, but we were both doing the same role (and eventually that is what matters, not a title). She was in charge of a team that delivers some modules for a project that our team uses; we had worked with her team on several past deliveries, and the coordination between our teams was working well.
With a new request, one of our senior developers started a discussion with a senior developer from the other team. This discussion continued for some time between the two developers, and eventually the developer from our team included me. I put in my comments, talked about the schedule and so on, but did not take the elementary step of including the program manager from the other team. It was around a week later that she found out she was not being included in a discussion about features, deliveries, and so on. She then sent me an email asking why she had not been included in discussions about a delivery that her group would eventually be tasked with making. I am sure she had a similar discussion with the developer from her group. I had no great answer for this one other than stating that it was a mistake and that she should have been included in the discussion.
This is a tricky point: the level of involvement in discussions and the point at which it should start. The dynamics vary from group to group. In some groups, the program or project manager comes in only when actual scheduling or commitments need to be made, with the developers experienced enough to continue discussions and bring in the program manager at a much later stage; for other groups the dynamics could be different. When the PgM or PM needs to come in is not a reflection of the values or maturity of a group; it is just how the dynamics of the group have become established.
However, there is no denying that the PgM or PM does need to be involved at a certain stage; there are many factors that require inputs which the developer may not have. At one extreme, the team may have been directed to do other work and hence would not be able to cater to any request; or there may be scheduling or resource conflicts, and it is typically the PgM who is in a position to look at these conflicts and work them out in coordination with others. Further, once the discussions reach a certain stage, there may be a need for regular interactions between more than just these developers, and somebody needs to track the agreements and action items from these interactions or meetings. There can be a multitude of reasons why the PgM or PM needs to get involved, and it is best if the person is included early and can then figure out their level of involvement at different stages of the discussion.


Wednesday, April 24, 2019

Ensuring resources are allocated for the next version

The process of resource allocation (in this post, we are talking about people) during product development is tricky; because there are high costs associated with it, it requires careful planning, and sometimes circumstances can throw such planning out of the window.
For projects where people are assembled for a specific one-off effort, the situation is slightly simpler. There is a proper schedule for the project, and that schedule defines when which resources are required; this can be handled by identifying resources and allocating them to the project at the required time (or it can be done in a staggered manner, with people doing part work on their existing project and slowly taking up more work on the new project until they are fully on it).
However, consider the case of product development where versions of the product are released on a periodic cycle. For simplicity, consider a product released every year, say in October. During the course of the year, the resource requirements are not static. At the start of the cycle, during the requirements phase, the need for resources is lower; it increases during the design phase and peaks during the develop, test, fix cycle; it is during this time that the phrase 'all hands on deck' is most apt. But as development and testing start to taper down, the product team needs to simultaneously start work on the next version. Identification of new features, the most critical fixes, and interactions with customers to identify the features or changes they most need all happen during this time frame, which usually starts before the previous version has shipped.
Even the more complicated requirements and workflow design work, involving prototyping, developing sample user interfaces, and so on, takes time. If it is started only after the previous version has shipped, it will eat into the development and design time for the next phase. The problem lies in assigning the more accomplished developers and some testers to this effort, since there will simultaneously be a need to work on critical defects and so on. However, teams that have worked on multiple versions over the years have learned how to do this: resource allocation needs to be fluid, with people moving from one version to another during the course of a week, or even during a work day (with the intention that these changes are not too chaotic, since that could unnerve even the most rational of people). If the program / project manager, the leads, and the product manager handle this process carefully, taking care not to fluster the people doing the work too much, it will work just fine.


Tuesday, April 16, 2019

True status in the status report

The status report can be a very important document, or it can be just something created as a matter of routine. I remember two very different usages in two different situations. In one case, the status report was reviewed by many members of management who had queries on some items, which reassured us that the report was valued and being read. However, it also pushed us to recheck the report before it was sent out, to make sure it was accurate and presented the status as of that point in time: not an optimistic or a pessimistic portrayal, but an accurate one.
Another case was in an organization that had various process certifications; part of certification was ensuring that every project generated status reports of different types which were sent to a central project management office, the idea being that anybody could find the status report of any project and review it for any period. The problem I could see after a few weeks was that the project manager was drowning in the various status reports that had to be generated, and it was pretty clear that most managers would not have the bandwidth to review more than a couple in any detail.
However, the subject of this post is really the accuracy of the status report. Right at the beginning, when I was a novice project manager with a few months of experience, I would work with the leads to generate a status report; the problem was the level of maturity of everyone involved. Most people tend to see issues in a status report as something that reflects badly on them, so initially the report would contain the issue, but with a sugar coating about what the team was doing. The lesson came one day from a senior manager who had a discussion with me. His feedback was that the status report was supposed to report issues as they were, along with what the team could do to overcome them, not a sugar coating. Issues needed to be represented accurately, including those cases where they posed potential red risks to the project and needed immediate attention (whether from within the team or from people outside it, such as an external team on which there was a dependency).
This can get tricky. I remember the first time I generated a status report with a red item: I got called into a discussion with the leads of development and testing and my boss, who were not very happy that an issue was listed in red. The expectation was that any red issue would be handled so that it was no longer red, but I held my ground. What we did finalize was that the day before my status report, or sometimes on the same day, I would send a quick communication if I saw a red item so we could discuss it. That did not mean I would remove it unless I was convinced that my understanding was unfair and the item was not actually red. This seemed to work for this team, at least, going forward.


Thursday, April 11, 2019

Ensuring the major features are delivered to testing early

Sometimes when I am writing these posts and review the content once I am done, it seems like I am writing about the most obvious of topics. But you would not believe the number of projects where there has been discord between team members, with the QE team complaining about features being delivered late, especially those features with a huge testing impact; a significant number of end-of-project review meetings discuss how to ensure that major features are delivered early enough in the cycle to be shaken out as thoroughly as possible, well before the final deadlines.
What is this? Well, in most software project cycles there will be some features that are more substantial than others. It need not be a user-facing dialog or screen; it could be some kind of engine that works in the background but has a huge impact on the product (in accounting software, it could be the tax calculation code; for a Photoshop kind of product, the graphics engine), or it could be a brand new feature that is supposed to be the selling point of the new version of the software.
In such cases, the future of the product depends on making sure that these significant features / engines / code are thoroughly shaken out and tested, with major and medium level defects found and fixed well in advance, so that these defects are not left for the last parts of the cycle (unfortunately, in many software cycles, even with the best of intentions, if not the best of planning, these features can drag right till the end).
There is a problem inherent in all this. When you have a new feature or a new engine, there is a chance that it will have more defects than a feature that has existed from earlier versions and has already had a lot of testing. Some of these defects may be severe enough that the product cannot be released until they have been found and fixed.
Another problem is that for new features, even with the best-written test cases and requirements, there is the possibility of disagreement between the development and QE teams about a specific workflow, which could be something as minor as the exact wording of an error message or the case in which it appears. Such disagreements can be easily resolved by the Product Manager, but all of this takes time and contributes to potential delay in the actual completion of the feature.
Further, such major changes have a higher impact on the localization and documentation aspects of the product; until the feature is fully ready and all medium and major defects have been found and fixed, these aspects cannot be fully resolved, and too much delay will have an impact on the overall schedule of the project.
Now, none of this means it will be easy to deliver these major features early; there may be schedule or dependency issues that delay a feature. But the planning should try to ensure that the feature is delivered as early as possible, and if it can be broken into parts that can still be tested to some reasonable level of confidence, one should target such a plan. Don't ignore this issue.


Wednesday, April 10, 2019

Costs of taking last minute defect fixes

You know what I am talking about; I even hinted in the last couple of posts at the dangers and problems involved in this situation. It is a no-win situation: no matter what you do, there is no clearly right answer. Here are some cases:
- You are a week away from the date when the cycle of testing and fixing ceases and the product moves into wrapping up development activities and into the release set of processes. The testing team, by this time, would have wrapped up the major test cases and would be carrying out the last stage of testing, with the hope that no major defect pops up at this point. And would you believe it, a major defect does indeed emerge; a retest confirms that the defect is reproducible. The defect review committee looks at the defect but, at this late stage, wants details of the proposed fix and the code changes; wants the code changes reviewed by multiple people; and wants the change in a private build so that it can be tested thoroughly before it is integrated into the main branch. Even with all this, it can seem dicey, since a major change has the potential to create instability in the entire system and code base.
Such a change, coming just a few weeks earlier, would have been implemented easily enough.
- Now we get into the critical milestone timelines. Just a day is left before the wrap-up of the testing and defect fixing stage, and then you get such a defect. Everybody remembers Murphy's Law (if anything can go wrong, it will) at this stage, and the possibility of deferring the defect, or pushing it into the release notes to be fixed in the next release or a dot release, is actively thought through. However, not every defect can be deferred; some defects can cripple a product, or make some workflow in the product seem crippled, with the potential for a section of users to give it a negative rating or holler at product support and in user forums. So you have to accept that, even at this late stage, some defects will still need to be fixed. You have to go through the same process you would have gone through if the defect had been found a week earlier, but you need to put more resources on the review and try to speed it up. Further, if an internal milestone is impacted, you try to work out whether you can move the internal milestone without impacting the product release date (this is not a single-person decision; it needs to move through a few layers of management for approval, and if your team has a good reputation, approval is easier to get). And you still have to work out whether there is an impact on the documentation and localization teams, and how much their schedules will be affected.
And you need a proper review of whether there was a way to find such a defect earlier, so as to hopefully avoid the kind of panic you went through at this late stage.


Sunday, April 7, 2019

Avoiding ascribing blame for last minute defects without a review

As a software development team reaches the last stages of a project, tension levels in the team can change drastically, mostly upward. There is an anticipation that something may go wrong, something that can change the milestones and deadlines. When the team reaches the days before the completion of the development and testing stage, every day of testing brings anticipation, with the leads and managers hoping that the testing is thorough, but that no major defect comes through that could impact the deadlines.
Any major or high severity defect that comes through near the end has a potentially severe impact; the risk of not making a fix is that you release a buggy product, but any fix has the potential to cause an undesired change in functionality or introduce another defect, something that may not be captured easily. With deadlines looming, unless more time is given, code reviews and impact testing can try to give confidence that there are no adverse effects of the fix, but there is always a risk.
What I have seen is that this tension causes people to start flipping out when things go wrong. For example, there was a case where a young tester found a severe defect almost at the end of the cycle, and there was no getting around the impact. There was a need to make a fix, evaluate its impact, do multiple code review cycles, and use multiple testers to check the impacted areas; and, in the end, the deadlines were pushed out by a couple of days. One of the senior managers was very irritated by this and dressed down the QE lead about why the defect was not caught earlier, almost blaming the QE team for not doing its job thoroughly.
Once the release was done, a review team went through the various development and testing documents and realized that there had been a mixup right from the start, in the developer design documents that were in turn used by the QE team for making their test cases. It was a lucky ad hoc test that found the defect. As a byproduct of this review, the senior manager was advised that this kind of blaming does not help and can end up discouraging team members who were only doing their jobs, and doing them properly at that.


Saturday, April 6, 2019

Defining single point responsibilities for decision making during a software cycle

It seems like a simple management issue, having single points of responsibility for different functions, but you would be amazed at the number of teams that stumble on this issue at a critical point in their schedule. Consider the case where a team is coming to the point where it has to declare that it is now clear of all major discovered defects (no team can find and fix all defects; it is an impossible task, and the effort involved in trying to detect all bugs starts increasing exponentially once you reach a certain point). At this point, many teams start ad hoc testing, others start the process of releasing the product to the consumer, and so on. It is a major milestone for the product.
But who takes the call that the team is clear of all major defects? The key word here is 'major'. As long as testing is going on, there will always be issues coming up, and they have to be dealt with. Depending on who sees a defect, the classification of whether an issue is major or not can be handled differently, even with the best defect classification criteria in place.
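To illustrate what written-down classification criteria might look like, here is a toy sketch; the inputs and thresholds are entirely hypothetical, and real criteria would be specific to each team and product:

```python
def classify(is_crash, has_workaround, users_affected_pct):
    """Toy defect-classification rule (hypothetical thresholds).

    Returns 'major' or 'minor' based on crash behavior, whether a
    workaround exists, and the rough percentage of users affected.
    """
    if is_crash or users_affected_pct >= 50:
        return "major"
    if not has_workaround and users_affected_pct >= 10:
        return "major"
    return "minor"

# A crash is always major; a cosmetic issue with a workaround is not.
print(classify(True, True, 1))    # major
print(classify(False, True, 20))  # minor
```

Even with rules like these written down, judgment calls remain (estimating the percentage of users affected, for instance), which is why a designated decision maker still matters.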
I remember an issue from a couple of decades back. Almost at the last stage, when the team was ready to close down defect finding and fixing, a defect came up. It was an interesting one: it was serious, but it covered a small workflow that many considered non-serious, and some team members were fine with delaying it to a later dot release (the team was releasing periodic dot releases, so such defects could go into them).
At that point, during the day, we realized we were going around in circles, trying to figure out whether we should fix it and take another build (with the subsequent testing of that build again) or defer it and take it up later. There were strong opinions on both sides among the managers and leads in the team. We realized that we had never worked out the appropriate decision making process for cases such as this, and suddenly handing the decision to one person could have caused tension within the team. Ultimately we had to set up a meeting of the senior leaders of the team to thrash out a decision, taking into account the costs and the impact (both for and against).
The learning from this kind of case was that we needed to better define decision makers for specific situations. For the next time, we made the testing manager the decision maker on whether a last-minute defect was severe enough to need fixing, with the understanding that if the testing manager did recommend fixing such a defect, they should also be able to justify its severity later.


Software product localization - there may be base product changes

Anybody (people or teams) with experience in releasing software products in multiple languages will typically have gone through a lot of learning about how the nuances of different languages can force changes in the base language product (in our case, and in most cases, the base language product is in English, and the product can be released in many other languages; for larger software products such as operating systems, MS Office, or Photoshop, these can be a great many languages).
However, for a team that has so far released its product in one base language and is now trying to release it in other languages, it can be a fairly complex project. In simplistic terms, it is about making sure that all the strings used in the product (whether text on screens, on dialogs, in error messages, etc.) can be harvested, sent for translation, and then reincorporated into the product depending on the language in which the product is being released.
From this simple concept, things get more complicated as you proceed towards actually doing the project. There are additional schedule requirements; there is a lot more work for the developers, since testing a product for localization reveals many required changes; there is the need for external people who can test the product in the different languages (both the language itself and the functionality of the various parts of the product under different languages need to be checked); and many other changes need to be planned (this post is not meant to be a full description of the process of getting a product localized for the first time; that is a massive endeavor that requires a lot of explanation). As an example, a simple error message may turn out to be much longer in a language such as Russian or German, or read from right to left in Arabic or Hebrew, and may not display properly in such cases. Either the message needs to be rewritten or the message box needs to be resized, which also has implications for the help manuals that may need to be modified.
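The string-harvesting idea can be sketched minimally with a hand-rolled lookup table; real products typically use a framework such as gettext with .po/.mo files, and the message id and translated strings here are purely illustrative:

```python
# Simplified illustration of externalized strings keyed by message id.
# Both the id and the translations below are hypothetical examples.
CATALOG = {
    "en": {"file_not_found": "File not found"},
    "de": {"file_not_found": "Datei wurde nicht gefunden"},  # longer than English
}

def tr(msg_id, lang):
    """Look up a message for a language, falling back to English."""
    return CATALOG.get(lang, {}).get(msg_id) or CATALOG["en"][msg_id]

en = tr("file_not_found", "en")
de = tr("file_not_found", "de")
# The German text is longer than the English one, which is exactly why
# message boxes sized for English can truncate translated text.
assert len(de) > len(en)
print(tr("file_not_found", "fr"))  # no French catalog, falls back to English
```

The fallback behavior matters in practice: a missing translation should degrade to the base language, not crash the product.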
Ideally, a team planning to get its product localized for the first time should avail itself of the learning that other teams and products have gained over their cycles, and so should either hire some people with the required experience in both development and testing, or at least have thorough discussions with teams that have done this. Getting a product localized for the first time is not an insurmountable effort and can be done right, but it is also not something you should attempt without adequate preparation in terms of schedule and resources. Once you have done that level of planning, you will still face challenges, but they should be fixable.


Wednesday, April 3, 2019

Taking the time to define and design metrics

I recall my initial years in software products, when there was less focus on metrics. In some projects, there was some accounting of defects and their handling, and the daily defect count was kept on the whiteboard, but that was about the extent of it. Over time, however, this has changed. Software development organizations have come to realize that a lot of optimization can be done in areas such as these:
1. Defect accounting - how many defects are generated by each team and team member, how many of these defects move towards being fixed vs. being withdrawn, how many people generate defects that are critical vs. trivial, and so on.
2. Coding work - efficiency, code review records, number of lines of code being written, and so on.
You get the idea: there are a number of ways in which organizations try to gather information about the processes within a software cycle, so that it can be used to drive optimization as well as feed into employee appraisals. This kind of data provides a quantitative component of the overall appraisal counselling and, to some extent, gives the manager a comparison between employees.
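As a sketch of the defect-accounting idea, here is a minimal example of the kind of per-reporter counts described above; the defect records, names, and fields are all hypothetical:

```python
from collections import defaultdict

# Hypothetical defect records: (reporter, severity, status)
defects = [
    ("asha", "critical", "fixed"),
    ("asha", "trivial", "withdrawn"),
    ("ravi", "major", "fixed"),
    ("ravi", "critical", "fixed"),
]

def defect_metrics(defects):
    """Per-reporter counts of defects filed, fixed, and critical."""
    stats = defaultdict(lambda: {"filed": 0, "fixed": 0, "critical": 0})
    for reporter, severity, status in defects:
        s = stats[reporter]
        s["filed"] += 1
        if status == "fixed":
            s["fixed"] += 1
        if severity == "critical":
            s["critical"] += 1
    return dict(stats)

print(defect_metrics(defects))
```

In a real organization this data would come from the defect tracker, and, as the rest of this post argues, deciding which fields to count should be designed up front rather than bolted on mid-cycle.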
However, such information and metrics are not easy to come by and cannot be produced on the fly. Trying to create metrics while the project is ongoing, or expecting people to do it alongside their regular jobs, will lead to sub-standard metrics, or even wrong data, with not enough effort put into screening the resulting data to ensure it is as accurate as it can be.
Typically, a good starting point for ongoing project cycles is a review at regular intervals in which the team can contribute as to what metrics would be useful, and why. Another comparison point is talking to other teams to see what metrics they have found useful. And at the start of a project, during the discussion stage, there needs to be a small team, or a couple of people, who can figure out the metrics to be created during the cycle.
There may be a metrics engine or some tool already used in the organization, and there may be a process for adding new metrics to it, or for enabling existing metrics for a new project; the effort and coordination for that also needs to be planned.
The basic message of this post: get metrics for your project, and design for them rather than treating them as an afterthought.

