The status report can be a very important document, or it can be just something generated as a matter of routine. I remember two very different usages in two different situations. In one case, the status report was reviewed by many members of management, and they had queries about some of its contents, which reassured us that the report was valued and actually being read. However, it also created pressure to recheck the report before it was sent out: to verify that it was accurate and that the information presented the status as of that point in time, neither an optimistic nor a pessimistic portrayal, but an accurate one.
Another case was in an organization that had several types of process certification. Part of that certification required every project to generate status reports of different types, which were sent to a central project management office; the idea being that anybody could find the status report of any project and review it for whatever period they needed. The problem I could see after a few weeks was that the project manager was drowning in the various status reports that had to be generated, and it was pretty clear that most managers would not have the bandwidth to review more than a couple of them in any detail.
However, the subject of this post is actually more about the accuracy of the status report. Right at the beginning, when I was a novice project manager with only a few months of experience, I would work with the leads to generate a status report. The problem was the level of maturity of everyone involved. Most people tend to see issues in a status report as something that reflects poorly on them, so initially the report would contain the issue, but with a sugar coating about what the team was doing. The lesson came one day from a senior manager who had a discussion with me. His feedback was that the status report was supposed to report the issues as they were, along with what the team could do to overcome them, not a sugar-coated version. The issues needed to be represented accurately, including in those cases where they could pose potential red risks to the project and needed some kind of immediate attention (whether from within the team or from people outside it, such as an external team on which there was a dependency).
This can get tricky. I remember the first time I generated a status report with a red item: I got called into a discussion with the development and testing leads and my boss, who were not very happy that an issue was listed in red. The expectation was that any red issue would be handled so that it was no longer red, but I held my ground. What we finalized was that the day before my status report, or sometimes on the same day, I would send a quick communication if I saw a red item so we could discuss it. That did not mean I would remove it unless I was convinced that my assessment was unfair and the item was not actually red. This seemed to work going forward, for this team at least.