Comments
There are two schools of thought about when to conduct progress reviews for product development programs. The dominant school advocates event-driven reviews, meaning reviews that are synchronized with key project events. Such reviews might occur after achieving a key milestone or before making a large financial commitment.
The argument supporting event-driven reviews seems compelling. Conducting a review just prior to making an important commitment provides the best possible information on which to base decisions about that commitment. Reviewing immediately after a key project milestone enables managers to react quickly to the outcome of that event. Furthermore, at a subtler level, the reviews become stakes in the ground for dawdling teams. With the pressure of a looming deadline, teams will work harder to avoid disappointing management.
But the actual practice of event-driven reviews turns out to be less than perfect. Slippage on a project usually occurs precariously close to the scheduled review date. Team members hope they will be saved by some miracle, but that never happens. Finally, when everyone agrees the team won't be ready for the review, the decision is made to reschedule it.
This is where the fun begins. Reviews are conducted by high-level people with very busy calendars. If the project will be ready for a progress review two days late, that doesn't mean the review slips by only two days. Instead, the next time the entire review team can actually assemble might be weeks in the future. Small slips in a key deliverable can be amplified into large variations in review dates.
The other school of thought argues that reviews should be calendar driven. In other words, the date of each review is fixed independently of the project's status. One key advantage of this type of review is that a date can be firmly scheduled months in advance because it's independent of key deliverables. Another advantage is that it's precisely the projects that are slipping that benefit most from a review, and these are the very projects most likely to have their review dates slip under the event-driven approach.
Managers who oversee a large number of projects should use the event-driven review system because its simplicity will probably save some time. Under such circumstances, the sheer number of projects reduces overall risk anyway. In contrast, managers with only a few big projects may prefer shifting to a calendar-driven approach, which makes it easier to stay on the schedules of important decision-makers.
Of course, there's nothing to prevent managers from using both techniques on a project. It's possible to implement calendar-driven reviews for most of the project to take advantage of their predictability and then insert a few event-driven checkpoints later in the process. For example, management might want to make specific decisions immediately after completing certain important tests, or just prior to making any large financial commitments. Both types of reviews have their advantages and their applications.