How we organize determines how effectively we can define results-oriented “gates.” Traditionally, IT teams have organized in silos, working on abstract slices of the business goal (as opposed to end-to-end). When we organize in silos, we assume that what teams create independently will work together with little effort. When it doesn’t – and that’s the case more often than not – our economies of scale are completely obliterated. Consider the Airbus A380: teams working in complete independence of each other discovered, after their sections were “complete,” that they couldn’t connect up the electronics.1 Were the independent sections of the plane “complete” according to the project plan? Yes. Was the plane nearer to completion by virtue of those sections being done? No. It’s entirely possible that Airbus could have rebuilt those sections from scratch (to consistent wiring specs) in the time it took to integrate the finished units they had. The lesson learned is that no matter how well we think we define our APIs, no matter how disciplined we think we can work in a silo, integration is never free.
In addition to how we organize, we also need to have some means by which to execute and therefore measure our progress that is directly rooted in the context of the asset. Functionality that is coded and tested is the best measure of "results" that we have.
To act on this, we need requirements expressed as small, actionable statements of business functionality. Agile Stories lend themselves very well to this. Stories are small, granular statements of business need. For the PMO, Stories provide us with the best measure of progress: a Story is either done, or it’s not done. It’s not partially done or kind-of done. If Stories are written in terms meaningful to the business, and if we track progress to Stories, we can assess progress in terms of results. Compare this to how we measure progress in traditional IT: there are long delays between functional deliveries, so the PMO has to rely on surrogates such as timesheet data and collections of technical tasks. Hours spent and completion of technical tasks are measures of effort. Effort doesn't necessarily translate into results. Stories are a measure of results.
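The binary nature of Story completion can be made concrete in a few lines. This is a minimal illustrative sketch, not the schema of any particular tool; the `Story` type and its fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    done: bool  # a Story is either done or not done; there is no partial credit

def results_progress(stories: list[Story]) -> float:
    """Fraction of scope delivered, measured in completed Stories (results),
    rather than in hours spent or technical tasks closed (effort)."""
    return sum(s.done for s in stories) / len(stories)
```

Note that there is no field for “percent complete” on a Story: the model itself forbids the kind-of-done state that effort-based tracking invites.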
The fundamental true/false nature of a Story's completeness lets us measure progress in some very simple but information-rich ways. The most common way is the burn-up chart.
These example charts show that scope (in terms of Stories) may expand or contract as new discoveries are made and work is reprioritized. The projected rate of progress through the Stories (that is, how quickly the team is expected to render business needs code-complete) is rooted in a simple formula that factors in team capacity, distraction management, and estimate weight (complexity, etc.) relative to the team.
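The article doesn't spell the formula out, so the following is one plausible sketch under stated assumptions: velocity is effective capacity (capacity discounted by a focus factor for distractions) divided by this team's average hours per estimate point, and the burn-up projection simply compounds that velocity until scope is met. All names and numbers are illustrative:

```python
def projected_velocity(team_capacity_hours: float,
                       focus_factor: float,
                       hours_per_point: float) -> float:
    """Expected Story points completed per iteration.

    team_capacity_hours: total working hours available in the iteration
    focus_factor: fraction of capacity left after distractions (0..1)
    hours_per_point: this team's average effort per estimate point
    """
    return team_capacity_hours * focus_factor / hours_per_point

def burn_up_projection(completed_points: float,
                       total_scope_points: float,
                       velocity: float) -> list[float]:
    """Cumulative completed points at the end of each future iteration,
    until the projected line meets the scope line."""
    projection = []
    done = completed_points
    while done < total_scope_points:
        done = min(done + velocity, total_scope_points)
        projection.append(done)
    return projection
```

For example, a team with 300 hours of capacity, a 0.7 focus factor, and 10 hours per point projects 21 points per iteration; with 40 of 100 points complete, the burn-up line crosses scope in three more iterations.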
By tracking progress through Stories, we're measuring accomplishment of meaningful results. With each iteration, the PMO knows the specific functionality that an asset possesses and how much it cost to get it to that point. We also know what functionality the asset does not have and we have a pretty good means by which to forecast how much it's going to cost to complete that functionality. Best of all, in near real-time we can see if a project is showing signs of trouble, we can see almost immediately the impact of changes that we make, and we can communicate all of this in business (as opposed to IT) terms. Traditional IT cannot provide this level of transparency.
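The forecasting and early-warning claims above follow directly from the same velocity arithmetic. This is a hedged sketch with hypothetical helper names, not a prescribed PMO calculation: cost-to-complete is remaining scope divided by velocity (rounded up to whole iterations) times the cost of an iteration, and a trouble signal is simply recent observed velocity falling below what the plan requires:

```python
import math

def cost_to_complete(remaining_points: float,
                     velocity: float,
                     cost_per_iteration: float) -> float:
    """Forecast remaining spend from the team's observed delivery rate."""
    iterations_left = math.ceil(remaining_points / velocity)
    return iterations_left * cost_per_iteration

def trouble_signal(recent_velocities: list[float],
                   required_velocity: float) -> bool:
    """Flag a project whose recent delivery rate trails what the plan needs."""
    observed = sum(recent_velocities) / len(recent_velocities)
    return observed < required_velocity
```

Because the inputs are completed Stories rather than hours logged, both numbers can be stated to sponsors in business terms: what functionality remains, what it will cost, and whether the trend line says to intervene now.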
The burn-up is meaningful only if we know that it has integrity. Nothing prevents us from defining and tracking technical tasks. If we do that, we're really no better off than we are in current IT practices: it’s just a burn-up of effort, not of results achieved. We have to make sure teams are relentlessly focused on satisfying business need, from the time a requirement is captured to the time software is certified as production-ready. By writing well-formed Stories, we're writing the "completion gene" into the DNA of a project team. This aligns the interests of the business (the buyer), the PMO (the buyer's agent and de facto underwriter), and the development team.
This gives us direct line-of-sight from the unit of measurable work all the way through to the project reporting level, so that when we report upward we know that there is integrity in what we’re reporting. This is deceptively simple, but is the critical component to the Agile PMO: we know what is done, and what isn’t done, not in IT terms but in the terms of what it is we're buying. By extension, we know our investment yield to date and what returns further investment will yield.
The proliferation of Agile Project Management tools, such as Mingle from ThoughtWorks Studios, makes it far easier for the PMO to get project performance data from in-flight projects. We don't need to transcribe data from what a PM has told us into what the project sponsors need to know. We're not e-mailing spreadsheets around. We're drilling into a project dashboard to see what’s going on, and we know what’s going on in very granular terms that relate back to the state of the asset.
The combination of team organization and Story-based requirements allows us to work and measure in terms of results. But the results we're looking for go beyond “completed Stories.” We must also have some appreciation for the quality of the solution delivered. Progress – even in “functionality” terms – may be completely hollow if the expected level of quality (e.g., maintainability, technical quality, etc.) isn’t baked in at the same time. This means we are at risk of overstating our results. In our next installment we’ll look at how we can align quality metrics to get a complete picture of solutions delivered, and at how to automate their capture to make this simple. Once we’ve done that, we’ve defined the fundamental tools necessary for an Agile PMO.
1 Hollinger, Peggy. "In a tangle: How having to wire the A380 by hand is hampering Airbus." Financial Times, 16 July 2008.