Automating Metrics Capture
Because they're derived directly from the asset under development, our metrics give us an objective way to index and monitor quality. Better still, these metrics can be automated and run frequently. If our teams have continuous integration established (e.g., the project binary is rebuilt as often as every code commit), we can subject each freshly built binary to a battery of automated quality metrics and tests. Some tests may take a long time to run, while others finish in seconds. That's fine: we can construct a build pipeline that runs our metrics and tests in the most efficient order.
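The staging idea can be sketched in a few lines: run the cheap checks on every commit and let a failure there stop the pipeline before the expensive suites ever start. The stage names and commands below are purely illustrative assumptions, not taken from any particular CI tool.

```python
# A minimal sketch of a staged quality pipeline: fast checks run first on
# every commit, slow suites only if the fast ones pass, so a broken build
# fails early and cheaply. Commands and stage names here are hypothetical.
import subprocess

STAGES = [
    # (stage name, commands run in order; any failure stops the pipeline)
    ("fast checks", ["run-unit-tests", "check-style"]),
    ("slow checks", ["run-functional-tests", "measure-coverage",
                     "score-complexity", "detect-duplication"]),
]

def run_pipeline(runner=subprocess.call):
    """Run each stage in order; return the first failing stage, or None."""
    for name, commands in STAGES:
        for cmd in commands:
            if runner(cmd, shell=True) != 0:
                return name  # fail fast: later stages never run
    return None
```

In a real setup the `runner` would shell out to your actual test and analysis tools; injecting it also makes the pipeline logic itself trivially testable.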
We can collect up-to-the-minute quality data for any project, such as:
- How much unit test coverage do we have, and are all tests passing?
- How much functional test coverage do we have, and are all tests passing?
- Are we creating code that looks high-maintenance, judging by complexity scores, duplication, or other poor coding practices?
And so forth.
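Each of these questions can be answered mechanically from the build's own output. As a sketch, a small function can map raw per-build numbers to named risk flags; the metric names and thresholds below are illustrative assumptions, not industry standards.

```python
# A sketch of turning raw build metrics into simple, named risk indicators.
# Metric names and threshold values are hypothetical and would be tuned
# per project; the point is that the mapping is automatic and objective.

def risk_indicators(metrics):
    """Map a dict of per-build metrics to a list of named risks."""
    risks = []
    if metrics.get("unit_coverage", 0.0) < 0.80:      # fraction of code covered
        risks.append("low unit test coverage")
    if metrics.get("failing_tests", 0) > 0:           # count of red tests
        risks.append("failing tests")
    if metrics.get("avg_complexity", 0.0) > 10:       # e.g., cyclomatic average
        risks.append("high complexity")
    if metrics.get("duplication_pct", 0.0) > 0.05:    # duplicated-code fraction
        risks.append("code duplication")
    return risks
```

Run after every build, a report like this gives the PMO a current risk picture without asking anyone on the team to compile it by hand.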
This gives us a collection of technical and functional risk indicators that are both comprehensive and current.
By efficiently automating the capture of different quality metrics, we don't need to ask people to generate this data for us: we can lift it right off the binary. This makes data collection non-invasive to the teams, and less prone to collection error.
Tools such as Cruise and Mingle (both from ThoughtWorks Studios) have dashboards that let people see firsthand the current quality and project status across a number of different projects. This lets people in the PMO look into what's actually happening without burdening the teams, and make far more specific and accurate status reports to project stakeholders.
All told, we're spending less effort and getting a more accurate picture of what's happening within each project in the portfolio.
We now have all of the essential components of the agile PMO. Early in this series, we talked about aligning executive and executor in how we organize, gatekeep and articulate the work to be done. Once we've done that, the data we glean on performance and quality has integrity by virtue of being solidly founded on results achieved, rather than on hope that everything will work out in a future we've mortgaged to the late stages of a project. By automating collection of this data, we can see how a project is evolving day in and day out with far less effort - and conjecture - than we've traditionally had in IT.
In the next and final installment of this series, we'll recap how all of these concepts and practices fit together, consider some caveats, and present some actionable items you can get started with today.