I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Wednesday, December 17, 2008

The Agile PMO: Consistent Project Gatekeepers

In the last installment we took a look at the gap between what the PMO reports out and what's actually happening in a project team. To begin to understand the nature of this gap, we’ll first take a look at what we use for project gatekeepers.



We need to make a clear distinction in an IT project between the means and the ends. We often confuse the two, because what we see day in and day out is that we’re paying for the means of production when, in the end, we’re really acquiring an asset. Unfortunately, this tends to skew our thinking about how we execute, organize, measure our progress and assess our exposure.

Traditional IT projects are mass economy-of-scale exercises: once development begins, armies of developers are unleashed. So in traditional IT we stage large volumes of work to keep the largest and most expensive pool of people – developers – busy in the hopes of maximizing their productive effort. To minimize the chance that development is misdirected (e.g., due to poor requirements) or wasted (e.g., due to poor technical structures), we create checkpoints, or gatekeepers, throughout the project. Satisfy the gatekeeper, so the thinking goes, and we minimize the risk. In traditional IT, the gatekeepers are typically several successive waves of requirements and specification documents, then software, then test results, then a production event.




This may give us lots of gatekeepers, but they’re inconsistent in both degree of effort and results achieved.

Clearly, a small team delivering documentation is nowhere near as significant an event as a large team delivering executable code. But of bigger concern is the latency between the time when requirements are captured and the time they're available as working code in an environment. We don’t know for a fact that our documentation-centric gatekeepers have truly been satisfied until we have a functioning asset. A dozen people can reach a dozen different conclusions after reading the same documentation; the proof of the quality and completeness of documentation is in the delivered software. Inadequacies in documentation may not become apparent until QA or, if we’re lucky, during development. In effect, there’s very little other than opinion to prevent us from developing a toxic asset: bad initial requirements are transformed into flawed derivative artifacts (specifications, code, even tests) as they pass through different stages. And, of course, we not only pass along quality problems, we risk introducing additional quality problems unique to each stage (flawed technical specifications, poor tests). This just adds insult to injury: we’ve not only put ourselves at risk of creating a useless asset, but our interim progress reports are laden with false positives.

One solution often attempted is phased delivery of use cases: the traditional IT steps are still performed, only we make interim deliveries of code to a QA environment and execute functional tests against them. The theory goes that functional success is assured by test cases passing, which, in turn, indicates some measure of “earned value” for the total amount spent. This assumes that the software released to QA on this interim basis is of high functional and technical quality. If it is of low quality – again, think of all the problems that build up when people work in technical or component silos, and all the toxicity we accumulate through the “soft” gatekeepers of project documentation – the blowback to the teams, in the form of a large number of defects raised, will interfere with, and ultimately derail, development. When this happens, it obliterates the economies of scale we were hoping to achieve. Phased delivery of use cases does less to expose problems in a solvable way early in the lifecycle than it does to pile work onto development teams that are already overloaded. It adds noise to the development cycle and confuses decision makers as to what is going on, and why it’s happening in the first place. This may fail a doomed project sooner, but not by much. The real tragedy is that the idea of incremental delivery will be discredited in the minds of the people involved in the project.

By comparison, Agile maintains a steady pace of progress by having all functional effort simultaneously focused on achieving the same result. An Agile team is not an exercise in scale. It maintains a more consistent (or at any rate, less volatile) level of effort over the life of a project. Our gatekeepers are consistent, rooted in certification of the code, not certification of things that describe what will be coded. Either we have delivered the defined requirements or we have not. They either satisfy our technical and functional quality gatekeepers or they do not. They are found acceptable by the business or they are not. We know this with each iteration – every 2 weeks or so – not months or even years after requirements have been penned. Quite simply, because we derive our certification exclusively from the delivered asset and not from things that describe the asset, we’re not confusing the means for the ends.



Just because Agile teams are not exercises in scale doesn't mean they don't scale. To take on a large application, we divide the project into business-focused teams instead of technically-focused teams. "Work completed" is more clearly understood, because we report in terms of business needs satisfied (results) and not technical tasks completed (effort). Reporting progress in a large development program is therefore much more concrete to everybody involved.

However, this doesn’t mean that an Agile project won’t fail. It may. But if it does, it’s far less likely to be a spectacular failure. By paying attention to results as opposed to effort, we spot both trouble and opportunity a lot sooner in an Agile project. This means we can take smaller and less expensive corrective action (reprioritization, technology change, team change, etc.) much earlier. More importantly, we’ll see the impact of those actions on our bottom line results much sooner, too. This is far better than being surprised into making a large and expensive correction late in the lifecycle.

So what does this mean for the PMO? It means that we have to change what it is we’re measuring – the means by which we can declare “victory” at any gatekeeper – if we’re going to change what it is we’re managing. We don’t want our gatekeepers to be rooted in effort; we want them rooted in results. In IT projects, the results that matter are the code and its demonstrable attributes (performance, technical quality, functional quality, etc.), not assurances about the code. We want to see results-based gatekeepers satisfied from the very early stages of the project, and we want them satisfied very frequently. We can do this across the portfolio to reduce execution risk, and with it reduce the probability that we'll get blind-sided.

Changing our gatekeepers is important, but it’s only the first step. In the next installments we’ll take a deeper look at how we organise and execute for development, and the impact that has on the confidence with which we can measure progress. We also need to be aware of how much work we might unintentionally create for people. Setting up these gatekeepers sounds great, but we need to avoid imposing a “metrics tax” on the teams, so we’ll also take a look at how we can make this collection non-burdensome to both team and PMO, and get closer to real-time project metrics.

Wednesday, November 26, 2008

The PMO Divide

This content is derived from a webinar I presented earlier this month titled The Agile PMO: Real-Time Metrics and Visibility. This is the first of a multi-part series.



We’ve all seen it: the project that reports “green” status on its stop-and-go light report for months suddenly goes red in the late stages of development. This is nothing new to IT, as projects suddenly crater all the time. But it raises the question: why does this happen as often as it does?

Program Management Offices (PMOs) are at the nexus of this. PMOs are responsible for keeping an eye on the performance of the IT project portfolio. They sit between the executives who sponsor IT projects and those who execute those projects. This means the PMO is responsible for bridging the divide between the two groups. But this divide is wider than we think. All too often we end up with overworked project managers frustrated by doing double duty managing a team and filling out status reports on one side, and angry, humiliated business sponsors blindsided by sudden project changes on the other.

Let's look at what it means to sit between executive and executor.

Facing “upward” to project sponsors, the PMO needs to be able to report status. It must show it has control over spend and that there is demonstrable progress being made. Facing “downward” the PMO needs to get spend and progress information from delivery teams. But because of the way most IT projects are structured, these aren’t easy questions to answer, and this creates an information gap.

IT projects are often structured by area of technology specialization (e.g., user interface, middle tier, server side, database, specialists in things like ERP systems) or by component (e.g., one team works on the rating engine, one team works on the pricing engine, and so forth). This means that development of a bit of business functionality is splintered into independent effort performed by lots of specialists. Those individually-performed tasks need to be integrated and then tested from end-to-end. Integration is an opaque, under-funded phase most often scheduled to take place late in the project. End-to-end testing – the best indicator of success – can't take place until integration is complete. This means that lots of development tasks may be flagged as “complete,” but they’re complete only by assertion of a developer, not by “fact” of somebody who has exercised the code from end-to-end.

What this means to the PMO is that when it looks “downward” to get an answer for somebody “upward” there’s a fair bit of conjecture in the answer. By deferring integration and testing, the whole of what we have at any given point in time is less than the sum of the parts. Code has been written, but it may not be functional, let alone useful. Measures of progress and spend are therefore highly suspect, because they are really lagging indicators of effort, not forward-looking indicators of results. It also means that when we use effort as a proxy for results, we inflate our sense of progress. In traditional IT, which is effort-centric, there is nothing preventing us from reporting inflated numbers for months on end. The longer we do this, the greater the risk of being blind-sided.
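
To see how wide that gap can get, consider a minimal sketch (in Python, with hypothetical numbers) of effort-based versus results-based reporting on the same project:

    # Hypothetical status for a project organised into technical silos.
    # Effort-based reporting averages task completion per silo; results-
    # based reporting counts only features proven by end-to-end tests.
    silo_task_completion = {"ui": 0.90, "services": 0.85, "database": 0.80}
    features_passing_end_to_end = 0    # integration hasn't happened yet
    features_planned = 20

    effort_progress = sum(silo_task_completion.values()) / len(silo_task_completion)
    results_progress = features_passing_end_to_end / features_planned

    print(f"effort-based 'progress': {effort_progress:.0%}")    # 85% - looks green
    print(f"results-based progress:  {results_progress:.0%}")   # 0% - nothing proven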

This gap doesn’t become a serious problem in each and every project; the degree of exposure depends on the situation in the team. Since we know from experience that some teams seem to succeed while others fail, it’s worth exploring why this is.

In the best case scenario, reporting up to the PMO is a nuisance to a project manager. The data the PMO is asking for isn't what the PM uses to manage the project, so filling out status reports is a distraction. It can only truly be a nuisance and not represent an outright risk, though, if the team itself has all the behaviours and communications in place to complete their objectives in a business context. That is, the sum of the tasks in the project plan might not describe what needs to be done to complete business delivery, but the team itself may have the right leadership and membership so that it takes responsibility for completing the delivery. So, while there may be “leakage” on the project budget and timeline because not everything that the team does is fully and completely tasked out (and is inaccurately tracked in time entry systems and the like), the impact of this leakage is contained because the team is by its very nature working toward the goal of completion. There may be a lot of reasons why this is the case. Perhaps the team has been working together for many years and knows how to build in contingency to cover the small overages. Or perhaps it's simply a team with few skill silos. Regardless of the reason, leakage is contained when the right team dynamic is in place.

In the worst case scenario, people in silos work to complete their tasks to a point where nobody can tell them that their tasks aren’t done. Working to complete tasks, of course, isn’t quite the same as working to complete functionality. Completing UI code, services, and some server-side code does not necessarily define a complete business solution. In very large projects it's not always completely clear who is responsible for the complete solution. Is it the business analyst? The last person to commit code in support of a use case? The project manager? The QA tester? This responsibility void is made more acute by the fact that the “last mile” is the hardest: the steps necessary to integrate all the bits of code so that everything lines up and technically performs, as well as meets functional needs and satisfies non-functional requirements, are always the most difficult. In a large project structured around technology specialism (and very often made worse by a staff of “passengers” fulfilling tasks and not “drivers” completing requirements), we don’t have leakage, we have full-scale hemorrhage. No amount of contingency can cover this.

This means that in traditional IT, the PMO isn't bridging the divide. The data it gets from teams isn't reliably forward-looking. Reporting against task completion inflates progress, while spend data is simply cost-of-effort that doesn't directly translate into cost-for-results. The reported progress is inflated, and cost control is misleading.

This puts the PMO in a situation where it is underwriting the risk of the development capacity that has been sourced to complete a project. Work is being done – we know this from the timesheet data and task orders – but there’s no map from timesheet data to the degree to which a business need is functionally complete, and no way to know that it’s technically sound. In effect, the PMO is the buyer’s agent for an asset and it is underwriting the risk of developing that asset, but it’s not taking informed decisions with the state of the asset in full view at all times. To get visibility, PMOs typically try to scrutinize the minutiae and decompose the project into further levels of detail and precision. Ironically, the greater the specialization baked into the plan, the more likely we are to miss the things that get the software into a functionally complete state. For all of this alleged precision, we may have more data, but in the end we have less information.

How can we bridge this divide? By managing and measuring incremental results, not collective effort. This aligns day-to-day activity with topline reporting. That, in turn, reduces our exposure to late-stage project collapse.

Ultimately, we want the PMO to have real-time and forward-looking information about its project portfolio, and to be able to get that information in a manner that is non-burdensome to project teams. But getting ourselves into this future state will require some re-alignment. In coming posts we'll look at IT organization and practice, as well as what we use as measures for progress and quality, that will allow us to do this. As our first step, we need to reconsider what it is we use for project gatekeepers, basing our decisions not on descriptions of the work we expect to do, but on the actual state of the asset under development.

Monday, October 13, 2008

The Agile PMO - Real Time Metrics and Visibility Webinar - 5 November

There’s a lot riding on IT in the current economic climate. In tight times, businesses rely on efficiency, and IT investments will be expected to create a lot of that efficiency. But while IT assets may help the business tighten up, IT execution must also tighten up to match the times.

That doesn’t mean IT projects have to execute flawlessly. They never will, as there will always be situations and events that challenge even the most experienced of teams. What it does mean is that the people responsible for IT projects will need to make well-informed decisions throughout the life of a project. That requires current, accurate and clear information about what’s actually happening in each project.

IT doesn’t excel at this today. The failure rate of IT projects has been discussed for decades. Yet the most important IT initiatives, subject to the greatest scrutiny, still crater suddenly late in their lifecycles, much to the surprise of their executive sponsors. This exposes the very wide divide between what the PMOs need to know to oversee a portfolio (such as total time spent, features complete and functional QA results) and the indicators that the PMs use to manage projects day-to-day (such as technical task orders, defect counts and high-priority issue lists). This gap also means that the status data that the PMOs eventually get is late and stale.

What this all means is that PMs are doing double duty, executives are getting blindsided, and PMOs are caught in the middle, unable to satisfy either. The collective frustration of all parties is unfortunate, but it may also be little more than a tragic sideline. Capital is scarce and increasingly impatient, and IT will find it even more so if its business partners have little confidence that IT can deliver. Historically, executives have felt backed into a corner when an IT project fails, given little choice other than bailing out a project that has suddenly and surprisingly redlined. But as we're seeing in the global economy right now, historical patterns no longer apply.

There is a better way. Join me on 5 November for The Agile PMO – Real Time Metrics and Visibility. We’ll discuss how Agile practices align day-to-day execution with executive needs, and how the PMO is instrumental in making this happen:

  • Using requirements – not abstractions – as gatekeepers. What does it mean to complete business requirements and not just tasks, and why does this distinction matter?
  • Transparency. How can the PMO get unambiguous line-of-sight into what’s happening on the ground across the portfolio, both functionally and technically?
  • Metrics. We need information, not disparate data points. What is signal, and what is just noise coming out of a project? How does Agile cut through the noise?
  • Collection. We need to collect project data efficiently, otherwise our projects will suffer under the weight of data generation (or simply not report it). How does Agile align with PMO needs so that collecting status data is not a burden to project teams?

I hope you can attend on the 5th.

Registration Details: The Agile PMO - Real Time Metrics and Visibility
Wednesday, 5 November 2008
Time: 12:00pm Eastern Standard Time (US-New York, GMT-5:00)
Registration URL: https://thoughtworks2.webex.com/thoughtworks2/onstage/g.php?t=a&d=842762110

Friday, September 12, 2008

Agile Readiness Assessment Webinar - 19 September

Please join me on Friday, 19 September for An Agile Readiness Assessment, a ThoughtWorks sponsored webinar.

Taking on Agile can appear to be an overwhelming commitment with no obvious place to start. For one thing, Agile is often a significant departure from how a team is operating, requiring organisational changes, new practices, and stricter discipline. In addition, because there are so many different things to be done - from continuous integration to Story based requirements - it's difficult to know what changes to make first. Finally, organisational constraints such as phase-based governance and shared QA can create questions about the extent to which Agile practices will have an impact, and raise doubts as to whether they can be taken on in the first place.

In this webinar, we'll discuss how to overcome stationary inertia and plot a course to Agile practice adoption.

  • How can we critically assess the state of our practices today?
  • What goals should we target given constraints and organisational realities?
  • How do we prioritise what we should do first?

I hope you can attend on the 19th.


Registration details:
Friday, 19 September 2008
Time: 1:00pm Eastern Daylight Time (US-New York, GMT-4:00)
Registration URL: https://thoughtworks.webex.com/thoughtworks/j.php?ED=108081447&RG=1&UID=0

Saturday, August 30, 2008

IT's Identity Crisis

IT lacks a consistent definition of exactly what it does vis-à-vis its organisational peers.

  • Accounting is the language of business.
  • Finance is how business gets capital.
  • Marketing creates customers.
  • Sales brings them in.
  • Operations are how a business creates value.

IT does … what, exactly? Creates new business offerings? Retains customers? Is how business gets done? What do these really mean?

All too often they end up meaning, “would you like fries with that?” When that happens, IT devolves into a cost to be contained, a nuisance to be tolerated.

This ambiguity of purpose is made worse by the fact that IT brings both a language and set of priorities that are of no real interest to the business (e.g., technical "issues"). It’s no wonder IT struggles to justify its annual spend: it has a fundamental identity crisis.

This creates some rather bizarre side effects.

One is that a lot of businesses put IT in a narrow box, giving it a rudimentary but clear definition as a utility that operates at a predictable, consistent cost. The price of this simplification is that IT cannot perform as a competitive capability, but so it goes. Hard costs (“annual IT spend was reduced through strategic sourcing contracts”) are easier to present to shareholders than soft language (“IT gives us a transformational capability”). Having clear line of sight as to where IT spend is going trumps vague promises of competitive advantage.

Another is the proliferation of IT firms on the sell side with very odd-sounding offerings: “We radically transform businesses to invent and reinvent them.” Yes, of course you do. Good luck with that.

But the real tragedy of IT’s identity crisis is that it is significantly responsible for two of its core problems.

One is that IT serves the wrong master (technology) at the cost of the right one (the business). Debates over platforms and tools often overshadow discussions of business need. This is particularly disastrous when the business is drawn in to mediate a resolution. This is why IT typically lacks a seat at the top table of the business.

Another is that IT lacks both quantity of management practitioners and maturity of management practices. Despite involvement in every part of the business, generally speaking IT is not a destination employer for management talent, certainly not to the extent that other business disciplines are. To wit: finance typically attracts top business talent, while more often than not IT promotes engineers or mathematicians with little business education or acumen into positions of management.

Application development exemplifies this. Purchased solutions such as accounting systems or office tools have become business utilities. However, custom applications that support custom operations from which businesses derive competitive advantage defy a utility model. There’s been effort made to bring commodity thinking into appdev, to a point where we’ve created buying patterns that commoditise people. But skills – especially at the high-end of IT – are not ubiquitous and portable: one firm’s technical architect is another’s junior developer. We’ve abstracted what we’re buying into roles and definitions, and in so doing we've made it cheaper to get what we’re buying, but we’re not buying what we really need.

What we’re buying in appdev, of course, is capability. That’s hard to define. But then again, so is IT in the first place.

So having asked the question, what’s the answer? What is IT relative to its peer group of business disciplines?

IT maximises return on invested capital.

IT investments are made for one reason only: efficiency. We can execute operations (e.g., define and trade exotic financial instruments or run a manufacturing plant), comply with regulation, win and retain customers, and keep track of revenue and cash flows all by hand if we must. IT investments may make many opportunities possible that would otherwise not be economically viable, and it may make the burden of regulation less costly to bear, but it’s an efficiency game. This means IT maximises returns by quickly delivering solutions that create business efficiency.

By extension, it also means that IT should be a destination discipline for business talent. Capital that needs to sweat the assets will do so through efficiency of operations. In most businesses, efficiency will be realised substantially through IT, because IT has a hand in every aspect of a business. Any representative of that capital (e.g., the board, the CEO, the CFO) looking for ways to maximise returns will start by leveraging IT. This requires not technology leadership from IT, but business leadership.

So when capital comes calling for that leadership, IT needs to be prepared with an answer. That answer isn't that IT solves "business technology" problems: arguably, they're all contrived anyway. It isn't that IT achieves the minimum cost-per-role-staffed relative to its industry peers: that's abdication of leadership masquerading as fiduciary responsibility. Nor is it to reinvent business: there are still far more low tech than high tech solutions to business problems. IT must answer in terms specific to what it can deliver that creates business efficiency and therefore returns. This is how it fulfills its organisational role to maximise return on invested capital.

Any other answer misses the point.

Tuesday, July 22, 2008

Introducing alphaITjournal.com


alphaITjournal.com

I'm pleased to announce the launch of alphaITjournal.com, an online magazine focused on the execution, management and governance of IT investments that can produce outsized (or "alpha") returns. The mission and purpose are summarised in the welcome message on the site and in the press release that was issued in early July.

There are a few things that I hope stand out about the site.

The first is the site layout. It's designed to give attention to writers and their articles, and to make the content easy for readers to navigate without being overwhelmed by a jumble of competing messages.

Another is the absence of advertising. Aside from the "Presented by ThoughtWorks" message on the left navigation and bottom menu, there is no advertising on alphaITjournal.com. Because we are practitioners first, this affords us the flexibility to deal with the changing project demands and work priorities that will inevitably affect content production and editing.

Still another is the continuous release of content. Rather than having monthly editions, there will be one or two articles released each week. This will make it easier for the reader to stay current, and it will also make it easier to sustain fresh content on the site.

Last but certainly not least is the diverse community of writers. While ThoughtWorks is sponsoring the site, the community of writers comes from all corners of the IT universe. They, in turn, are producing content on a diverse collection of topics, all with a common theme: how to maximise returns on IT investments.

I hope that alphaITjournal.com consistently provides compelling content so that you'll be a regular reader, even a promoter: add it to your RSS news reader, share it with your peers and customers, add a link to it from your blog. I also hope that you'll consider being a contributor. We have a number of writers, but we are always looking for more. If you have ideas for individual articles, a series or a column, drop me an email.

We've just gone live, so there are a few additions we'll make once we establish our rhythm (such as reader comments). Meanwhile, if you haven't done so already please give alphaITjournal.com a visit.

Sunday, June 29, 2008

Agile Made Us Better, but We Signed Up for Great

This content is derived from material presented in a ThoughtWorks-sponsored webcast in June 2008. A two minute video presentation of this material is available. A complete webinar re-broadcast, including audience Q&A, will be available soon.



The popular press makes Agile sound like nirvana. Practitioners speak of it in nearly religious terms. Yet we often find that IT teams are underwhelmed after going “Agile,” even after having expended considerable effort on making the change.

Why is this? Is there too much hype around Agile? Could it be that it doesn’t work? No, it’s because they’ve fought only half the battle: they got some of the practices, but not the behaviours.

When teams or departments decide to “go Agile” they’re typically moving away from what they’re doing now, as much as if not more than they’re moving toward what it is they want to be doing. That is, they’re trying to get away from regressive behaviours where the way work is done impedes responsiveness, or they’re trying to get away from chaotic behaviours, where people are pursuing responsiveness at the cost of consistency and quality.

Changing the way work is performed is no simple task. Making investments in how work is done is extra effort above and beyond what has to be done just to keep up with the day-to-day. And there’s stationary inertia in IT: a lot of practice and theory have roots dating to the Eisenhower and Churchill eras. Getting away from regressive or chaotic states takes a lot of effort, and that effort isn’t necessarily sustainable.

No surprise, then, that many IT teams lose their appetite for change once they’ve shed their bad practices in favour of minimally good ones. But good practices are not the same as good behaviours. And that’s what separates the “functionally Agile” teams from the truly responsive ones. Do developers have a Pavlovian reaction when the alert goes out that the build is broken or are they content to leave it to somebody else? Are people co-located and directly engaged with each other in the execution of team responsibilities, or do they simply sit near each other still working in silos and swim-lanes?

Agile is not a Boolean question. There is no single thing you can do, or tool you can adopt, that will make your team “Agile.” It is a collection of practices. The extent to which these practices are mature in a team determines how responsive the team can be. The more mature states of practice require aligned behaviours.

This isn’t academic. With several colleagues, I’ve constructed an assessment tool called the Agile Maturity Model. We’ve looked at 10 dimensions – including project management, requirements, development and configuration management – and identified consistent patterns of progression, or maturity, in the way people and teams move toward more agile practices. For example, a team that today performs its build manually and infrequently will not be able to cope with a build that automatically fires with each code commit and fails if code quality levels are below a specified threshold. The same is true for collaboration: a team that communicates requirements or project status by presentation is not going to get much mileage from automated collaboration tools. Durable practice results from taking incremental steps. This is how we gain mastery.
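
To illustrate the more mature end of that build progression, here is a minimal sketch of the kind of quality gatekeeper a CI server might run on every commit. The threshold, the report location and the Cobertura-style report format are assumptions for illustration, not prescriptions from the model:

    # A commit-triggered quality gate: parse a coverage report and fail
    # the build (non-zero exit) if line coverage is below the threshold.
    import sys
    import xml.etree.ElementTree as ET

    COVERAGE_THRESHOLD = 0.80          # hypothetical team standard
    REPORT_PATH = "coverage.xml"       # assumed Cobertura-style report

    root = ET.parse(REPORT_PATH).getroot()
    coverage = float(root.get("line-rate", "0"))

    print(f"line coverage: {coverage:.0%} (threshold {COVERAGE_THRESHOLD:.0%})")
    if coverage < COVERAGE_THRESHOLD:
        print("quality gate failed: the build is broken until coverage is restored")
        sys.exit(1)                    # CI treats a non-zero exit as a failed build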

A maturity model helps us understand what it is we are doing as well as what it is we are not doing. That it’s based in experience makes our path to responsiveness less a matter of opinion, and more a matter of fact. But the real value is that it gives us some insight into the cost and the returns of taking the next steps. For example, perhaps if our frequently executing build were a continuously executing gatekeeper of quality, we could eliminate hours of rework, lost productivity and late nights because of bad builds being released into an environment. Or perhaps we wouldn’t have missed a subtle shift in the business priority had we been working as a team to deliver small but complete business requirements instead of technical tasks. A maturity model helps us to clearly pinpoint our best opportunities for change.

Using the model, we can also index where we are. There’s merit in an index, in setting some quantitative value for our target, historic and current states. It helps us to be more communicative about our strengths as well as our deficiencies. But the point of having an index isn’t to score or grade. The model isn’t our team, and the model doesn’t give results to our business partners. All it does is give us an indicator of the extent to which we’re past the point of doing things that undermine responsiveness, and at a point where we’re behaviourally aligned for it. Or, that we’re not yet past that point. An index is an indicator that helps us frame our situation; it is not our sole purpose. Process is important, but we’re on the payroll to deliver solutions; we’re not on the payroll just to have really great processes.

There’s nothing wrong with being “functionally Agile.” Breaking free of the restrictive practices or simply getting some control over chaos is a better situation for an IT team, and usually is the result of significant effort. But don’t mistake it for being organizationally responsive. Recognize there are degrees of practice and find the optimal combination for your team or department. Above all, hold your teams to the expectation that they will not just perform to a set of practices, but behave in such a way that they maintain the highest state of readiness for whatever comes. Achieve that, and your IT organization will be more resilient to threat and better able to capitalize on opportunity.

Tuesday, May 27, 2008

The Moral Hazard of IT projects

The longer an IT project is expected to take, the greater the risk of moral hazard: i.e., that IT will provide poor information to its business partners or have incentive to take unusual risks to complete delivery.

This is not borne of maliciousness. People on an IT project are not necessarily out to defraud anybody. It may simply be that people incompletely scope the work, make assumptions about skills and capabilities, or are overly optimistic in estimates. This creates misleading project forecasts, which, in turn, lead to a disappointing asset yield.

This is the raison d'être for the rules-based approach to IT: improve rigor in scoping, estimating and role definition, it is argued, and projects will be on-time and on budget. Unfortunately, this won't accomplish very much: the moral hazard that plagues any IT project is not a product of poor practice, but of behaviours.

Rules-based IT planning assumes that each person in a team has an identical understanding of project and task, and is also similarly invested in success. It ignores that any given person may misunderstand or outright disagree with a task, a technology choice or a work estimate. These differences amplify as people exit and join a project team: those who are present when specific decisions are taken – technical or business – have a context for those decisions that new people will not. The bottom line is, there is a pretty good chance that any action by any person will not contribute to the success of the project.

Complicating matters is the ambiguous relationship of the employee to the project. The longer a project, and the larger a team, the more anonymous each individual’s contribution. This gives rise to IT’s version of the tragedy of the commons: because everybody is responsible for the success of a project, nobody takes responsibility for its success. The notion that “everybody is responsible” is tenuous: success or failure of the project may have no perceived bearing on any individual’s status as an employee. And, of course, people advance their careers in IT by changing companies more often than they do through promotion.

But by far, the biggest single contributing factor to moral hazard is the corporate put option. There’s a long history of companies stepping in to rescue troubled IT projects. This means people will expect that some projects are too big or too important to fail, and that the business will bail out a project to get the asset.

All told, this means that the people working in a traditionally managed IT project may not understand their tasks, may perceive no relationship between project success and job or career, and may believe that the company will bail out the project no matter what happens. There might be a lot of oars in the water, but they may not be rowing in the same direction, if at all.

Especially for high-end IT solutions, the rules-based approach to IT is clearly a fallacy: any “precise” model will fail to identify every task (we cannot task out solutions to problems not yet discovered) and every risk (project plans fail to consider external forces, such as dynamics in the labour market). Rules feign control and create a false confidence because they assume task execution is uniform. They deny the existence of behavioural factors which make-or-break a project.

A rules-based approach actually contributes to moral hazard, because the tasks people perform become ends in and of themselves. To wit: writing requirements to get past the next “phase gate” in the project lifecycle is not the same as writing actionable statements of business need that developers can code into functionality.

Work done in IT projects can end up being no different from the bad loans originated to feed the demand for securitised debt. At the time development starts in a traditionally managed project, all we know is that there are requirements to code (e.g., mortgage paper to securitise). Further downstream, all we know is there are components to assemble from foundation classes (e.g., derivatives to create). Nobody touching the details of the project has responsibility for its end-to-end lifecycle; once a detailed artifact clears the phase gate, that person is done with it. This is supplemented with misguided governance: quality and completeness of intermediate deliverables aren't reconciled to a working asset but to an abstraction of that asset, the project plan.

Just as we don’t discover defaults until long after the bad paper has entered the securitisation process, we similarly don’t discover problems with specifications or foundation code until late in the delivery cycle. There's typically only a minor provision (in IT terms, a “contingency”), meaning we can absorb only a small amount of “bad paper” in the project. And because it comes so late in the cycle, the unwind is devastating.

This does not mean that IT professionals are untrustworthy. What it does mean is that there must be a short impact horizon for every decision and every action. Our top priority in managing IT projects must be to minimise the time between the moment a requirement is articulated and the moment it is in production. That means the cycle time of execution – detailing requirements, coding, testing and releasing to production – should be measured in days, not months and years. This way, the results of each decision are quickly visible in the asset to everybody on the project.

Short impact horizons align behaviour with project success. Each person sees evidence of their contribution to the project; they do not simply pass the work downstream. A project may still go off course, but it won't do so for very long; a small correction is far less costly than a major unwind. And, of course, we can extract better governance data from an asset than we can from a plan.

Best of all, we’re not backstopping the project with the unwritten expectation that the business may need to exercise its put option.

Monday, April 28, 2008

Rules Versus Principles

In the wake of a credit market seizure, illiquid investments, $245 billion of write-downs and losses,[1] collapsing funds and financial institutions, and no indication as to where it’s going to end, US capital markets are facing significant changes in how they're regulated. Hedge funds are a flashpoint. There are about 8,000 funds managing some $2 trillion of assets,[2] and there is no way of knowing whether or not there’s a large write-down looming somewhere among them. Indeterminate counterparty risk in a highly interconnected financial system means there’s a chance capital markets could get blindsided yet again, so hedge funds are front and centre of the regulatory debate.

There are two schools of thought over how hedge funds should be regulated.

Members of Congress are calling for strict, rule-based regulation. Very few industries have a track record of successful self-regulation, and capital markets firms have incurred more than a few self-inflicted wounds of late. Rule-based regulation calls for tight controls on activity. Transparency is an assumed byproduct: if actions are pre-defined, everybody will know exactly what everybody else is up to. There is also an “I pay, I say” dimension: if the US taxpayer could end up footing the bill, the taxpayer must have the opportunity to set the rules. The champions of rule-based regulation believe this is accomplished through control and regulation, imposed through legislation and agency.

The US Treasury department is agitating for principles to play a greater role in regulation. Because capital is globally mobile, markets must innovate to remain competitive. Financial markets are innovating at a fast clip. Rules can't be written as quickly as markets evolve. Principle-based regulation posits that compliance with best practices is the best way to facilitate innovation while retaining transparency. Advocates of principle-based regulation argue that it is in everybody’s best interests to voluntarily comply, as compliance guarantees consistency – and with it transparency, liquidity and confidence – in capital markets.

This debate mirrors a similar phenomenon in IT.

The traditional approach to IT project management is consistent with “regulation by rule.” This camp values practices such as deterministic project plans, highly detailed task orders, explicit role definitions, and timesheet-based project tracking. The theory is that consistency is achieved through meticulous control; any deviation from plan is visible and immediately correctable. At the other extreme are the Agilists, who champion regulation through principle. This camp values practices such as test-driven design, continuous integration, co-located and cross-functional teams, short development iterations, and frequent releases of software. They argue that innovation, transparency, consistency and ultimately project success result from compliance with best practices more than from adherence to a collection of rules.

Not surprisingly, the ideological arguments in IT are similar to their capital markets counterparts. Those who advocate the traditional approach argue that top-down control is essential, and that best practices are ignored by teams when things are going well. How can there be self-regulation in an industry notorious for significant overruns and spectacular project failures? Why would a business abdicate responsibility for oversight if there's a risk it will have to bail out a project? The Agilists argue that top-down control is a myth, and that everybody has a vested interest in adhering to best practices. How can anybody expect that deterministic project planning will keep pace with changes and discoveries made during development? And how can we expect innovation in an environment stifled by bureaucratic control systems that are not aligned with day-to-day execution?

“Control” is elusive in IT, particularly at the high end. Applications with the potential to yield significant business impact typically involve new processes or technologies. In these cases, development is an exercise in continuous problem solving, not rote execution. It isn’t practical to create deterministic project plans for the delivery of solutions not yet formed to problems not yet discovered. Additionally, history has shown that regulation and control do not offer deliverance from failure, let alone disaster. As US Treasury Secretary Henry Paulson commented in the aftermath of the Bear Stearns intervention, “I think it was surprising … that where we had some of the biggest issues in capital markets were with the regulated financial institutions.”[3] The same can be said about IT. Rules offer no guarantee of effective risk management, as time and again we have seen delays or functional misfits announced late in the lifecycle of even the most tightly “controlled” IT project.

If IT is to be a source of innovation and business responsiveness, it needs disciplined execution more than it needs imposed rules. Unfortunately, “disciplined execution” doesn’t describe how the vast majority of IT is practiced today. IT has launched its share of self-targeted missiles over the years, and its track record remains poor. On top of it, buying patterns increasingly relegate IT to utility status; they don't elevate it to strategic partnership. Principle-based regulation may be appropriate for IT, but it faces significant headwinds.

This debate will affect the role and relevancy of IT in the coming years. There is an opportunity for IT to take leadership in this debate, but it can do so only if it has its house in order. Without principled execution, IT will increasingly be treated as a utility, regulated by rule. But by adhering to best practices, IT can demonstrate an ability to self-regulate. This will allow IT to strike a balance between effective practices and the rules with which it must comply, and position itself to be a driver of alpha returns.


[1] Brinsley, John. “Treasury Panels Lay Out Hedge Fund ‘Best Practices’.” Bloomberg.com, 15 April 2008.
[2] Ibid.
[3] Secretary Paulson, as quoted in Paletta, Damian and MacDonald, Alistair. “Mortgage Fallout Exposes Holes in New Bank-Risk Rules.” The Wall Street Journal, 4 March 2008.

Thursday, March 27, 2008

A Margin Call on Leveraged Time

IT is primarily a business of people solving problems during the creation of assets that increase EBITDA. Problem solving requires talent, and most IT organisations have to contend with a shortage of talented people. To some extent this reflects limitations of the labour market. It’s also economic: highly capable IT professionals aren’t inexpensive, and most firms struggle with budgets and costs. To get by, the experience and capability of a core few is expected to support a very large number of staff. Because IT projects are work effort delivered in time, this is, in effect, a leverage of people’s time.

Consider how leverage works. If we invest $4 of our own capital and $6 of borrowed capital into a $10 asset, and that asset increases in value by 20% in one year, we’ll yield $2 of profitability. That will considerably eclipse the $0.80 that our own capital alone would earn in that investment, provided the interest rate on the debt doesn’t exceed 20% annually. The same thinking applies to how we invest the time of our most capable IT professionals: if we create teams to take the burden of rote coding off the shoulders of the most capable people, we should be able to produce more IT assets and thus drive greater returns from IT. This can be very attractive, especially if IT is engaging in labour arbitrage, sourcing staff globally at lower costs. Indeed, the cost per hour may permit contracting staff in multiples of 2x, 3x or even 4x. There is also quick impact: the income statement improves as costs drop dramatically, and the notional value of the IT assets that our core (and expensive) capability is producing is quite high relative to their total numbers. There is a powerful temptation to overload on leverage: the higher the leverage, the bigger the payday if our bets pay off.

But our bets don’t always pay off. Suppose that $10 asset drops in value by 20%, to $8. We’re still on the hook for the $6 we borrowed. When assets erode, debt holders will require additional capital as a sign that we’re good for the loan. This is what is known as a margin call. Suppose the margin requirement is 30% – that is, suppose that our broker requires that we cover no less than 30% of a position with our own capital. The erosion of our investment to $8 would mean that our $4 original investment is now worth $2, and our own capital is 25% of the total value of the investment. We need to put up more capital to restore this to the margin minimum of 30% of the now-$8 asset: an additional $0.40. We may have to liquidate positions in other investments to come up with that 40 cents. The higher the leverage, the greater the pain: our own capital in the asset has eroded, and we’re still on the hook for the loan at whatever interest rate we’re paying. We now face a difficult decision: cashing out this investment now will post a loss, while re-investing to maintain our position in the asset might be throwing good money after bad.
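
The same arithmetic, worked through in a short Python sketch using the illustrative figures from the two paragraphs above:

    # Leverage on the way up and the margin call on the way down.
    own_capital, borrowed = 4.00, 6.00
    asset = own_capital + borrowed                  # the $10 asset

    levered_gain = asset * 0.20                     # +20%: $2.00 on $4 of our capital
    unlevered_gain = own_capital * 0.20             # versus $0.80 without leverage

    asset_after_drop = asset * 0.80                 # -20%: asset now worth $8.00
    equity = asset_after_drop - borrowed            # our $4 is now worth $2.00
    equity_ratio = equity / asset_after_drop        # 25%, below the 30% requirement
    margin_call = 0.30 * asset_after_drop - equity  # $0.40 of fresh capital required

    print(f"gain: ${levered_gain:.2f} levered vs ${unlevered_gain:.2f} unlevered")
    print(f"equity is {equity_ratio:.0%} of the asset; margin call: ${margin_call:.2f}")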

Consider this again in operational terms. Suppose a “leveraged team” fails to meet expectation, either because functionality delivered is wide of the mark, or technical quality is sub-optimal, or both. Time has been invested with the expectation that this team would succeed, and that time has been lost. We now need to invest additional time to bring that particular asset into an acceptable state. Most likely, we're going to call on more talented people to do so. Since they are few in number, we're going to have to liquidate a position in another investment, directing those people’s time to shore up this investment. The operational decision is just the same - and every bit as painful - as the financial one: walk away or reinvest.

A leveraged IT project that fails can trigger a capability liquidity crisis. The more we need to invest to rescue this project, the more capability we'll need to draw down from across the portfolio. When this happens, the IT income statement very rapidly sours and the high notional value in the IT portfolio is obliterated.[1]

To prevent a rapid de-leveraging, we may need to make a capability "injection.” Ideally, this is an exercise in sourcing top IT talent in a project rescue mission. In addition, the rescue team will very often get that which it needs to succeed: co-located facilities, access to business partners, hardware and tools, etc. Capability injections can be costly, but they prevent a greater disaster across the portfolio.

This assumes, of course, that a project can make effective use of capability. Even when the business domain and underlying technologies are relatively simple, IT projects can become situationally complex if a team has been in over its head for a long period of time. Decisions made inexpertly compound over the life of a project. Very often, this means a lot of esoteric knowledge must be mastered before a person can contribute to the project. The more esoteric the knowledge, and the more time it takes for people to become fluent in how to get things done, the less penetrable the project. Top talent will be frustrated in attempts to get work done in an (unnecessarily) complex environment. Meanwhile, those who “get things done” do so through mastery of a set of circumstances (that is, abundant esoteric characteristics) that cause more harm than good for the business. Such a project is capability illiquid and is resistant to rescue efforts. This creates a worst-case scenario in the IT portfolio: maintaining the status quo is perpetually expensive, while the price of rectifying the situation may be staggering. Either way, yield on the asset this team produces will fall far short of expectations.

There will always be some degree of capability leverage in IT projects, if for no other reason than there will always be incongruities in talent, skill and experience among members of a project team. Leverage is most effective when it is used to develop the capability of the entire team through transfer of knowledge and structured skill acquisition, so that individual team members are capable of independently taking competent decisions that are aligned with governance expectations. An investment in people's capability reduces the risk and impact of a margin call. Of course, this doesn’t just happen by itself: skill transfer must be a mission objective, and teams don’t engage in this type of behaviour naturally unless that expectation is clearly set. Nor can capability development be taken for granted. There is no “capability index” in IT, so it is essential to have a sense of what the desired future state of a leveraged IT team should be once the leverage unwinds – and to have objective criteria that define that state. Otherwise, there is little assurance that any given delivery team is not a margin call waiting to happen.

There is no shortage of opportunities to leverage IT capability, but there are few opportunities to wield it in a risk-responsible manner. Prudent governance requires that IT manage itself and its suppliers to mitigate capability risk so that a project isn’t over-leveraged, to be at the ready to source capability and bring it to bear on a situation, and to keep projects in a position to absorb it. Failing to do so is a lapse in governance. Doing so successfully balances risk and reward.

[1] The great de-leveraging we're seeing in the financial world is both rapid and devastating. By way of example is Carlyle Capital: leveraged to 32 times equity, it couldn't meet margin calls as asset values cratered. breakingviews.com produced a splendid bit of analysis titled Carlyle's Comeuppance.

Friday, February 29, 2008

Minimising the Speculative Risk of IT Investments

The cost of IT is often confused with its value. Consider earned value management: delivery, time and cost are combined in an attempt to better represent project performance. This might show the rate of cash burn against total expected effort by a development team, but it isn’t an indicator of value as the name might imply. It is simply another way to present cost. Cost is a measure of money out of pocket, whereas value is a measure of returns. The cost of an IT project is at best the liquidation value of the project – the capital that could be raised by selling the intellectual property produced. But it is not value. Value is return, and like any use of capital, an IT investment has to provide a return that exceeds the firm’s cost of capital.
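
To make the point concrete, here is a minimal sketch (with hypothetical figures) of the standard earned value quantities; notice that every input and every output is denominated in money:

    # Standard earned value management quantities. Every term is a
    # monetary amount: EVM re-presents cost; it does not measure the
    # return the firm earns on the asset.
    bac = 1_000_000          # budget at completion
    percent_complete = 0.40  # claimed share of planned work performed
    pv = 500_000             # planned value of work scheduled to date
    ac = 550_000             # actual cost incurred to date

    ev = bac * percent_complete  # "earned value" of work performed
    cpi = ev / ac                # cost performance index (<1: over budget)
    spi = ev / pv                # schedule performance index (<1: behind plan)

    print(f"EV=${ev:,.0f}  CPI={cpi:.2f}  SPI={spi:.2f}")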

So what is the value of an IT project? Equities are valued by asking, “what is the market willing to pay for a dollar of profitability?” Equity markets are far more liquid, and more sophisticated in their measures: for example, we have P/E ratios that help us gauge whether a firm’s valuation is overweight or underweight relative to forward expectations of returns (specifically, the increase of market capitalisation and dividends). IT projects don’t lend themselves to this much technical analysis, but we can borrow some of the concepts.

The intrinsic value of an IT asset under development is the net present value of future profitability that is expected to be derived from putting the asset to work. From an IT perspective, intrinsic value has both tangible and intangible components to it.

  • The tangible value is the return realised from that portion of the solution that is in production and contributing to EBITDA. Something in production is complete and increasing bottom line results, so the benefit of this asset isn’t ambiguous.
  • The intangible value is entirely speculative: how much additional profitability does the business expect to derive from what remains to be delivered?
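
A minimal sketch of this decomposition, with hypothetical cash flows and discount rate:

    # Intrinsic value as the net present value of expected future
    # profitability, split into a tangible component (features already
    # in production) and a speculative one (features still promised).
    def npv(cash_flows, rate):
        # Discount each year's expected profit back to the present.
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    cost_of_capital = 0.10
    tangible = npv([120_000, 120_000, 120_000], cost_of_capital)  # delivered
    speculative = npv([200_000, 200_000], cost_of_capital)        # still to deliver

    print(f"tangible ${tangible:,.0f} + speculative ${speculative:,.0f}"
          f" = intrinsic value ${tangible + speculative:,.0f}")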

All IT solutions are of speculative value until they are delivered and expectations of returns are shown to be viable. Tens of thousands of person hours and millions of dollars may be expended in development of millions of lines of code, but unless that code is in production, the firm derives no value from the investment.

Like all speculative investments, returns are at risk. The risk with which IT must be most concerned is delivery risk. Until an asset is released to production, there is a probability that the asset will fail to be developed correctly. The possibility of failure in delivery creates the threat of reduced returns. Delivery risk is eliminated once software is in production.1

The speculative component of an IT asset is at greater risk the further into the future it is expected to be completed. The probability that business, people or technology will change increases with each additional day that an IT asset is being developed. The probability of slippage in functionality, time and investment introduces volatility to the IT portfolio.

Volatility can generate windfall returns in finance. Market speculation can wildly change the value of a stock or bond relative to its purchase price, and the holder can exploit this delta (up or down) to book a profit. By comparison, returns driven by operations are rarely so flexible. Software delivery is work performed over time, and time cannot be recovered. Experience has shown that an IT project is more likely to suffer delays in delivery and depress returns than it is to accelerate delivery and increase returns. Volatility in delivery is thus a downside force and needs to be minimised.

Traditionally, IT has attempted to apply deterministic management as a means of reducing volatility. We build elaborate project plans, map out a predefined collection of tasks, plot task order down to the hour, and track what people do day by day as our barometer of progress. This top-down approach has low tolerance for anything that happens in the “left-to-right,” or over the course of time, and offers little more than wishful thinking. “Plans are useless,” said Dwight D. Eisenhower, “but planning is indispensable.” Intricate plans that forecast future activity in detail disregard the impact of changes that occur over time in staff, capability, business or technology. Indeed, deterministic project planning assumes that the business solution is static, that staff are interchangeable, and that any technology change is turnkey. Experience has shown overwhelmingly that this is not the case. Projects with intricate plans tend to go through continuous cycles of replanning as things change. Deterministic management doesn’t decrease volatility; it simply adds overhead to the IT project and drives down returns.

A plan cannot increase the tangible value of an IT asset. Only the asset can do that. We should therefore focus our energies on rapid and incremental delivery. Tangible value is realised each time functionality is delivered that the business can put to work. With each incremental delivery, and every increase in tangible value, the intangible or speculative value decreases.2 The reduction in speculative value at risk is a reduction in the total value that can be depressed by delays in delivery. Thus, early delivery reduces the risk that speculative value will not be realised. Simultaneously, it reduces the volatility of returns.

By itself, IT doesn’t generate business value. The business must consume the assets that IT produces in such a manner that it can put them to work efficiently and profitably. But that doesn’t mean IT is just a cost center. It can, in fact, drive alpha returns for a business. Corporate capability is largely driven by technology. IT is often the plurality, if not the majority, of spend on business initiatives. Incremental delivery of system components can increase returns on corporate investments where time is more important than cost. With capital under management that is expected to deliver returns, IT governance has a portfolio management obligation. As portfolio managers, IT must do things that maximise yield of invested capital. Concomitant with maximising yield is minimising risk. Risk is minimised through asset realisation.


1 Provided, of course, that what is delivered is functionally and non-functionally fit, and of sufficient quality. These should never be assumed outcomes.
2 New information may lead us to conclude that the impact of the asset will be different than originally forecast. For example, an asset under development might suddenly provide more impact to a firm because of changing market dynamics, making some portions of the application of greater value than others. For sake of simplicity, intrinsic value is assumed constant in this example.

Monday, January 28, 2008

IT Effectiveness is Measured by Asset Yield


We tend to consider an IT project successful if it is delivered “on time and on budget.” From an IT governance perspective, however, this doesn’t tell us all that much. At best it is an indicator of basic operational competence – that fundamental project controls are working. At worst it’s a false positive, indicating nothing more than that the team was particularly lucky that all assumptions held true, or that its contingency was sufficiently large to absorb the impact of those assumptions that didn’t.

As a measure of IT effectiveness, it is incomplete. The key element missing is whether or not the project met its business objectives. Indeed, measuring systems by their compliance with plan ignores the mission of the project: it focuses on execution to the exclusion of results. That is, “on time on budget” at best assumes that the business goal was met, and at worst abdicates responsibility for it. The objective is to create a business solution, not simply to perform tasks to a forecast.

Business solutions are business investments. These investments are no different from any other use of the firm’s capital. They are made for one reason, and only one reason – to maximise profitability. Sometimes they are initiatives, for example when new systems are developed to support new trading products. Sometimes they are reactive, driven by the need to comply with a new regulation or respond to competitive market offerings. A firm invests in an IT solution as a way to maximise operational efficiency, and thus EBITDA. If IT application development produces assets which drive EBITDA, we should manage IT projects to maximise asset yield.

Asset yield tells us how effective IT is in its stewardship of the money with which it is entrusted by the firm. With this measure we have a business-oriented way to answer the first governance question: are we getting value for money? This is very powerful. It allows us to take better oversight decisions: we quickly identify where IT is contributing to breakaway results, and where the firm would be better off putting capital into Treasuries instead of investing in operations because IT is letting the side down. It also improves the day-to-day execution of our different projects: behaviours1 should align with the business goals (the business solution), not an abstraction of the goals (the project plan). We thus get a simple litmus test for the day-to-day decisions we take relative to the first governance question: does this improve asset yield?
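
As a sketch of that litmus test, with entirely hypothetical project names and numbers: compute the yield each project earns on the capital invested in it, and compare it with a risk-free alternative such as Treasuries.

    # Hypothetical portfolio-level litmus test: is each project earning its keep?
    treasury_yield = 0.04  # assumed risk-free alternative

    projects = {
        # name: (capital invested $, annualised return generated $)
        "trade capture rebuild": (6_000_000, 1_800_000),
        "reporting automation": (2_000_000, 60_000),
    }

    for name, (invested, annual_return) in projects.items():
        asset_yield = annual_return / invested
        verdict = "keep investing" if asset_yield > treasury_yield else "buy Treasuries"
        print(f"{name}: yield {asset_yield:.1%} -> {verdict}")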

By focusing on asset yield, we become aware of something else: time-to-market has a greater impact on yield than cost does. In the on time, on budget world, it’s usually tolerable for a project to be late in delivery if the budget implications are minimal. This is because in most corporations it’s far easier for a manager to be granted additional time (people are on the payroll anyway, so it’s a committed cost) than additional budget (annual expense controls make it difficult to change allocations). The time value of capital is invisible to most managers, and its impact is noticeably absent from project decision making. To wit, project managers rarely request additional budget to deliver a project ahead of plan, even when doing so would maximise business returns.

The fact that the time value of money is invisible to most middle management has disastrous consequences for a firm. An IT asset that is not in production yields no returns. An IT asset will yield more business benefit than it costs to develop; otherwise, the firm wouldn’t invest in it. That means each month of delay depresses yield, while the incremental cost of accelerating delivery can increase yield. Even further, lethargic delivery within budget will yield far less than aggressive delivery in excess of budget.

The converse is also true: the sooner the asset is in production, the greater the yield. Consider a project with an estimated 12 month / $6mm development cost and 17% annual maintenance that will contribute an annualised $30mm in profitability for a firm with an 8% cost of capital. A “big bang” deployment after 12 months yields a return above the firm’s cost of capital, but it is both lower and realised later than if those returns are partially realised with incremental releases (e.g., at 3 months and 9 months) that provide modest contributions to EBITDA (say, 10% and 30% of the projected impact, respectively). It is also obvious that the disparity between incremental and single-event delivery is amplified in the event of delay.
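
A back-of-the-envelope rendering of that example, under simplifying assumptions: costs and benefits are spread evenly month to month, cash flows are discounted monthly, the 17% maintenance is ignored for brevity, and we evaluate a fixed 24-month horizon.

    # Big bang vs. incremental delivery, using the figures above.
    monthly_rate = 0.08 / 12        # 8% annual cost of capital
    dev_cost = 6_000_000 / 12       # $6mm development spread over 12 months
    full_benefit = 30_000_000 / 12  # $30mm annualised contribution

    def npv(cash_flows):
        return sum(cf / (1 + monthly_rate) ** t
                   for t, cf in enumerate(cash_flows, start=1))

    def plan(benefit_share):
        """NPV of monthly benefits per the release schedule, net of dev costs."""
        return npv([benefit_share(m) * full_benefit - (dev_cost if m <= 12 else 0)
                    for m in range(1, 25)])

    # Big bang: no benefit until the month-12 release.
    big_bang = plan(lambda m: 1.0 if m > 12 else 0.0)

    # Incremental: 10% of the benefit after month 3, 30% after month 9, all after 12.
    incremental = plan(lambda m: 1.0 if m > 12
                       else 0.30 if m > 9
                       else 0.10 if m > 3
                       else 0.0)

    print(f"big bang NPV:    ${big_bang:,.0f}")
    print(f"incremental NPV: ${incremental:,.0f}")

On any positive discount rate the incremental schedule comes out ahead, because it collects benefits in months 4 through 12 at identical cost; and if the final release slips, the big bang plan earns nothing while the incremental plan retains the returns already in production.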

To be a strategic capability, IT leadership must shift focus away from cost minimisation in favour of time to market. The effort spent in recent years by IT departments to reduce spend is effort misplaced for strategic IT. This is not only because volatile currency markets have made labour arbitrage tactics less effective, but because we’re focused on the wrong end of the equation: whether we’re spending $200 / hour or $20 / hour for a developer, an asset that the business cannot use is of no value. Time, not cost, is the lever IT should be looking to throw. This means IT must be capable of delivering in short timeframes and working in greater collaboration with its business partners to produce assets with a high degree of solution fitness. To maximise yield, it is more important to build this capability than to source low-cost capacity.

Making this the business reality isn’t easy. IT doesn't typically make incremental deliveries; it makes single deliveries following long development lifecycles. Similarly, most business operations are not prepared to deal with the training and workflow changes necessary to consume frequent solution deliveries. But do these things it must. With a rising cost of capital, M&A, stock buybacks and startup investments are out of reach. Compounding this, large debt loads coupled with a soft economy will put even more pressure on firms to achieve bottom-line results. Investments in operations are now that much more critical to the success – if not the sustainability – of a business. Hustle will be the order of the day, urgency the imperative. Well-governed IT is the centerpiece of executing this strategy.


1 There is an important distinction to make here. IT is not a business of assets. It’s a business of people creating assets. We can measure results by focusing on asset yield, but those yields are achieved only through the capability and successful execution of the people who create the assets.