I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Friday, September 12, 2008

Agile Readiness Assessment Webinar - 19 September

Please join me on Friday, 19 September for An Agile Readiness Assessment, a ThoughtWorks-sponsored webinar.

Taking on Agile can appear to be an overwhelming commitment with no obvious place to start. For one thing, Agile is often a significant departure from how a team is operating, requiring organisational changes, new practices, and stricter discipline. In addition, because there are so many different things to be done – from continuous integration to Story-based requirements – it's difficult to know what changes to make first. Finally, organisational constraints such as phase-based governance and shared QA can create questions about the extent to which Agile practices will have an impact, and raise doubts as to whether they can be taken on in the first place.

In this webinar, we'll discuss how to overcome stationary inertia and plot a course to Agile practice adoption.

  • How can we critically assess the state of our practices today?
  • What goals should we target given constraints and organisational realities?
  • How do we prioritise what we should do first?

I hope you can attend on the 19th.


Registration details:
Friday, 19 September 2008
Time: 1:00pm Eastern Daylight Time (US-New York, GMT-4:00)
Registration URL: https://thoughtworks.webex.com/thoughtworks/j.php?ED=108081447&RG=1&UID=0

Saturday, August 30, 2008

IT's Identity Crisis

IT lacks a consistent definition of exactly what it does vis-à-vis its organisational peers.

  • Accounting is the language of business.
  • Finance is how business gets capital.
  • Marketing creates customers.
  • Sales brings them in.
  • Operations are how a business creates value.

IT does … what, exactly? Creates new business offerings? Retains customers? Is how business gets done? What do these really mean?

All too often they end up meaning, “would you like fries with that?” When that happens, IT devolves into a cost to be contained, a nuisance to be tolerated.

This ambiguity of purpose is made worse by the fact that IT brings both a language and set of priorities that are of no real interest to the business (e.g., technical "issues"). It’s no wonder IT struggles to justify its annual spend: it has a fundamental identity crisis.

This creates some rather bizarre side effects.

One is that a lot of businesses put IT in a narrow box, giving it a rudimentary but clear definition as a utility that operates at a predictable, consistent cost. The price of this simplification is that IT cannot perform as a competitive capability, but so it goes. Hard costs ("annual IT spend was reduced through strategic sourcing contracts") are easier to present to shareholders than soft language ("IT gives us a transformational capability.") Having clear line of sight as to where IT spend is going trumps vague promises of competitive advantage.

Another is the proliferation of IT firms on the sell side with very odd-sounding offerings: "We radically transform businesses to invent and reinvent them." Yes, of course you do. Good luck with that.

But the real tragedy of IT’s identity crisis is that it is significantly responsible for two of its core problems.

One is that IT serves the wrong master (technology) at the cost of the right one (the business). Debates over platforms and tools often overshadow discussions of business need. This is particularly disastrous when the business is drawn in to mediate a resolution. This is why IT typically lacks a seat at the top table of the business.

Another is that IT lacks both quantity of management practitioners and maturity of management practices. Despite involvement in every part of the business, generally speaking IT is not a destination employer for management talent, certainly not to the extent that other business disciplines are. To wit: finance typically attracts top business talent, while more often than not IT promotes engineers or mathematicians with little business education or acumen into positions of management.

Application development exemplifies this. Purchased solutions such as accounting systems or office tools have become business utilities. However, custom applications that support custom operations from which businesses derive competitive advantage defy a utility model. There’s been effort made to bring commodity thinking into appdev, to a point where we’ve created buying patterns that commoditise people. But skills – especially at the high-end of IT – are not ubiquitous and portable: one firm’s technical architect is another’s junior developer. We’ve abstracted what we’re buying into roles and definitions, and in so doing we've made it cheaper to get what we’re buying, but we’re not buying what we really need.

What we’re buying in appdev, of course, is capability. That’s hard to define. But then again, so is IT in the first place.

So having asked the question, what’s the answer? What is IT relative to its peer group of business disciplines?

IT maximises return on invested capital.

IT investments are made for one reason only: efficiency. We can execute operations (e.g., define and trade exotic financial instruments or run a manufacturing plant), comply with regulation, win and retain customers, and keep track of revenue and cash flows all by hand if we must. IT investments may make many opportunities possible that would otherwise not be economically viable, and may make the burden of regulation less costly to bear, but it's an efficiency game. This means IT maximises returns by quickly delivering solutions that create business efficiency.

By extension, it also means that IT should be a destination discipline for business talent. Capital that needs to sweat the assets will do so through efficiency of operations. In most businesses, efficiency will be realised substantially through IT, because IT has a hand in every aspect of a business. Any representative of that capital (e.g., the board, the CEO, the CFO) looking for ways to maximise returns will start by leveraging IT. This requires not technology leadership from IT, but business leadership.

So when capital comes calling for that leadership, IT needs to be prepared with an answer. That answer isn't that IT solves "business technology" problems: arguably, they're all contrived anyway. It isn't that IT achieves the minimum cost-per-role-staffed relative to its industry peers: that's abdication of leadership masquerading as fiduciary responsibility. Nor is it to reinvent business: there are still far more low tech than high tech solutions to business problems. IT must answer in terms specific to what it can deliver that creates business efficiency and therefore returns. This is how it fulfills its organisational role to maximise return on invested capital.

Any other answer misses the point.

Tuesday, July 22, 2008

Introducing alphaITjournal.com



I'm pleased to announce the launch of alphaITjournal.com, an online magazine focused on the execution, management and governance of IT investments that can produce outsized (or "alpha") returns. The mission and purpose are summarised in the welcome message on the site and in the press release that was issued in early July.

There are a few things that I hope stand out about the site.

The first is the site layout. It's designed to give attention to writers and their articles, and to make the content easy for readers to navigate without overwhelming them with a jumble of messages.

Another is the absence of advertising. Aside from the "Presented by ThoughtWorks" message on the left navigation and bottom menu, there is no advertising on alphaITjournal.com. Because we are practitioners first, being free of advertiser commitments gives us flexibility in dealing with the changing project demands and work priorities that will affect content production and editing.

Still another is the continuous release of content. Rather than having monthly editions, there will be one or two articles released each week. This will make it easier for the reader to stay current, and it will also make it easier to sustain fresh content on the site.

Last but certainly not least is the diverse community of writers. While ThoughtWorks is sponsoring the site, the writers come from all corners of the IT universe. They, in turn, are producing content on a diverse collection of topics, all with a common theme: how to maximise returns on IT investments.

I hope that alphaITjournal.com consistently provides compelling content so that you'll be a regular reader, even a promoter: add it to your RSS news reader, share it with your peers and customers, add a link to it from your blog. I also hope that you'll consider being a contributor. We have a number of writers, but we are always looking for more. If you have ideas for individual articles, a series or a column, drop me an email.

We've just gone live, so there are a few additions we'll make once we establish our rhythm (such as reader comments). Meanwhile, if you haven't done so already please give alphaITjournal.com a visit.

Sunday, June 29, 2008

Agile Made Us Better, but We Signed Up for Great

This content is derived from material presented in a ThoughtWorks-sponsored webcast in June 2008. A two minute video presentation of this material is available. A complete webinar re-broadcast, including audience Q&A, will be available soon.



The popular press makes Agile sound like nirvana. Practitioners speak of it in nearly religious terms. Yet we often find that IT teams are underwhelmed after going “Agile,” even after having expended considerable effort on making the change.

Why is this? Is there too much hype around Agile? Could it be that it doesn’t work? No, it’s because they’ve fought only half the battle: they got some of the practices, but not the behaviours.

When teams or departments decide to “go Agile” they’re typically moving away from what they’re doing now, as much as if not more than they’re moving toward what it is they want to be doing. That is, they’re trying to get away from regressive behaviours where the way work is done impedes responsiveness, or they’re trying to get away from chaotic behaviours, where people are pursuing responsiveness at the cost of consistency and quality.

Changing the way work is performed is no simple task. Making investments in how work is done is extra effort above and beyond what has to be done just to keep up with the day-to-day. And there’s stationary inertia in IT: a lot of practice and theory have roots dating to the Eisenhower and Churchill eras. Getting away from regressive or chaotic states takes a lot of effort, and that effort isn’t necessarily sustainable.

No surprise, then, that many IT teams lose their appetite for change once they’ve shed their bad practices in favour of minimally good ones. But good practices are not the same as good behaviours. And that’s what separates the “functionally Agile” teams from the truly responsive ones. Do developers have a Pavlovian reaction when the alert goes out that the build is broken or are they content to leave it to somebody else? Are people co-located and directly engaged with each other in the execution of team responsibilities, or do they simply sit near each other still working in silos and swim-lanes?

Agile is not a Boolean question. There is no single thing you can do, or tool you can adopt, that will make your team “Agile.” It is a collection of practices. The extent to which these practices are mature in a team determines how responsive the team can be. The more mature states of practice require aligned behaviours.

This isn’t academic. Working with several colleagues, we’ve constructed an assessment tool, called the Agile Maturity Model. We’ve looked at 10 dimensions including project management, requirements, development and configuration management, and identified consistent patterns of progression – or maturity – in the way people and teams move toward more agile practices. For example, a team that infrequently performs its build manually today will not be able to cope with a build that automatically fires with each code commit and fails if code quality levels are below a specified threshold. The same is true for collaboration: a team that communicates requirements or project status by presentation is not going to get much mileage from automated collaboration tools. Durable practice results from taking incremental steps. This is how we gain mastery.

A maturity model helps us understand what it is we are doing as well as what it is we are not doing. That it’s based in experience makes our path to responsiveness less a matter of opinion, and more a matter of fact. But the real value is that it gives us some insight into the cost and the returns of taking the next steps. For example, perhaps if our frequently executing build were a continuously executing gatekeeper of quality, we could eliminate hours of rework, lost productivity and late nights because of bad builds being released into an environment. Or perhaps we wouldn’t have missed a subtle shift in the business priority had we been working as a team to deliver small but complete business requirements instead of technical tasks. A maturity model helps us to clearly pinpoint our best opportunities for change.

Using the model, we can also index where we're at. There's merit in an index, in setting some quantitative value for our target, historic and current states. It helps us to be more communicative about our strengths as well as our deficiencies. But the point of having an index isn't to score or grade. The model isn't our team, and the model doesn't give results to our business partners. All it does is give us an indicator of the extent to which we're past the point of doing things that undermine responsiveness, and at a point where we're behaviourally aligned for it. Or, that we're not yet past that point. An index is an indicator that helps us frame our situation; it is not our sole purpose. Process is important, but we're on the payroll to deliver solutions; we're not on the payroll just to have really great processes.
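To make the mechanics of an index concrete: the model's actual scoring scheme is beyond the scope of this post, so assume – purely for illustration – that each dimension is graded from 0 (regressive) to 5 (behaviourally aligned). A minimal sketch in Python might look like this:

    # Toy illustration only: the Agile Maturity Model's actual scoring
    # scheme is not reproduced here. Assume each dimension is graded
    # 0 (regressive) to 5 (behaviourally aligned).
    assessment = {
        "project management": 3,
        "requirements": 2,
        "development": 4,
        "configuration management": 3,
        # ...remaining dimensions elided
    }

    def maturity_index(scores):
        """Average maturity across the assessed dimensions."""
        return sum(scores.values()) / len(scores)

    def next_best_steps(scores, n=2):
        """The weakest dimensions: the best candidates for incremental change."""
        return sorted(scores, key=scores.get)[:n]

    print(maturity_index(assessment))   # e.g. 3.0
    print(next_best_steps(assessment))  # e.g. ['requirements', ...]

The point isn't the arithmetic; it's that an index frames where we stand, while the weakest dimensions point to the next incremental step.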

There’s nothing wrong with being “functionally Agile.” Breaking free of the restrictive practices or simply getting some control over chaos is a better situation for an IT team, and usually is the result of significant effort. But don’t mistake it for being organizationally responsive. Recognize there are degrees of practice and find the optimal combination for your team or department. Above all, hold your teams to the expectation that they will not just perform to a set of practices, but behave in such a way that they maintain the highest state of readiness for whatever comes. Achieve that, and your IT organization will be more resilient to threat and better able to capitalize on opportunity.

Tuesday, May 27, 2008

The Moral Hazard of IT Projects

The longer an IT project is expected to take, the greater the risk of moral hazard: i.e., that IT will provide poor information to its business partners or have incentive to take unusual risks to complete delivery.

This is not borne of maliciousness. People on an IT project are not necessarily out to defraud anybody. It may simply be that people incompletely scope the work, make assumptions about skills and capabilities, or are overly optimistic in estimates. This creates misleading project forecasts, which, in turn, lead to a disappointing asset yield.

This is the raison d'être for the rules-based approach to IT: improve rigor in scoping, estimating and role definition, it is argued, and projects will be on-time and on budget. Unfortunately, this won't accomplish very much: the moral hazard that plagues any IT project is not a product of poor practice, but of behaviours.

Rules-based IT planning assumes that each person in a team has an identical understanding of project and task, and is also similarly invested in success. It ignores that any given person may misunderstand or outright disagree with a task, a technology choice or a work estimate. These differences amplify as people exit and join a project team: those who are present when specific decisions are taken – technical or business – have a context for those decisions that new people will not. The bottom line is, there is a pretty good chance that any action by any person will not contribute to the success of the project.

Complicating matters is the ambiguous relationship of the employee to the project. The longer a project, and the larger a team, the more anonymous each individual’s contribution. This gives rise to IT's version of the tragedy of the commons: because everybody is responsible for the success of a project, nobody takes responsibility for its success. The notion that “everybody is responsible” is tenuous: success or failure of the project may have no perceived bearing on any individual's status as an employee. And, of course, people advance their careers in IT by changing companies more often than they do through promotion.

But by far, the biggest single contributing factor to moral hazard is the corporate put option. There’s a long history of companies stepping in to rescue troubled IT projects. This means people will expect that some projects are too big or too important to fail, and that the business will bail out a project to get the asset.

All told, this means that the people working in a traditionally managed IT project may not understand their tasks, may perceive no relationship between project success and job or career, and may believe that the company will bail out the project no matter what happens. There might be a lot of oars in the water, but they may not be rowing in the same direction, if at all.

Especially for high-end IT solutions, the rules-based approach to IT is clearly a fallacy: any “precise” model will fail to identify every task (we cannot task out solutions to problems not yet discovered) and every risk (project plans fail to consider external forces, such as dynamics in the labour market). Rules feign control and create a false confidence because they assume task execution is uniform. They deny the existence of behavioural factors which make-or-break a project.

A rules-based approach actually contributes to moral hazard, because the tasks people perform become ends in and of themselves. To wit: writing requirements to get past the next “phase gate” in the project lifecycle is not the same as writing actionable statements of business need that developers can code into functionality.

Work done in IT projects can end up being no different from the bad loans originated to feed the demand for securitised debt. At the time development starts in a traditionally managed project, all we know is that there are requirements to code (e.g., mortgage paper to securitise.) Further downstream, all we know is there are components to assemble from foundation classes (e.g., derivatives to create). Nobody touching the details of the project has responsibility for its end-to-end lifecycle; once a detailed artifact clears the phase gate, that person is done with it. This is supplemented with misguided governance: quality and completeness of intermediate deliverables aren't reconciled to a working asset but to an abstraction of that asset, the project plan.

Just as we don’t discover defaults until long after the bad paper has entered the securitisation process, we similarly don’t discover problems with specifications or foundation code until late in the delivery cycle. There's typically only a minor provision (in IT terms, a “contingency”), meaning we can absorb only a small amount of “bad paper” in the project. And because it comes so late in the cycle, the unwind is devastating.

This does not mean that IT professionals are untrustworthy. What it does mean is that there must be a short impact horizon for every decision and every action. Our top priority in managing IT projects must be to minimise the time between the moment a requirement is articulated and the moment it is in production. That means the cycle time of execution – detailing requirements, coding, testing and releasing to production – should be measured in days, not months and years. This way, the results of each decision are quickly visible in the asset to everybody on the project.

Short impact horizons align behaviour with project success. Each person sees evidence of their contribution to the project; they do not simply pass the work downstream. A project may still go off course, but it won't do so for very long; a small correction is far less costly than a major unwind. And, of course, we can extract better governance data from an asset than we can from a plan.

Best of all, we’re not backstopping the project with the unwritten expectation that the business may need to exercise its put option.

Monday, April 28, 2008

Rules Versus Principles

In the wake of a credit market seizure, illiquid investments, $245 billion of write-downs and losses1, collapsing funds and financial institutions, and no indication as to where it’s going to end, US capital markets are facing significant changes in how they're regulated. Hedge funds are a flashpoint. There are about 8,000 funds managing some $2 trillion of assets,2 and there is no way of knowing whether or not there’s a large write-down looming somewhere among them. Indeterminate counterparty risk in a highly interconnected financial system means there’s a chance capital markets could get blindsided yet again, so hedge funds are front and centre of the regulatory debate.

There are two schools of thought over how hedge funds should be regulated.

Members of Congress are calling for strict, rule-based regulation. Very few industries have a track record of successful self-regulation, and capital markets firms have incurred more than a few self-inflicted wounds of late. Rule-based regulation calls for tight controls on activity. Transparency is an assumed byproduct: if actions are pre-defined, everybody will know exactly what everybody else is up to. There is also an “I pay, I say” dimension: if the US taxpayer could end up footing the bill, the taxpayer must have the opportunity to set the rules. The champions of rule-based regulation believe this is accomplished through control and regulation, imposed through legislation and agency.

The US Treasury Department is agitating for principles to play a greater role in regulation. Because capital is globally mobile, markets must innovate to remain competitive. Financial markets are innovating at a fast clip. Rules can't be written as quickly as markets evolve. Principle-based regulation posits that compliance with best practices is the best way to facilitate innovation while retaining transparency. Advocates of principle-based regulation argue that it is in everybody’s best interests to voluntarily comply, as compliance guarantees consistency – and with it transparency, liquidity and confidence – in capital markets.

This debate mirrors a similar phenomenon in IT.

The traditional approach to IT project management is consistent with “regulation by rule.” This camp values practices such as deterministic project plans, highly detailed task orders, explicit role definitions, and timesheet-based project tracking. The theory is that consistency is achieved through meticulous control; any deviation from plan is visible and immediately correctable. At the other extreme are the Agilists who champion regulation through principle. This camp values practices such as test-driven design, continuous integration, co-located and cross-functional teams, short development iterations, and frequent releases of software. They argue that innovation, transparency, consistency and ultimately project success result from compliance with best practices more than from adherence to a collection of rules.

Not surprisingly, the ideological arguments in IT are similar to their capital markets counterparts. Those who advocate the traditional approach argue that top-down control is essential, and that best practices are ignored by teams when things are going well. How can there be self-regulation in an industry notorious for significant overruns and spectacular project failures? Why would a business abdicate responsibility for oversight if there's a risk it will have to bail out a project? The Agilists argue that top-down control is a myth, and that everybody has a vested interest in adhering to best practices. How can anybody expect that deterministic project planning will keep pace with changes and discoveries made during development? And how can we expect innovation in an environment stifled by bureaucratic control systems that are not aligned with day-to-day execution?

“Control” is elusive in IT, particularly at the high end. Applications with the potential to yield significant business impact typically involve new processes or technologies. In these cases, development is an exercise of continuous problem solving, not rote execution. It isn’t practical to create deterministic project plans for the delivery of solutions not yet formed to problems not yet discovered. Additionally, history has shown that regulation and control do not offer deliverance from failure, let alone disaster. As US Treasury Secretary Henry Paulson commented in the aftermath of the Bear Stearns intervention, “I think it was surprising … that where we had some of the biggest issues in capital markets were with the regulated financial institutions.”3 The same can be said about IT. Rules offer no guarantee of effective risk management, as time and again, we have seen delays or functional mis-fits announced late in the lifecycle of even the most tightly “controlled” IT project.

If IT is to be a source of innovation and business responsiveness, it needs disciplined execution more than it needs imposed rules. Unfortunately, “disciplined execution” doesn’t describe how the vast majority of IT is practiced today. IT has launched its share of self-targeted missiles over the years, and its track record remains poor. On top of it, buying patterns increasingly relegate IT to utility status; they don't elevate it to strategic partnership. Principle-based regulation may be appropriate for IT, but it faces significant headwinds.

This debate will affect the role and relevancy of IT in the coming years. There is an opportunity for IT to take leadership in this debate, but it can do so only if it has its house in order. Without principled execution, IT will increasingly be treated as a utility, regulated by rule. But by adhering to best practices, IT can demonstrate an ability to self-regulate. This will allow IT to strike a balance between effective practices and the rules with which it must comply, and position itself to be a driver of alpha returns.


1 Brinsley, John. "Treasury Panels Lay Out Hedge Fund 'Best Practices'," Bloomberg.com, 15 April 2008.
2 Ibid.
3 Secretary Paulson as quoted in Paletta, Damian and MacDonald, Alistair. "Mortgage Fallout Exposes Holes in New Bank-Risk Rules," The Wall Street Journal, 4 March 2008.

Thursday, March 27, 2008

A Margin Call on Leveraged Time

IT is primarily a business of people solving problems during the creation of assets that increase EBITDA. Problem solving requires talent, and most IT organisations have to contend with a shortage of talented people. To some extent this reflects limitations of the labour market. It’s also economic: highly capable IT professionals aren’t inexpensive, and most firms struggle with budgets and costs. To get by, the experience and capability of a core few is expected to support a very large number of staff. Because IT projects are work effort delivered in time, this is, in effect, a leverage of people’s time.

Consider how leverage works. If we invest $4 of our own capital and $6 of borrowed capital into a $10 asset, and that asset increases in value by 20% in one year, we’ll yield $2 of profitability. That considerably eclipses the $0.80 our own $4 would have earned on its own, provided the interest rate on the debt doesn’t exceed 20% annually. The same thinking applies to how we invest the time of our highest-capable IT professionals: if we create teams to take the burden of rote coding off the shoulders of the most capable people, we should be able to produce more IT assets and thus drive greater returns from IT. This can be very attractive, especially if IT is engaging in labour arbitrage, sourcing staff globally at lower costs. Indeed, the cost per hour may permit contracting staff in multiples of 2x, 3x or even 4x. There is also quick impact: the income statement improves as costs drop dramatically, and the notional value of the IT assets that our core (and expensive) capability is producing is quite high relative to their total numbers. There is a powerful temptation to overload on leverage: the higher the leverage, the bigger the payday if our bets pay off.

But our bets don’t always pay off. Suppose that $10 asset drops in value by 20%, to $8. We’re still on the hook for the $6 we borrowed. When assets erode, debt holders will require additional capital as a sign that we’re good for the loan. This is what is known as a margin call. Suppose the margin requirement is 30% – that is, suppose that our broker requires that we cover no less than 30% of a position with our own capital. The erosion of the asset to $8 means our $4 original investment is now worth $2, or 25% of the total value of the investment. We need to put up another $0.40 to restore our equity to the margin minimum of 30% of the now-$8 asset ($2.40). We may have to liquidate positions in other investments to come up with that 40 cents. The higher the leverage, the greater the pain: our own capital in the asset has eroded, and we’re still on the hook for the loan at whatever interest rate we’re paying. We now face a difficult decision: cashing out this investment now will post a loss, while re-investing to maintain our position in the asset might be throwing good money after bad.
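For the arithmetic-minded, here is a minimal sketch of the example in Python; the figures and the 30% margin requirement are those used above:

    # The figures from the example: $4 of equity plus $6 of debt buys a
    # $10 asset; the broker's margin requirement is assumed to be 30%.
    equity, debt = 4.0, 6.0
    asset = equity + debt

    def margin_call(asset_value, debt, margin_req=0.30):
        """Additional capital needed to restore equity to the margin minimum."""
        current_equity = asset_value - debt          # $8 - $6 = $2
        required_equity = margin_req * asset_value   # 30% of $8 = $2.40
        return max(0.0, required_equity - current_equity)

    # Upside: a 20% gain yields $2 on $4 of our own capital, versus the
    # $0.80 the same $4 would have earned unleveraged.
    print(asset * 0.20)                      # 2.0
    # Downside: a 20% drop triggers a $0.40 margin call.
    print(margin_call(asset * 0.80, debt))   # 0.40 (approximately)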

Consider this again in operational terms. Suppose a “leveraged team” fails to meet expectation, either because functionality delivered is wide of the mark, or technical quality is sub-optimal, or both. Time has been invested with the expectation that this team would succeed, and that time has been lost. We now need to invest additional time to bring that particular asset into an acceptable state. Most likely, we're going to call on more talented people to do so. Since they are few in number, we're going to have to liquidate a position in another investment, directing those people’s time to shore up this investment. The operational decision is just the same - and every bit as painful - as the financial one: walk away or reinvest.

A leveraged IT project that fails can trigger a capability liquidity crisis. The more we need to invest to rescue this project, the more capability we'll need to draw down from across the portfolio. When this happens, the IT income statement very rapidly sours and the high notional value in the IT portfolio is obliterated.1

To prevent a rapid de-leveraging, we may need to make a capability "injection.” Ideally, this is an exercise in sourcing top IT talent in a project rescue mission. In addition, the rescue team will very often get that which it needs to succeed: co-located facilities, access to business partners, hardware and tools, etc. Capability injections can be costly, but they prevent a greater disaster across the portfolio.

This assumes, of course, that a project can make effective use of capability. Even when the business domain and underlying technologies are relatively simple, IT projects can become situationally complex if there’s been a team in over its head for a long period of time. Decisions made inexpertly compound over the life of a project. Very often, this means a lot of esoteric knowledge must be mastered before a person can contribute to the project. The more esoteric, the more time it takes for people to become fluent in how to get things done, the less penetrable the project. Top talent will be frustrated in attempts to get work done in an (unnecessarily) complex environment. Meanwhile, those who “get things done” do so through mastery of a set of circumstances (that is, abundant esoteric characteristics) that cause more harm than good for the business. Such a project is capability illiquid and is resistant to rescue efforts. This creates a worst-case scenario in the IT portfolio: maintaining the status quo is perpetually expensive, while the price of rectifying the situation may be staggering. Either way, yield on the asset this team produces will fall far short of expectations.

There will always be some degree of capability leverage in IT projects, if for no other reason than there will always be incongruities in talent, skill and experience among members of a project team. Leverage is most effective when it is used to develop the capability of the entire team through transfer of knowledge and structured skill acquisition, so that individual team members are capable of independently taking competent decisions that are aligned with governance expectations. An investment in people's capability reduces the risk and impact of a margin call. Of course, this doesn’t just happen by itself: skill transfer is a mission objective, and teams don’t necessarily engage in this type of behaviour naturally unless that expectation is clearly set. Simultaneously, capability development isn’t something that can be taken for granted. There is no “capability index” in IT, so it is essential to have a sense of what the desired future state of a leveraged IT team should be once it unwinds – and to have objective criteria that define that state. Otherwise, there is little assurance that any given delivery team is not a margin call waiting to happen.

There is no shortage of opportunities to leverage IT capability, but there are few opportunities to wield it in a risk-responsible manner. Prudent governance requires that IT manage itself and its suppliers to mitigate capability risk: ensuring that no project is over-leveraged, standing ready to source capability and bring it to bear on a situation, and keeping projects in a condition to absorb that capability. Failing to do so is a lapse in governance. Doing so successfully balances risk and reward.

1 The great de-leveraging we're seeing in the financial world is both rapid and devastating. By way of example is Carlyle Capital: leveraged to 32 times equity, they couldn't meet margin calls as asset values cratered. breakingviews.com produced a splendid bit of analysis titled Carlyle's Comeuppance.

Friday, February 29, 2008

Minimising the Speculative Risk of IT Investments

The cost of IT is often confused with its value. Consider earned value management: delivery, time and cost are combined in an attempt to better represent project performance. This might show the rate of cash burn relative to total expected effort by a development team, but it isn’t an indicator of value as the name might imply. It is simply another way to present cost. Cost is a measure of money out of pocket, whereas value is a measure of returns. The cost of an IT project is at best its liquidation value – the capital that could be raised by selling the intellectual property produced. But it is not value. Value is return, and like any use of capital, an IT investment has to provide a return that exceeds the firm's cost of capital.

So what is the value of an IT project? Equities are valued by asking, “what is the market willing to pay for a dollar of profitability?” Equity is far more liquid, and more sophisticated in its measures: for example, we have P/E ratios that help us to gauge whether a firm’s valuation is overweight or underweight relative to forward expectations of returns (specifically through the increase of market capitalisation and dividends). IT projects don’t offer this much technical analysis, but we can borrow some of the concepts.

The intrinsic value of an IT asset under development is the net present value of future profitability that is expected to be derived from putting the asset to work. From an IT perspective, intrinsic value has both tangible and intangible components to it.

  • The tangible value is the return realised from that portion of the solution that is in production and contributing to EBITDA. Something in production is complete and increasing bottom line results, so the benefit of this asset isn’t ambiguous.
  • The intangible value is entirely speculative: how much additional profitability does the business expect to derive from what remains to be delivered?

All IT solutions are of speculative value until they are delivered and expectations of returns are shown to be viable. Tens of thousands of person hours and millions of dollars may be expended in development of millions of lines of code, but unless that code is in production, the firm derives no value from the investment.

Like all speculative investments, returns are at risk. The risk with which IT must be most concerned is delivery risk. Until an asset is released to production, there is a probability that the asset will fail to be developed correctly. The possibility of failure in delivery creates the threat of reduced returns. Delivery risk is eliminated once software is in production.1

The speculative component of an IT asset is at greater risk the further into the future it is expected to be completed. The probability that business, people or technology will change increases with each additional day that an IT asset is being developed. The probability of slippage in functionality, time and investment introduces volatility to the IT portfolio.

Volatility can generate windfall returns in finance. Market speculation can wildly change the value of a stock or bond relative to its purchase price. The holder can exploit this delta (up or down) to book a profit. By comparison, returns driven by operations are rarely so flexible. Software delivery is work performed over time, and time cannot be recovered. Experience has shown that an IT project is more likely to suffer delays in delivery and depress returns, than it is to accelerate delivery and increase returns. Volatility in delivery is thus a downside force, and needs to be minimised.

Traditionally, IT has attempted to apply deterministic management as a means of reducing volatility. We build elaborate project plans, we map out a predefined collection of tasks, we plot a task order down to the hour, and we track what people do day by day as our barometer of progress. This top-down approach has low tolerance for anything that happens in the “left-to-right,” or over the course of time, and it offers little more than wishful thinking. “Plans are useless,” said Dwight D. Eisenhower, “but planning is indispensable.” Intricate plans that forecast future activity in detail have little tolerance, if not complete disregard, for the impact of changes that occur over time in staff, capability, business or technology. Indeed, deterministic project planning holds the business solution static, any staff interchangeable, and any technology change turnkey. Experience has shown overwhelmingly that this is not the case. Projects with intricate plans tend to have continuous cycles of replanning as things change. Deterministic management doesn’t decrease volatility; it simply adds overhead to the IT project, and drives down returns.

A plan cannot increase the tangible value of an IT asset. Only the asset can do that. We should therefore focus energies on rapid and incremental delivery. Tangible value is realised when some functionality is delivered that produces value. With each incremental delivery, and every increase in tangible value, the intangible or speculative value decreases.2 The reduction in speculative value at risk represents a reduction in the total value that can be depressed through delays in delivery. Thus, early delivery reduces the risk of speculative value not being realised. Simultaneously, it reduces the volatility of returns.
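A small sketch in Python illustrates the decomposition; the $10mm intrinsic value is hypothetical, and (per footnote 2) intrinsic value is held constant:

    # A minimal sketch of the decomposition. The $10mm intrinsic value is
    # hypothetical, and (per footnote 2) intrinsic value is held constant.
    def value_split(intrinsic_value, fraction_in_production):
        """Split intrinsic value into its tangible (delivered) and
        speculative (still at delivery risk) components."""
        tangible = intrinsic_value * fraction_in_production
        return tangible, intrinsic_value - tangible

    # Each incremental release converts speculative value into tangible
    # value, shrinking what a slip in delivery can depress.
    for delivered in (0.0, 0.25, 0.50, 1.0):
        tangible, speculative = value_split(10_000_000, delivered)
        print(f"{delivered:.0%} delivered: ${tangible:,.0f} tangible, "
              f"${speculative:,.0f} speculative")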

By itself, IT doesn’t generate business value. The business must consume the assets that IT produces in such a manner that it can put them to work efficiently and profitably. But that doesn’t mean IT is just a cost center. It can, in fact, drive alpha returns for a business. Corporate capability is largely driven by technology. IT is often the plurality, if not the majority, of spend on business initiatives. Incremental delivery of system components can increase returns on corporate investments where time is more important than cost. With capital under management that is expected to deliver returns, IT governance has a portfolio management obligation. As portfolio managers, IT must do things that maximise yield of invested capital. Concomitant with maximising yield is minimising risk. Risk is minimised through asset realisation.


1 Provided, of course, that what is delivered is functionally and non-functionally fit, and of sufficient quality. These should never be assumed outcomes.
2 New information may lead us to conclude that the impact of the asset will be different than originally forecast. For example, an asset under development might suddenly provide more impact to a firm because of changing market dynamics, making some portions of the application of greater value than others. For sake of simplicity, intrinsic value is assumed constant in this example.

Monday, January 28, 2008

IT Effectiveness is Measured by Asset Yield


We tend to consider an IT project successful if it is delivered “on time and on budget.” From an IT governance perspective, however, this doesn’t tell us all that much. At best it is an indicator of basic operational competence, that fundamental project controls are working. At worst it’s a false positive, indicating nothing more than the team was particularly lucky that all assumptions held true, or that their contingency was sufficiently large to absorb the impact of those assumptions that didn’t.

As a measure of IT effectiveness, it is incomplete. The key element missing is whether or not the project met its business objectives. Indeed, measuring systems by their compliance to plan ignores the mission of the project: it focuses on execution, at the exclusion of results. That is, “on time on budget” at best assumes that the business goal was met, at worst abdicates responsibility for it. The objective is to create a business solution, not to simply perform tasks to a forecast.

Business solutions are business investments. These investments are no different from any other use of the firm’s capital. They are made for one reason, and only one reason – to maximise profitability. Sometimes they are initiatives, for example when new systems are developed to support new trading products. Sometimes they are reactive, driven by the need to comply with a new regulation or respond to competitive market offerings. A firm makes an investment in an IT solution as a way to maximise operational efficiency, and thus EBITDA. If IT application development produces assets which drive EBITDA, we should manage IT projects to maximise asset yield.

Asset yield tells us how effective IT is in its stewardship of the money with which it is entrusted by the firm. With this measure we have a business-oriented way to answer the first governance question: are we getting value for money? This is very powerful. It allows us to take better oversight decisions: we quickly identify where IT is contributing to breakaway results, and where it would be better off putting capital into Treasuries instead of investing in operations because IT is letting the side down. It also improves the day-to-day execution of our different projects: behaviours1 should align with the business goals (the business solution), not an abstraction of the goals (the project plan.) We thus get a simple litmus test to evaluate day to day decisions we take relative to the first governance question: does it improve asset yield?

By focusing on asset yield, we become aware of something else: time-to-market has a greater impact on yield than cost. In the on time on budget world, it’s usually tolerable for a project to be late in delivery if the budget implications are minimal. This is because in most corporations it’s far easier for a manager to be granted additional time (people are on the payroll anyway, so it’s a committed cost), while securing additional budget is nearly impossible (annual expense controls make it difficult to change allocations). The time value of capital is invisible to most managers, and its impact is noticeably absent from project decision making. To wit, rarely do project managers request additional budget to deliver a project ahead of plan in order to maximise business returns.

The fact that the time value of money is invisible to most middle management has disastrous consequences for a firm. An IT asset that is not in production yields no returns. An IT asset will yield more business benefit than it costs to develop; otherwise, the firm wouldn’t invest in it. That means each month of delay depresses yield, while the incremental cost of accelerating delivery can increase yield. Even further, lethargic delivery within budget will yield far less than aggressive delivery in excess of budget.

The converse is also true: the sooner the asset is in production, the greater the yield. Consider a project with an estimated 12 month / $6mm development cost and 17% annual maintenance that will contribute an annualised $30mm in profitability for a firm with an 8% cost of capital. A “big bang” deployment after 12 months yields a return above the firm’s cost of capital, but it is both lower and realised later than if those returns can be partially realised with incremental releases (e.g., at 3 months and 9 months) that provide modest contributions to EBITDA (say, 10% and 30% of the projected impact, respectively). It is also obvious that the disparity between incremental and single-event delivery is amplified in the event of delay.
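A simplified monthly cash-flow model in Python makes the comparison concrete. The figures are from the example above; the 24-month horizon, the even spread of development cost, and maintenance scaled to the delivered fraction are simplifying assumptions:

    # Figures from the example: $6mm development cost spread evenly over
    # 12 months, $30mm annualised contribution once fully live, 17% annual
    # maintenance on the build cost, 8% cost of capital. The 24-month
    # horizon and the maintenance scaling are simplifying assumptions.
    COST, PROFIT, MAINT, WACC = 6e6, 30e6, 0.17, 0.08
    monthly_rate = (1 + WACC) ** (1 / 12) - 1

    def npv(live_fraction):
        """NPV over 24 months; live_fraction(month) is the share of the
        full benefit in production during that month."""
        total = 0.0
        for month in range(1, 25):
            cash = -COST / 12 if month <= 12 else 0.0
            cash += live_fraction(month) * (PROFIT - MAINT * COST) / 12
            total += cash / (1 + monthly_rate) ** month
        return total

    big_bang = npv(lambda m: 1.0 if m > 12 else 0.0)
    incremental = npv(lambda m: 1.0 if m > 12 else
                      0.30 if m > 9 else 0.10 if m > 3 else 0.0)
    print(f"big bang:    ${big_bang:,.0f}")
    print(f"incremental: ${incremental:,.0f}")  # higher, realised earlier

Run it, and the incremental schedule produces the higher NPV, with returns arriving sooner; delay the big-bang release and the gap widens further.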

To be a strategic capability, IT leadership must shift focus away from cost minimisation in favour of time to market. The effort spent in recent years by IT departments to reduce spend is effort misplaced for strategic IT. This is not only because volatile currency markets have made labour arbitrage tactics less effective, but because we’re focused on the wrong end of the equation: whether we’re spending $200 / hour or $20 / hour for a developer, an asset that the business cannot use is of no value. Time, not cost, is the lever IT should be looking to throw. This means IT must be capable of delivering in short timeframes and working in greater collaboration with business partners to produce assets with a high degree of solution fitness. To maximise yield, it is more important to build this capability than it is to source low-cost capacity.

Making this the business reality isn’t that easy. IT doesn't typically make incremental deliveries; it makes single deliveries following long development lifecycles. Similarly, most business operations are not prepared to deal with the training and workflow changes necessary to consume frequent solution deliveries. But do these things, it must. With a rising cost of capital, M&A, stock buyback and startup investments are out of reach. Compounding this, large debt loads coupled with a soft economy will put even more pressure on achieving bottom line results. Investments in operations are now that much more critical to the success – if not the sustainability – of a business. Hustle will be the order of the day, urgency the imperative. Well governed IT is the centerpiece of executing this strategy.


1 There is an important distinction to make here. IT is not a business of assets. It’s a business of people creating assets. We can measure results by focusing on asset yield, but those yields are only achieved by the capability and successful execution of the people we have to achieve them.

Friday, December 28, 2007

Mitigating Capability Risk

With the cost of capital on the rise, the need to focus on returns is much more acute. Unfortunately, IT has not traditionally excelled at maximising returns. Industry surveys consistently show that a third to a half of all IT projects fail outright or significantly exceed their cost estimate.1 Delays are costly: IRR craters 25% if a $5mm / 12 month project with an estimated annual yield of $30mm is 4 months late. Monte Carlo simulation that factors the most common project risks, including schedule, turnover, and scope inflation, will consistently show that the probability of delivery being made 3 or more months late is greater than the probability that delivery will occur early, on time, or within one month of plan.2
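The Riskology spreadsheet does this rigorously; as a toy illustration of the technique, a Monte Carlo of schedule slip can be sketched in a few lines of Python (the distributions are illustrative assumptions, not calibrated estimates):

    # Toy Monte Carlo of schedule slip on a nominal 12-month plan. The
    # distributions below are illustrative assumptions, not calibrated
    # estimates; the point is the asymmetry of the outcomes.
    import random

    def simulate_slip_months():
        """One trial: months of slip relative to plan."""
        slip = random.gauss(2.0, 2.0)         # estimation and execution noise
        if random.random() < 0.30:            # scope inflation
            slip += random.uniform(1, 4)
        if random.random() < 0.20:            # key staff turnover
            slip += random.uniform(1, 3)
        return max(slip, -1.0)                # projects rarely finish early

    trials = [simulate_slip_months() for _ in range(100_000)]
    print("P(3+ months late):", sum(s >= 3 for s in trials) / len(trials))
    print("P(within 1 month): ", sum(s <= 1 for s in trials) / len(trials))

Even with generous assumptions, the distribution is skewed to the right: the downside tail of lateness is fatter than any upside of early delivery.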

Given the significant contribution of technology to just about every business solution, IT risk management is a critical practice. But IT risk management practices are not mature. Planning models tend to be static representations of a project universe, regardless of the time horizon. Risks are managed as exceptions. When things change, as they inevitably do, we try to force exceptions back into compliance with the plan. Given all the variables that can change – core technologies and compatibilities, emergent best practices, staff turnover, and a business environment that can best be described as “turbulent” – traditional approaches of “managing to plan” have a low risk tolerance.

To manage risk in our environment, we must first understand the nature of risk. Market risk offers the possibility of returns for invested capital. The yield depends on a lot of factors which an investor may influence, but over which the investor likely has little control: that a market materialises for the offering, that the company is not outmaneuvered by competitors, and so forth. Some market risks have potential to generate breakaway returns – yields well above a firm’s cost of capital. These opportunities represent the most strategic investments to a firm. IT doesn’t face market risk; it faces primarily execution risk: whether it can deliver solutions in accordance with feature, time, cost and quality expectations. Unlike market risk factors, execution risk factors are substantially within the control of the investing company.

Execution risk is the risk of committing an unforced error. Poor execution depresses returns (again, consider the impact to IRR for a late delivery), whereas competent execution does little more than maintain expected returns. Maximising execution can amplify yield. Using the example above, making incremental deliveries beginning at 3 months can increase project IRR between 5 and 10%. This is, obviously, a significant competitive weapon. But this capability can be monetised only if it can be exploited by the business itself. This, then, is the impact of IT on returns: highly capable execution can create extraordinary returns, but only if the business can put it to use, and the market opportunity exists in the first place. The yield ceiling is dictated by the potential in the business opportunity itself, not in how it is executed. Execution risk, then, is a threat to returns, not an enabler of them.

Execution risk is not simply the risk that things don't get done; e.g., that excessive days out of office prevent people from performing tasks by specific dates. It is the risk that the organisation lacks the fundamental capability to identify, understand and solve the problems and challenges it faces in realisation of a solution. This means that execution risk is substantially capability risk: whether IT brings the right level of capability to bear to minimise the risk of execution failure and thus maximise returns.

Breakaway market opportunities present the greatest challenges to fulfillment. They involve things that haven’t been done before: a product, service, or business competence that doesn’t currently exist within a firm or even an industry. The business processes that need to be defined, modeled and automated to fulfill that market opportunity will not be established at the front end of a project. They will change significantly over the course of fulfillment as they become better understood. Breakaway opportunities tend also to be highly sensitive to non-functional requirements, such as performance, scalability and security. It is subsequently highly likely that there will be new or emergent technologies applied, if not outright invented, over the course of delivery. All together, this means that the problem domain will be complex and dynamic. These are not problem domains that lend themselves to a divide-and-conquer approach, they are domains that require a discover-collaborate-innovate approach. This calls for people who are not only intelligent, but strong, open-minded problem-solvers with a predisposition to work collaboratively with others. It isn’t a question of engaging experienced practitioners; it is a question of engaging high-capability practitioners.

If we fail to understand the capability demands of breakaway opportunities, and similarly fail to recognise the capability of the people we bring to bear to fulfill them, we amplify capability risk. Consider what happens under the circumstances described above if we take a “mass production” approach to delivery. We define a static set of execution parameters for a largely undefined domain. We make a best effort decomposition of an emergent business problem into compartmentalised task inventories. We then look to fulfill these using the lowest cost IT capacity that can be sourced, grading it on a single dimension of capability – experience – which constitutes the extent of our assessment of team strength. Because the situation requires a high degree of problem solving skills and collaboration, this approach quickly over-leverages the highest capable people. This leaves the mass of executors wasting effort on misfit solutions, or it leaves them idle, waiting for orders. A recent quote from Shari Ballard, EVP of Retail Channel with Best Buy, highlights this:

  • 'Look at why big companies die. They implode on themselves. They create all these systems and processes – and then end up with a very small percentage of people who are supposed to solve complex problems, while the other 98% of people just execute. You can’t come up with enough good ideas that way to keep growing.'3

Because capability isn’t present in the decision frame, we run a significant probability of defaulting into a state of capability mismatch. This obliterates any possibility of cost minimisation (over-running the mass production model) and jeopardises the business returns.

IT is a people business, as opposed to an asset or technology business. The assets produced by IT – that is, the solutions bought by the business – are the measurable results produced by capability. Capability risk management is a byproduct of effective IT Governance. While it has a stewardship responsibility for the capital with which it is entrusted, IT Governance is primarily concerned with sourcing, deploying and maturing capability to maximise business returns. It looks to trailing indicators – which with Agile practices can be made “real time” indicators – that evaluate the quality of assets produced and the way in which those assets are produced. These allow it to determine whether current capability delivers value for money, and delivers solutions in accordance with expectations. It must also look to leading indicators that assess the skills, problem solving abilities and collaborative aptitude of its people, no matter how sourced: employee, contractor, consultant or outsourcer. By so doing, IT becomes a better business partner as it can unambiguously assess and improve its ability to maximise returns.


1There is the classic mid-90s Chaos Report by the Standish Group that posited that as many as 50% of all IT projects fail. See also "Reduce IT Risk for Business Results," Gartner Research, 14 October 2003.

2The seminal work in this area is Waltzing with Bears by Tom DeMarco and Tim Lister. They published a Monte Carlo method in a spreadsheet called Riskology that allows you to explore risk factors and tolerances and their impact on a project forecast.

3Ms. Ballard was quoted by Anders, George. “Management Leaders Turn Attention to Followers” The Wall Street Journal, 24 December 2007.

Sunday, November 25, 2007

Market Power Increases Exponentially with IT Velocity

Bernoulli’s Theorem holds that the power a turbine or rotor can extract from moving air is proportional to the cube of the air’s velocity, expressed simply as Power ∝ Velocity³. A basic concept of wind energy systems, it is increasingly relevant in commercial building architecture: specifically, if wind velocity can be increased through building design, the potential power that a building can derive from wind energy is considerably greater. This means that a building can be designed such that it generates a non-trivial portion of its electrical power from wind energy.1
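Made explicit, the cubic relationship comes from the standard wind-energy result (a textbook formula, independent of any particular building design):

```latex
% Power available from air of density \rho moving at velocity v
% through a rotor of swept area A:
P = \tfrac{1}{2}\,\rho\,A\,v^{3}
\qquad\Longrightarrow\qquad
\frac{P_2}{P_1} = \left(\frac{v_2}{v_1}\right)^{3}
```

Doubling the velocity, in other words, yields eight times the available power.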

The exponential relationship of power to velocity is similarly evident in the relationship between business competitiveness and IT application development. Specifically, market power should increase exponentially with increases in IT velocity.

We can define velocity as the measure of the rate of delivery, expressed in the time it takes for a finely grained business need to go from idea to implemented solution. Here, we are interested in assessing the rate at which functionality is delivered: a dozen features2 delivered in a 6 month time frame have an average velocity of 6 months, not 0.5 months, because each feature’s time to production is 6 months. Restated, we hold time constant to assess the time it takes for new features to go from end to end of the delivery pipeline.
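A toy calculation makes the distinction concrete; the dates are hypothetical, and the point is that per-feature lead time, not a throughput average, is what the business experiences:

```python
from datetime import date

# Twelve hypothetical features, all conceived at the start of a six-month
# programme and all released together at its end.
features = [(date(2007, 1, 1), date(2007, 7, 1))] * 12

# Naive throughput average: 6 months / 12 features = "0.5 months per feature".
throughput = 6 / len(features)

# Lead time: how long each business need actually took, idea to production.
lead_times_days = [(done - idea).days for idea, done in features]
avg_lead_time_months = sum(lead_times_days) / len(lead_times_days) / 30

print(throughput)            # 0.5  -- flattering, but meaningless to the business
print(avg_lead_time_months)  # ~6.0 -- the velocity as defined above
```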

The power derived from this velocity is the ability of the company to exert itself in the marketplace. That is, a company has power in the market if it attracts customers, employees, partners and investors through execution of its strategy; it also has power if it forces competitors to react if they are to retain what they have. This could be anything from a lower cost footprint, to features and functionality in solution offerings that competitors simply don’t have, to the best tools or solution offering that attracts the top talent. The more change that a firm creates in its market, the more influence it exerts over an industry: competitors will be forced to spend resources reacting to somebody else’s strategy, not pursuing their own.

In the aggregate, power is abstract in this definition. An economic model that assesses the extent to which a firm has market power would be substantially an academic exercise. There are, however, tangible indicators of market power that are worthy of mention in the annual report: net customer acquisition, relative cost footprint, and competitive hires and exits are all hard measures of market power. These are real and significant business benefits: indeed, making competitors react by destabilising their agenda is of exponentially greater value than that of the innovations themselves.

Because all of these can be enabled or amplified by IT, velocity is a key measure of IT effectiveness. It is a particularly critical concept for IT in both Governance and Innovation.

Velocity is a key metric of the first of our two Governance questions: are we getting value for money? Many companies’ market offerings or cost competitiveness are rooted in applied technology. It stands to reason that increasing the rate at which functionality is delivered increases business competitiveness, either by constantly adding capability or by aggressively reducing costs. Sustainable IT velocity maintains market power; an increase in this velocity increases market power. Velocity, then, is a key indicator of the first governance question in that it provides a quantified assessment of IT’s value proposition to an organisation.

It is also an indicator of how effectively IT drives innovation. Business innovation is the consistent, rapid and deliberate maturation of products, services, systems and capabilities. Again, as businesses are increasingly dependent on technology for capability and cost, the rate at which IT delivers functionality will indicate how effective IT is as an enabler of business innovation. Though intangible, this allows IT to position itself as a driver of business innovation and not simply a utility of technology services.

This is not simply a question of delivering IT solutions, but of how those solutions are consumed by the business. IT may make frequent deliveries, but if they are not consumed, organisational velocity is reduced. This is different from what happens in the market: the opportunities to exploit a delivered innovation may not materialise, in which case the potential market power achievable by IT velocity will simply not be realised. If, however, solutions are delivered by IT but not consumed by the business, velocity is never truly maximised in the first place. This is an important distinction, because IT is not governed exclusively by how it delivers; it is governed by how effectively it is consumed. Ignoring the “buy side” makes it too easy for an IT organisation to create false efficiencies or meaningless business results because it is, knowingly or otherwise, out of alignment with its host organisation. This lack of alignment doesn’t just leave power potential unrealised; it undermines velocity itself.

This is actionable market behaviour with historical precedent. General George S. Patton understood the need to constantly bring the fight to the enemy. “Patton… clearly appreciated the value of speed in the conduct of operations. Speed of movement often enables troops to minimise any advantage the enemy may temporarily gain but, more important, speed makes possible the full exploitation of every favorable opportunity and prevents the enemy from readjusting his forces to meet successive attacks. Thus through speed and determination each successive advantage is more easily and economically gained than the previous one. … [R]elentless and speedy pursuit is the most profitable action.”3 Inciting market change, then, determines whether you follow your strategy or react to another’s.

The ability to disrupt a market by introducing change allows a company to execute its strategy at the expense of its competitors. Business execution, increasingly rooted in technology, thus derives a great deal of its competitive advantage from the rate at which it can change its technology and systems. Velocity, the sustained rate at which business needs mature from expression to implemented solution, is therefore a key IT governance metric. It is, in fact, an expression of IT’s value proposition to its host organisation.



1 I am indebted to Roger Frechette for introducing me to this element of Bernoulli’s theorem. There are a number of articles highlighting his work on the Pearl River Tower, which when completed will be a remarkable structural and mechanical engineering achievement.
2 In this context, a feature is the same as an Agile story: "simple, independent expressions of work that are testable, have business value, and can be estimated and prioritised."
3 Eisenhower, Dwight D. Crusade in Europe Doubleday, 1948. p. 176.

Sunday, October 28, 2007

IT Governance Maximises IT Returns

In recent years, Michael Milken has turned his attention to health and medicine. Earlier this year, the Milken Institute released a report concluding that 7 chronic illnesses – diabetes, hypertension, cancer, etc. – are responsible for over $1 trillion in annual productivity losses in the United States. They go on to report that 70% of the cases of these 7 chronic illnesses are preventable through lifestyle change: diet, exercise, avoiding cigarettes and what not.1 In a recent interview on Bloomberg Television, Mr. Milken made the observation that because of the overwhelming number of chronic illness cases, medical professionals are forced to devote their attention to the wrong end of the health spectrum in the US. That is, instead of creating good by increasing life expectancy and enhancing quality of life through medical advancement, Mr. Milken argues that the vast majority of medical professionals are investing their energy into eliminating bad by helping people recover from poor decisions. It is obviously a sub-optimal use of medical talent, and through sheer size is showing signs of overwhelming the medical profession. It is a problem that will spiral downward until the root causes are eradicated and new cases of “self inflicted” illness abate.

This offers IT a highly relevant metaphor.

Many of the problems that undermine IT effectiveness are self-inflicted. Just as lifestyle decisions have a tremendous impact on quality of life, how we work has a tremendous impact on the results we achieve. If we work in a high-risk manner, we have a greater probability of our projects having problems and thus requiring greater maintenance and repair. Increased maintenance and repair will draw down returns. The best people in an IT organisation will be assigned to remediating technical brownfields instead of creating an IT organisation that drives alpha returns. That assumes, of course, that an IT organisation with excessive brownfields can remain a destination employer for top IT talent.

This suggests strongly that “how work is done” is an essential IT governance question. That is, IT governance must be concerned not only with measuring results, but also with knowing that the way in which those results are achieved complies with practices that minimise the probability of failure.

This wording is intentional: how work is performed reduces the probability of failure. If, in fact, lifestyle decisions can remove 70% of the probability that a person suffers any of 7 chronic conditions, so, too, can work practices reduce the probability that a project will fail. Let’s be clear: reducing the probability of failure is not the same as increasing the probability of success. That is, a team can work in such a way that it is less likely to cause problems for itself, by, e.g., writing unit tests, having continuous integration, developing to finely grained statements of business functionality, embedding QA in the development team, and so forth. Doing these isn’t the same as increasing the probability of success. Reducing the probability of failure is the reduction of unforced errors. In lifestyle terms, I may avoid certain actions that may cause cancer, but if cancer is written into my genetic code the deck is stacked against me. So it is with IT projects: an extremely efficient IT project will still fail if it is blindsided because a market doesn’t materialise for the solution being developed. From a solution perspective, we can do things to control the risk of an unforced error. This is controllable risk, but it is only risk internal to my project.
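A back-of-the-envelope calculation shows why the two are not the same; the probabilities are purely illustrative, and internal and external risks are assumed to be independent:

```python
# Hypothetical, independent sources of project failure.
p_internal = 0.40   # unforced errors: controllable through work practices
p_external = 0.20   # market and other external risk: not controllable

def p_success(p_int: float, p_ext: float) -> float:
    """The project succeeds only if neither failure source materialises."""
    return (1 - p_int) * (1 - p_ext)

print(p_success(p_internal, p_external))        # 0.48 -- baseline odds
# Disciplined practices cut unforced errors by 70% (the "lifestyle change"):
print(p_success(p_internal * 0.3, p_external))  # ~0.70 -- better odds, but
                                                # external risk still caps them
```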

This latter point merits particular emphasis. If we do things that minimise the risk of an unforced error – if we automate a full suite of unit tests, if we demand zero tolerance for code quality violations, if we incrementally develop complete slices of functionality – we intrinsically increase our tolerance for external (and thus unpredictable) risk. We are more tolerant of external risk factors because we don’t accumulate process debt or technical debt that makes it difficult for us to absorb risk. Indeed, we can work each day to maintain an unleveraged state of solution completeness: we don’t accumulate “debt,” mortgaging our future with downstream effort (such as “integration” and “testing”) against a partial solution that is alleged to be complete. Instead, we pull downstream tasks forward to happen with each and every code commit, thus maintaining solution completeness with every action we take.
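As a sketch of what “pulling downstream tasks forward” looks like in practice, consider a gate run on every commit; the script names and steps below are placeholders for whatever a given team’s toolchain actually provides:

```python
import subprocess
import sys

# Hypothetical per-commit gate: every task that would traditionally wait for a
# downstream "integration" or "testing" phase runs now, against this commit.
STEPS = [
    ["./build.sh"],              # compile and package
    ["./run_unit_tests.sh"],     # full automated unit-test suite
    ["./run_static_checks.sh"],  # zero tolerance for code quality violations
    ["./run_integration.sh"],    # integrate a complete slice of functionality
]

def gate() -> int:
    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            print(f"commit rejected: {' '.join(step)} failed")
            return 1
    print("solution completeness maintained: no debt accumulated on this commit")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```

The design point is that failure is cheap and immediate: debt is refused at the moment it would be incurred, rather than discovered in a downstream phase.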

One of our governance objectives must be that we are cognisant of how solutions are being delivered everywhere in the enterprise, because this is an indicator of their completeness. We must know that solutions satisfy a full set of business and technical expectations, not just that solutions are “code complete” awaiting an unmeasurable (and therefore opaque) process that makes code truly “complete.” These unmeasurable processes take time, and therefore cost; they are consequently a black box: we can time-box them, but we don’t really know the effort that will be required to pay down any accumulated debt. This opacity of IT is no different from opacity in an asset market: it makes the costs, and therefore the returns, of an IT asset much harder to quantify. The inability to demonstrate the functional completeness of a solution (e.g., because it is not developed end-to-end) as well as its technical quality (through continuous quality monitoring) creates uncertainty as to whether the asset will provide a high business return. This uncertainty drives down the value of the assets that IT produces. The net effect is that it drives down the value of IT, just as the same uncertainty drives down the value of a security.

If the governance imperative is to understand that results are being achieved in addition to knowing how they are being achieved, we must consider another key point: what must we do to know with certainty how work is being performed? Consider three recent news headlines:

  1. Restaurant reviews lack transparency: restaurateurs encourage employees to submit reviews to surveys such as Zagat, and award free meals to restaurant bloggers who often fail to report their free dining when writing their reviews.2

  2. Some watchmakers have created a faux premium cachet: top watchmakers have been collaborating with specialist auction houses to drive up prices by anonymously acting as the lead bidders on their own wares. The notion that a Brand X watch recently sold for tens of thousands of dollars at auction increases the brand’s retail marketability by suggesting it has investment-grade or heirloom properties. That the buyer in the auction might have been the firm itself would obviously destroy that perception, but it is hidden from the retail consumer.3

  3. The credit ratings of mortgage-backed securities created significant misinformation about risk exposure. Clearly, a AAA rated CDO heavily laden with securitised sub-prime mortgages was never worthy of the same investment grade as, say, GE corporate bonds. The notion that what amounted to high-risk paper could be given a triple-A rating implied characteristics of the security that weren’t entirely true.

Thus, we must be very certain that we understand fully our facts about how work is being done. Do you have a complete set of process metrics established with your suppliers? To what degree of certainty do you trust the data you receive for those metrics? How would you know if they’re gaming the criteria that you set down (e.g., meaningless tests are being written to artificially inflate the degree of test coverage)? We must also not allow for surrogates: we cannot govern effectively by measuring documentation. We must focus on deliverables, and the artifacts of those deliverables, for indicators of how work is performed. A quote dating to the early years of CBS News is still relevant today: “everybody is entitled to their own opinion, but not their own facts.”4 Thus, IT governance must not only pay attention to how work is being done, it must take great pains to ensure that the sources of data that tell us how that work is being done have a high degree of integrity. People may assert that they work in a low-risk manner, but that opinion may not withstand the scrutiny of fact-based management. As with any governance function, the order of the day is no different from the administration of nuclear proliferation treaties: “trust, but verify.”
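To make the gamed-coverage example concrete: the test below executes the code, so a line-coverage tool counts every line as covered, yet it can never fail. The settle_trade function is hypothetical, standing in for any production code under governance scrutiny.

```python
def settle_trade(amount: float, fee_rate: float) -> float:
    """Hypothetical production code whose test coverage is being reported."""
    return amount - amount * fee_rate

def test_settle_trade_meaningless():
    # Executes every line of settle_trade, so coverage reports 100%...
    settle_trade(1000.0, 0.01)
    # ...but with no assertion, no defect in settle_trade can ever fail it.

def test_settle_trade_meaningful():
    # A fact-based check: the coverage number is only as trustworthy
    # as the assertions behind it.
    assert settle_trade(1000.0, 0.01) == 990.0
```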

This entire notion is a significant departure from traditional IT management. As Anatole France said of the Third Republic: “And while this enfeebles the state it lightens the burden on the people. . . . And because it governs little, I pardon it for governing badly.”5 On the whole, IT professionals will feel much the same about their host IT organisations: why bother with all this effort to analyse process? All anybody cares about is that we produce “results” – for us, that means getting software into production no matter what. This process stuff looks pretty academic, a lot of colour-coded graphs in spreadsheets. It interferes with our focus on results.

Lackadaisical governance is potentially disastrous, because governance does matter. There is significant data to suggest that competent governance yields higher returns, and similarly that incompetent governance yields lower returns. A 2003 study by Paul Gompers found that, from a population of 1,500 firms in the 1990s, buying companies with good governance and selling those with poor governance would have produced returns that beat the market by 8.5% per year.6 This suggests that there is a strong correlation between capable governance and high returns. Conversely, according to this report, there were strong indicators in 2001 that firms such as Adelphia and Global Crossing had significant deficiencies in their corporate governance, and that these firms represented significant investment risk.

As Gavin Anderson, chairman and co-founder of GovernanceMetrics International, recently said, “Well governed companies face the same kind of market and competitor risks as everybody else, but the chance of an implosion caused by an ineffective board or management is way less.”7 The same applies to IT. Ignoring IT practices increases the opacity of IT operations, reducing IT returns. Governing IT so that it minimises self-inflicted wounds, specifically through awareness of “lifestyle” decisions, creates an IT capability that can drive alpha returns for the business.


1DeVol, Ross and Bedroussian, Armen with Anita Charuworn, Anusuya Chatterjee, In Kyu Kim, Soojung Kim and Kevin Klowden. An Unhealthy America: The Economic Burden of Chronic Disease -- Charting a New Course to Save Lives and Increase Productivity and Economic Growth October 2007
2McLaughlin, Katy. The Price of a Four Star Rating The Wall Street Journal, 6-7 October 2007.
3Meichtry, Stacy. How Top Watchmakers Intervene in Auctions The Wall Street Journal, 8 October 2007.
4Noonan, Peggy. Apocalypse No The Wall Street Journal, 27-28 October 2007.
5Shirer, William L. The Collapse of the Third Republic Simon and Schuster, 1969. Shirer attributes this quote to Anatole France, citing as his source Histoire des littératures, Vol. III, Encyclopédie de la Pléiade
6Greenberg, Herb. Making Sense of the Risks Posed by Governance Issues The Wall Street Journal, 26-27 May 2007.
7Ibid.

Wednesday, September 26, 2007

Investing in Strategic Capability versus Buying Tactical Capacity

US based IT departments are facing turbulent times. The cost efficiencies achieved through global sourcing face a triple threat to their fundamentals:
  1. The USD has eroded in value relative to other currencies in the past 6 years1 – this means the USD doesn’t buy as much global sourcing capacity as it did 6 years ago, particularly vis-à-vis its peer consumer currencies.

  2. The increase in global IT sourcing is outpacing the rate of development of highly qualified professionals in many markets2 – salaries are increasing as more jobs chase fewer highly qualified candidates, and turnover of IT staff is rising as people pursue higher compensation.
  3. Profitability growth at the high end of the IT consumer market remains strong – as the returns of firms at the high end continue to be strong (Goldman Sachs just had the 3rd best quarter in its history3), demand will intensify for highly capable people.
This could significantly change labour market dynamics. Since the IT bubble, the business imperative has been to drive down the unit cost of IT capacity (e.g., the cost of an IT professional per hour). This has been achieved substantially through labour arbitrage – sourcing IT jobs from the lowest-cost provider or geography. However, the reduced buying power of the USD, combined with increasing numbers of jobs chasing fewer people, plus an increase in demand at the high end of the labour market, means that simple labour arbitrage will have less impact on the bottom line. As IT costs change to reflect these market conditions, US-based IT organisations will face an erosion of capability.

In one sense, labour is to the IT industry as jet fuel is to the airline industry: IT is beholden to its people, just as airplanes don’t fly without fuel. For quite some time, we’ve attempted to procure labour using a commodity approach: somebody estimates they have x hours of need, which means they need y people, who will then be globally sourced from the least expensive provider. The “unit cost optimisation” model of pricing IT capability defaulted into success because of the significant cost disparity between local and offshore staff. The aforementioned market trends suggest that the spread may narrow. If it does, a number of the underlying assumptions are no longer valid, and the fundamental flaws in most labour arbitrage models are exposed: specifically, that IT needs are uniform, and that IT capabilities are uniform and can be defined as basic skills and technical competencies.
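A stylised calculation, using illustrative figures rather than sourced data, shows how quickly the spread can narrow when currency erosion and wage inflation move together:

```python
# Illustrative figures only.
onshore_rate_usd = 100.0      # fully loaded cost per hour, local staff
offshore_rate_local = 1400.0  # cost per hour in the supplier's currency

fx_then = 46.0         # local currency units per USD before the decline
fx_now = 32.0          # after the USD's erosion
wage_inflation = 1.15  # supplier salaries up 15% amid rising demand

spread_then = onshore_rate_usd - offshore_rate_local / fx_then
spread_now = onshore_rate_usd - (offshore_rate_local * wage_inflation) / fx_now

print(round(spread_then, 2))  # ~69.57 USD saved per sourced hour, before
print(round(spread_now, 2))   # ~49.69 USD saved now -- a ~29% narrowing
```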

Unlike jet fuel, labour isn’t a commodity. Not every hour of capacity is the same. There are grades of quality of capability that defy commoditisation. This means there is a quality dimension that is present yet substantially invisible when we assess capacity. Macro-level skill groupings are meaningless because they’re not portable (e.g., one organisation’s senior developer is another’s junior). They also fail to account for labour market trends: if the population of Java coders increases in a specific market but new entrants lack aptitude and experience and their training is inferior, we have a declining capability trend that is completely absent from our sourcing model. Nor is capacity linear – two people of lower capability will not be as effective as one person of high capability, and too many low-capability people create more problems than they solve. An IT organisation which has stabilised around simple unit-cost optimisation will find itself at the mercy of a market which it may not fully understand, with characteristics which haven’t been factored into its forecasts.

The commodity model also ignores how advanced IT systems are delivered. High-return business solutions don’t fit the “mass production” model, where coders repetitively apply code fragments following exacting rules and specifications. Instead, business and IT collaborate in a succession of decisions as they navigate emerging business need whilst constantly integrating back to the tapestry of existing IT components and business systems. This requires a high degree of skill from those executing. It also requires a high degree of meta knowledge or “situational awareness,” that is, domain knowledge and environmental familiarity necessary to deliver and perpetuate these IT assets. This includes everything from knowing which tools and technology stacks are approved for use, to how to integrate with existing systems and components, to what non-functional requirements are most important, to how solutions pass certification. Combined, this meta knowledge defines the difference between having people who can code to an alleged state of “development complete” versus having people who can deliver solutions into production.

Because the assets that drive competitiveness through operations are delivered through a high-capability IT staff, unit cost minimisation is not a viable strategy if IT is to drive alpha returns. Strategic IT is therefore an investment in capability. That is, we are investing not just in the production of assets that automate operations, we are investing in the ability to continuously adjust those IT assets with minimal disruption, such that they continue to support evolving operational efficiencies. This knowledge fundamentally rests with people. The value of this knowledge is completely invisible if we’re buying technology assets based on cost.

This brings us back to current market conditions. At the moment, tactical cost minimisation works against the USD-denominated market competitor. The EUR, CHF, AUD, CAD or GBP competitor can afford to increase salaries wherever sourced without as much bottom-line impact as their USD competitors. They subsequently have an advantage in attracting new talent, and are better positioned to lure away highly capable people from US-based competitors. In addition, the increased cost of IT for the US-based competitor might mean more draconian measures, such as staff reductions, to meet budget expectations. To avoid the destruction of capability, a US IT organisation may look to simply shift sourcing from international to local markets. But this shift is not without risk in durability (will the USD rise again to match historical averages?), competitive threat (other firms will follow the same strategy and drive up local market salaries), and cost of change (nothing happens in zero time, and the loss and replacement of meta knowledge comes at a cost). Clearly, global sourcing is no longer a simple cost equation. It is complex, involving a hedge on investing in sustainable capability development relative to competitive threats and exchange rate fluctuations.

Responding to this challenge requires that the IT organisation have a mature governance capability. Why governance? Because surviving the convulsions in the cost of the “jet fuel” of the IT industry requires that we frame the complete picture of performance: that value is delivered, and that expectations (ranging from quality to security to regulatory compliance) are fully satisfied. IT doesn’t do this especially well today. It suffers no shortage of metrics, but very few are business-facing. The scarcity of business-oriented metrics gives “cost per hour” that much more prominence, and fuels the unit cost approach to IT management.

Breaking out of this requires assessing the cost of throughput of IT as a whole and of teams in particular, not of the individual. IT is only as capable as the productivity of its cross-functional execution; specifically, how effectively IT teams steer business needs from expression to production, subject to all the oddities of their particular business environment. If the strength of currently sourced teams can be quantitatively assessed, the organisational impact of a potential change in IT sourcing can be properly framed. The lack of universal capability assessment, and the immaturity of team-based results analysis, mean that an IT governance function must define these performance metrics for itself, relative to its industry, with cooperation and acceptance from its board. Without them, IT will be relegated to a tactical role, forever chasing the elusive “lowest unit cost” and perpetually disappointing its paymasters, struggling to explain the costs of execution which cannot be accounted for in a unit cost model.

If an IT organisation is focused on team throughput and overall capability, it can respond strategically to this threat. Just as jet fuel supply is secured and hedged by an airline, so must labour supply be strategically managed by IT. This means managing the labour supply chain4 to continuously source high-capability people, as opposed to recruiting to fill positions as they become vacant. This requires managing supply and demand by doing such things as anticipating need and critically assessing turnover, creating recruiting channels and candidate sources, identifying high-capability candidates, rotating people through assignments, understanding and meeting professional development needs, setting expectations for high performance, providing professional challenges, offering training and skill development, critically assessing performance, managing careers and opportunities, correcting poor role fits and bad hiring decisions, and managing exits.

Doing these things builds a durable and resilient organisation – attributes that are invisible in a cost center, but critical characteristics of a strategic capability. This is, ultimately, the responsibility of an IT organisation, not an HR department. HR may provide guidelines, but this is IT’s problem to solve; it cannot abdicate responsibility for obtaining its "raw materials." Clearly, building a labour pipeline is a very challenging problem, but it's the price of admission if you're going to beat the market.

IT drives alpha returns not just through the delivery of strategic IT assets, but by investing in the capability to consistently deliver those assets. If capability moves with the labour market, an IT organisation will yield no better than beta returns to the business. Current market indicators suggest that it will be difficult for US-based firms to maintain their current levels of capability, thus the business returns driven by an IT capability that moves with the market are likely to decline. Tactical buyers of IT are facing a cost disparity, and will have few cards to play that don't erode capability. Strategic investors in IT can capitalise on these trends to intensify strengths, and even disrupt competitors, through aggressive management of the labour pipeline.


1 Comparing August 2001 to August 2007 monthly averages, the USD declined 28% to the GBP, 34% to the EUR, 28% to the CHF, 37% to the AUD, 31% to the CAD, 13% to the INR, 8% to the CNY. Exchange rate data was pulled from Oanda.

2 Technology job growth and salaries are on the rise worldwide. Two recent articles highlight India and the US. Also, I’ve referenced the following two previously, but Adrian Wooldridge makes a compelling argument for the increased competition for talent, and there’s ample data on the gap between job growth and the volume of new entrants. There are some recent articles evaluating the quality of talent, but I don’t have those handy.

3 Profitability among market leaders and overall technology sector growth continues to be strong globally.

4 I am indebted to Greg Reiser for this term.