I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Sunday, October 28, 2007

IT Governance Maximises IT Returns

In recent years, Michael Milken has turned his attention to health and medicine. Earlier this year, the Milken Institute released a report concluding that 7 chronic illnesses – diabetes, hypertension, cancer, etc. – are responsible for over $1 trillion in annual productivity losses in the United States. The report goes on to say that 70% of the cases of these 7 chronic illnesses are preventable through lifestyle change: diet, exercise, avoiding cigarettes and what not.1 In a recent interview on Bloomberg Television, Mr. Milken made the observation that because of the overwhelming number of chronic illness cases, medical professionals are forced to devote their attention to the wrong end of the health spectrum in the US. That is, instead of creating good by increasing life expectancy and enhancing quality of life through medical advancement, Mr. Milken argues that the vast majority of medical professionals are investing their energy into eliminating bad by helping people recover from poor decisions made. It is obviously a sub-optimal use of medical talent, and through sheer volume of cases it is showing signs of overwhelming the medical profession. It is a problem that will spiral downward until the root causes are eradicated and new cases of “self-inflicted” illness abate.

This offers IT a highly relevant metaphor.

Many of the problems that undermine IT effectiveness are self-inflicted. Just as lifestyle decisions have a tremendous impact on quality of life, how we work has a tremendous impact on the results we achieve. If we work in a high-risk manner, we have a greater probability of our projects having problems and thus requiring greater maintenance and repair. Increased maintenance and repair will draw down returns. The best people in an IT organisation will be assigned to remediating technical brownfields instead of creating an IT organisation that drives alpha returns. That assumes, of course, that an IT organisation with excessive brownfields can remain a destination employer for top IT talent.

This suggests strongly that “how work is done” is an essential IT governance question. That is, IT governance must not be concerned only with measuring results, but also with knowing that the way in which those results are achieved is in compliance with practices that minimise the probability of failure.

This wording is intentional: how work is performed reduces the probability of failure. If, in fact, lifestyle decisions can remove 70% of the probability that a person suffers any of 7 chronic conditions, so, too, can work practices reduce the probability that a project will fail. Let’s be clear: reducing the probability of failure is not the same as increasing the probability of success. That is, a team can work in such a way that it is less likely to cause problems for itself, by, e.g., writing unit tests, having continuous integration, developing to finely grained statements of business functionality, embedding QA in the development team, and so forth. Doing these things isn’t the same as increasing the probability of success. Reducing the probability of failure is the reduction of unforced errors. In lifestyle terms, I may avoid certain actions that may cause cancer, but if cancer is written into my genetic code the deck is stacked against me. So it is with IT projects: an extremely efficient IT project will still fail if it is blindsided because a market doesn’t materialise for the solution being developed. From a solution perspective, we can do things to control the risk of an unforced error. That risk is controllable, but it is only the risk internal to the project.
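
To make the distinction concrete, consider a minimal sketch of an automated unit test guarding against an unforced error. The function and figures below are invented for illustration; the point is that the test catches a self-inflicted defect on every run, while doing nothing to guarantee that a market materialises for the feature.

```python
import unittest

def allocate_payment(amount_cents, accounts):
    """Split a payment across accounts; any remainder goes to the first."""
    if not accounts:
        raise ValueError("no accounts to allocate to")
    share, remainder = divmod(amount_cents, len(accounts))
    return {acct: share + (remainder if i == 0 else 0)
            for i, acct in enumerate(accounts)}

class AllocatePaymentTest(unittest.TestCase):
    def test_allocation_is_lossless(self):
        # The unforced error -- losing pennies to rounding -- is caught on
        # every commit, not in a downstream "integration" or "testing" phase.
        result = allocate_payment(1001, ["a", "b", "c"])
        self.assertEqual(sum(result.values()), 1001)

    def test_empty_account_list_is_rejected(self):
        with self.assertRaises(ValueError):
            allocate_payment(1001, [])

if __name__ == "__main__":
    unittest.main()
```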

This latter point merits particular emphasis. If we do things that minimise the risk of an unforced error – if we automate a full suite of unit tests, if we demand zero tolerance for code quality violations, if we incrementally develop complete slices of functionality – we intrinsically increase our tolerance for external (and thus unpredictable) risk. We are more tolerant of external risk factors because we don’t accumulate process debt or technical debt that makes it difficult for us to absorb risk. Indeed, we can work each day to maintain an unleveraged state of solution completeness: we don’t accumulate “debt,” mortgaging our future with downstream effort (such as “integration” and “testing”) owed against a partial solution that is alleged to be complete. Instead, we pull downstream tasks forward to happen with each and every code commit, thus maintaining solution completeness with every action we take.

One of our governance objectives must be that we are cognisant of how solutions are being delivered everywhere in the enterprise, because this is an indicator of their completeness. We must know that solutions satisfy a full set of business and technical expectations, not just that solutions are “code complete” awaiting an unmeasurable (and therefore opaque) process that makes code truly “complete.” These unmeasurable processes take time, and therefore cost; they are consequently a black box: we can time-box them, but we don’t really know the effort that will be required to pay down any accumulated debt. This opacity of IT is no different from opacity in an asset market: it makes the costs, and therefore the returns, of an IT asset much harder to quantify. The inability to demonstrate the functional completeness of a solution (e.g., because it is not developed end-to-end) as well as its technical quality (through continuous quality monitoring) creates uncertainty as to whether the asset will provide a high business return. This uncertainty drives down the value of the assets that IT produces. The net effect is that it drives down the value of IT, just as the same uncertainty drives down the value of a security.

If the governance imperative is to understand that results are being achieved in addition to knowing how they are being achieved, we must consider another key point: what must we do to know with certainty how work is being performed? Consider three recent news headlines:

  1. Restaurant reviews lack transparency: restaurateurs encourage employees to submit reviews to surveys such as Zagat, and award free meals to restaurant bloggers who often fail to report their free dining when writing their reviews.2

  2. Some watchmakers have created a faux premium cachet: top watchmakers have been collaborating with specialist auction houses to drive up prices by anonymously acting as the lead bidders on their own wares. The notion that a Brand X watch recently sold for tens of thousands of dollars at auction increases the brand’s retail marketability by suggesting it has investment-grade or heirloom properties. That the buyer in the auction might have been the firm itself would obviously destroy that perception, but that fact is hidden from the retail consumer.3

  3. The credit ratings of mortgage-backed securities significantly misrepresented risk exposure. Clearly, a AAA-rated CDO heavily laden with securitised sub-prime mortgages was never worthy of the same investment grade as, say, GE corporate bonds. The notion that what amounted to high-risk paper could be given a triple-A rating implied characteristics of the security that weren’t entirely true.

Thus, we must be very certain that we understand fully our facts about how work is being done. Do you have a complete set of process metrics established with your suppliers? To what degree of certainty do you trust the data you receive for those metrics? How would you know if they’re gaming the criteria that you set down (e.g., meaningless tests are being written to artificially inflate the degree of test coverage)? We must also not allow for surrogates: we cannot govern effectively by measuring documentation. We must focus on deliverables, and the artifacts of those deliverables, for indicators of how work is performed. A quote dating to the early years of CBS News is still relevant today: “everybody is entitled to their own opinion, but not their own facts.”4 Thus, IT governance must not only pay attention to how work is being done, it must take great pains to ensure that the sources of data that tell us how that work is being done have a high degree of integrity. People may assert that they work in a low-risk manner, but that opinion may not withstand the scrutiny of fact-based management. As with any governance function, the order of the day is no different than administration of nuclear proliferation treaties: “trust, but verify.”
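
As an illustration of how such criteria get gamed, consider the following sketch (entirely hypothetical). Both test classes below produce identical coverage figures, but only one of them can ever fail; a coverage number taken on its own, unverified, cannot tell them apart.

```python
import unittest

def risk_band(score):
    """Map a credit score to a risk band."""
    if score >= 700:
        return "A"
    if score >= 600:
        return "B"
    return "C"

class GamedTest(unittest.TestCase):
    def test_risk_band(self):
        # Executes every branch -- a coverage tool will report 100% --
        # but asserts nothing, so this test can never fail.
        for score in (750, 650, 550):
            risk_band(score)

class HonestTest(unittest.TestCase):
    def test_risk_band(self):
        # Identical coverage, but the behaviour is actually verified.
        self.assertEqual(risk_band(750), "A")
        self.assertEqual(risk_band(650), "B")
        self.assertEqual(risk_band(550), "C")

if __name__ == "__main__":
    unittest.main()
```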

This entire notion is a significant departure from traditional IT management. As Anatole France said of the Third Republic: “And while this enfeebles the state it lightens the burden on the people. . . . And because it governs little, I pardon it for governing badly.”5 On the whole, IT professionals will feel much the same about their host IT organisations: why bother with all this effort to analyse process? All anybody cares about is that we produce “results” – for us, this means getting software into production no matter what. This process stuff looks pretty academic, a lot of colour-coded graphs in spreadsheets. It interferes with our focus on results.

Lackadaisical governance is potentially disastrous because governance does matter. There is significant data to suggest that competent governance yields higher returns, and similarly that incompetent governance yields lower returns. According to a 2003 study published by Paul Gompers, buying companies with good governance and selling those with poor governance from a population of 1,500 firms in the 1990s would have produced returns that beat the market by 8.5% per year.6 This suggests that there is a strong correlation between capable governance and high returns. Conversely, according to this report, there were strong indicators in 2001 that firms such as Adelphia and Global Crossing had significant deficiencies in their corporate governance, and that these firms represented significant investment risk.

As Gavin Anderson, chairman and co-founder of GovernanceMetrics International, recently said, “Well governed companies face the same kind of market and competitor risks as everybody else, but the chance of an implosion caused by an ineffective board or management is way less.”7 The same applies to IT. Ignoring IT practices increases the opacity of IT operations, and that opacity reduces IT returns. Governing IT so that it minimises self-inflicted wounds, specifically through awareness of “lifestyle” decisions, creates an IT capability that can drive alpha returns for the business.


1 DeVol, Ross, and Armen Bedroussian, with Anita Charuworn, Anusuya Chatterjee, In Kyu Kim, Soojung Kim and Kevin Klowden. “An Unhealthy America: The Economic Burden of Chronic Disease – Charting a New Course to Save Lives and Increase Productivity and Economic Growth.” Milken Institute, October 2007.
2 McLaughlin, Katy. “The Price of a Four-Star Rating.” The Wall Street Journal, 6-7 October 2007.
3 Meichtry, Stacy. “How Top Watchmakers Intervene in Auctions.” The Wall Street Journal, 8 October 2007.
4 Noonan, Peggy. “Apocalypse No.” The Wall Street Journal, 27-28 October 2007.
5 Shirer, William L. The Collapse of the Third Republic. Simon and Schuster, 1969. Shirer attributes this quote to Anatole France, citing as his source Histoire des littératures, Vol. III, Encyclopédie de la Pléiade.
6 Greenberg, Herb. “Making Sense of the Risks Posed by Governance Issues.” The Wall Street Journal, 26-27 May 2007.
7 Ibid.

Wednesday, September 26, 2007

Investing in Strategic Capability versus Buying Tactical Capacity

US based IT departments are facing turbulent times. The cost efficiencies achieved through global sourcing face a triple threat to their fundamentals:
  1. The USD has eroded in value relative to other currencies in the past 6 years1 – this means the USD doesn’t buy as much global sourcing capacity as it did 6 years ago, particularly vis-à-vis its peer consumer currencies.

  2. The increase in global IT sourcing is outpacing the rate of development of highly-qualified professionals in many markets2 – salaries are increasing as more jobs chase fewer highly-qualified candidates, and turnover of IT staff is rising as people pursue higher compensation.

  3. Profitability growth at the high end of the IT consumer market remains strong – Goldman Sachs just had the third-best quarter in its history3 – and so demand will intensify for highly capable people.

This could significantly change labour market dynamics. Since the IT bubble, the business imperative has been to drive down the unit cost of IT capacity (e.g., the cost of an IT professional per hour). This has been achieved substantially through labour arbitrage – sourcing IT jobs from the lowest-cost provider or geography. However, the reduced buying power of the USD, combined with increasing numbers of jobs chasing fewer people, plus an increase in demand at the high end of the labour market, means that simple labour arbitrage will have less impact on the bottom line. As IT costs change to reflect these market conditions, US-based IT organisations will face an erosion of capability.
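
To see how these effects compound, consider a back-of-the-envelope sketch. Every figure below is an assumption chosen for illustration, not data drawn from the sources cited in the notes:

```python
# All figures are hypothetical, for illustration only.
rate_2001 = 0.90          # USD per unit of the offshore currency (assumed)
rate_2007 = 1.20          # the USD has weakened about a third (assumed)
local_salary = 20_000     # offshore salary in local currency, 2001 (assumed)
wage_inflation = 1.5      # 50% local wage growth over the period (assumed)

usd_cost_2001 = local_salary * rate_2001
usd_cost_2007 = local_salary * wage_inflation * rate_2007

print(f"2001 cost: ${usd_cost_2001:,.0f}")   # $18,000
print(f"2007 cost: ${usd_cost_2007:,.0f}")   # $36,000
print(f"increase:  {usd_cost_2007 / usd_cost_2001 - 1:.0%}")  # 100%
```

Neither effect alone looks fatal; compounded, the USD cost of the very same offshore role doubles.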

In one sense, labour is to the IT industry as jet fuel is to the airline industry: IT is beholden to its people, just as airplanes don’t fly without fuel. For quite some time, we’ve attempted to procure labour using a commodity approach: somebody estimates they have x hours of need, which means they need y people, who will then be globally sourced from the least expensive provider. The “unit cost optimisation” model of pricing IT capability defaulted into success because of the significant cost disparity between local and offshore staff. The aforementioned market trends suggest that the spread may narrow. If it does, a number of the underlying assumptions are no longer valid, and the fundamental flaws in most labour arbitrage models are exposed: specifically, the assumptions that IT needs are uniform, and that IT capabilities are uniform and can be defined as basic skills and technical competencies.

Unlike jet fuel, labour isn’t a commodity. Not every hour of capacity is the same. There are grades of quality of capability that defy commoditisation. This means there is a quality dimension that is present yet substantially invisible when we assess capacity. Macro-level skill groupings are meaningless because they’re not portable (e.g., one organisation’s senior developer is another’s junior). They also fail to account for labour market trends: if the population of Java coders increases in a specific market but new entrants lack aptitude and experience and their training is inferior, we have a declining capability trend that is completely absent from our sourcing model. Nor is capacity linear – two people of lower capability will not be as effective as one person of high capability, and too many low-capability people create more problems than they solve. An IT organisation which has stabilised around simple unit-cost optimisation will find itself at the mercy of a market which it may not fully understand, with characteristics which haven’t been factored into its forecasts.
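
The non-linearity is easy to model. The sketch below is a toy model, not an established formula: assume each pair of collaborators pays a small coordination tax.

```python
def effective_throughput(capabilities, overhead_per_pair=0.05):
    """Toy model (an assumption, not a measured law): total output is the
    sum of individual capability minus a tax for each collaborating pair."""
    n = len(capabilities)
    pairs = n * (n - 1) / 2
    return max(0.0, sum(capabilities) - overhead_per_pair * pairs)

print(effective_throughput([1.0]))         # one strong person  -> 1.0
print(effective_throughput([0.5, 0.5]))    # two weaker people  -> about 0.95
print(effective_throughput([0.25] * 4))    # four weak people   -> about 0.70
```

The numbers are arbitrary; the shape of the curve is the point. Adding lower-capability people yields diminishing, and eventually negative, returns.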

The commodity model also ignores how advanced IT systems are delivered. High-return business solutions don’t fit the “mass production” model, where coders repetitively apply code fragments following exacting rules and specifications. Instead, business and IT collaborate in a succession of decisions as they navigate emerging business need whilst constantly integrating back to the tapestry of existing IT components and business systems. This requires a high degree of skill from those executing. It also requires a high degree of meta knowledge or “situational awareness,” that is, domain knowledge and environmental familiarity necessary to deliver and perpetuate these IT assets. This includes everything from knowing which tools and technology stacks are approved for use, to how to integrate with existing systems and components, to what non-functional requirements are most important, to how solutions pass certification. Combined, this meta knowledge defines the difference between having people who can code to an alleged state of “development complete” versus having people who can deliver solutions into production.

Because the assets that drive competitiveness through operations are delivered through a high-capability IT staff, unit cost minimisation is not a viable strategy if IT is to drive alpha returns. Strategic IT is therefore an investment in capability. That is, we are investing not just in the production of assets that automate operations, we are investing in the ability to continuously adjust those IT assets with minimal disruption, such that they continue to support evolving operational efficiencies. This knowledge fundamentally rests with people. The value of this knowledge is completely invisible if we’re buying technology assets based on cost.

This brings us back to current market conditions. At the moment, tactical cost minimisation works against the USD-denominated market competitor. The EUR, CHF, AUD, CAD or GBP competitor can afford to increase salaries wherever sourced without as much bottom-line impact as their USD competitors. They subsequently have an advantage in attracting new talent, and are better positioned to lure away highly capable people from US-based competitors. In addition, the increased cost of IT for the US-based competitor might mean more draconian measures, such as staff reductions, to meet budget expectations. To avoid the destruction of capability, a US IT organisation may look to simply shift sourcing from international to local markets. But this shift is not without its risks: durability (will the USD rise again to match historical averages?), competitive threat (other firms will follow the same strategy and drive up local market salaries), and cost of change (nothing happens in zero time, and the loss and replacement of meta knowledge comes at a cost). Clearly, global sourcing is no longer a simple cost equation. It is complex, involving a hedge on investing in sustainable capability development relative to competitive threats and exchange rate fluctuations.

Responding to this challenge requires that the IT organisation have a mature governance capability. Why governance? Because surviving the convulsions in the cost of the “jet fuel” of the IT industry requires that we frame the complete picture of performance: that value is delivered, and that expectations (ranging from quality to security to regulatory compliance) are fully satisfied. IT doesn’t do this especially well today. It suffers no shortage of metrics, but very few are business-facing. The scarcity of business-oriented metrics gives “cost per hour” that much more prominence, and fuels the unit cost approach to IT management.

Breaking out of this requires assessing the cost of throughput of IT as a whole and of teams in particular, not of the individual. IT is only as capable as the productivity of its cross-functional execution; specifically, how effectively IT teams steer business needs from expression to production, subject to all the oddities of that particular business environment. If the strength of currently sourced teams can be quantitatively assessed, the organisational impact of a potential change in IT sourcing can be properly framed. The lack of universal capability assessment, and the immaturity of team-based results analysis, mean that an IT governance function must define these performance metrics for itself, relative to its industry, with cooperation and acceptance from its board. Without them, IT will be relegated to a tactical role, forever chasing the elusive “lowest unit cost” and perpetually disappointing its paymasters, struggling to explain costs of execution which cannot be accounted for in a unit cost model.
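
By way of illustration (all figures invented), the difference between the two lenses can be stark: the team with the lower hourly rate can easily be the more expensive source of delivered functionality.

```python
# Hypothetical teams over the same quarter. "Features" stands in for any
# unit of delivered business functionality you can count consistently.
teams = {
    "Team A": {"rate_per_hr": 95, "hours": 1600, "features_delivered": 48},
    "Team B": {"rate_per_hr": 40, "hours": 1600, "features_delivered": 12},
}

for name, t in teams.items():
    cost = t["rate_per_hr"] * t["hours"]
    per_feature = cost / t["features_delivered"]
    print(f"{name}: ${cost:,} total, ${per_feature:,.0f} per feature")
# Team A: $152,000 total, $3,167 per feature
# Team B: $64,000 total, $5,333 per feature
```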

If an IT organisation is focused on team throughput and overall capability, it can strategically respond to this threat. Just as jet fuel supply is secured and hedged by an airline, so must labour supply be strategically managed by IT. This means managing the labour supply chain4 to continuously source high-capability people, as opposed to recruiting to fill positions as they become vacant. This requires managing supply and demand by doing such things as anticipating need and critically assessing turnover, creating recruiting channels and candidate sources, identifying high-capability candidates, rotating people through assignments, understanding and meeting professional development needs, setting expectations for high performance, providing professional challenges, offering training and skill development, critically assessing performance, managing careers and opportunities, correcting poor role fits and bad hiring decisions, and managing exits.

Doing these things builds a durable and resilient organisation – attributes that are invisible in a cost center, but critical characteristics of a strategic capability. This is, ultimately, the responsibility of an IT organisation, not an HR department. HR may provide guidelines, but this is IT’s problem to solve; it cannot abdicate responsibility for obtaining its "raw materials." Clearly, building a labour pipeline is a very challenging problem, but it's the price of admission if you're going to beat the market.

IT drives alpha returns not just through the delivery of strategic IT assets, but by investing in the capability to consistently deliver those assets. If capability moves with the labour market, an IT organisation will yield no better than beta returns to the business. Current market indicators suggest that it will be difficult for US-based firms to maintain their current levels of capability, thus the business returns driven by an IT capability that moves with the market are likely to decline. Tactical buyers of IT are facing a cost disparity, and will have few cards to play that don’t erode capability. Strategic investors in IT can capitalise on these trends to intensify strengths, and even disrupt competitors, through aggressive management of the labour pipeline.


1 Comparing August 2001 to August 2007 monthly averages, the USD declined 28% to the GBP, 34% to the EUR, 28% to the CHF, 37% to the AUD, 31% to the CAD, 13% to the INR, 8% to the CNY. Exchange rate data was pulled from Oanda.

2 Technology job growth and salaries are on the rise worldwide. Two recent articles highlight India and the US. I’ve referenced the following two previously, but Adrian Wooldridge makes a compelling argument for the increased competition for talent, and there’s ample data on the gap between job growth and the volume of new entrants. There are also recent articles evaluating the quality of talent, but I don’t have those handy.

3 Profitability among market leaders and overall technology sector growth continues to be strong globally.

4 I am indebted to Greg Reiser for this term.

Friday, August 24, 2007

Good Management Can Work Miracles

Pharmaceutical companies do not successfully deliver drugs just because they hire a lot of highly skilled researchers in lab coats. They deliver drugs because they have people to secure funding for research of some drugs over others, to take them through lab and clinical trials, to steer them through regulatory approval, to manufacture them, to make doctors aware of them, to distribute them to hospitals and pharmacies, and to follow up on results. When it all comes together, the overall benefit of the resulting system can be incredible, such as an increase in life expectancy. As Thomas Teal wrote succinctly, “Good management works miracles.”1

The same applies to an Information Technology organisation that is core to achieving alpha returns: it needs the management practices to match high-capability people. Consider Xerox PARC. In the 1970s it spawned tremendous innovation in personal computing technology. From those innovations came products, solutions, and even categories that didn’t previously exist. Billions of dollars of revenue and profitability were generated – for everybody but Xerox. Why? Bright people were inventing and innovating, but nobody was there to monetise their work.2

In general, IT has structural and social deficiencies that conspire against the development of a management capability. Technical IT jobs offer individual satisfaction on a daily basis, with frequent feedback and acknowledgement of success at solving a very specific problem or challenge. IT reward systems are also often based on individual performance tied to granular and highly focused statements of accomplishment. This is contrary to IT management positions, where the results of decisions made may not be realised for weeks or months, measures of success are often less tangible, and rewards are based on the performance of a large collection of individuals. It is also not uncommon for a company to belittle the value of management by defining it as a collection of supervisory or hygienic tasks, e.g., ensuring everybody on a team has taken requisite compliance training each quarter and submits their timesheet each week. In other instances, line management may simply be denied all decision-making, relegated to polling subordinates to find out what’s going on so as to summarise and report upwards, while also being given messages from higher up to deliver to the rank and file. These aren’t management practices, they’re tasks of administrative convenience masquerading as management.

These deficiencies are compounded by an IT industry that, on the whole, doesn’t place much value on management. Technical people promoted into management positions will very often resist management responsibilities (e.g., electing to continue to perform technical tasks), and also be highly unleveraged in a team (one manager to dozens of direct reports). There is more precise definition and greater mobility of technical jobs than of management jobs. The same is true for skill acquisition: there is a wealth of IT-oriented books, magazines, journals and publications on very specific technologies or applications of technologies, but nothing near that depth on matters of management. And all too often, what passes for management guidance in books and publications are lightweight abstractions of lessons learnt, wrapped in layers of cheerleading and self-esteem building directed at the reader, not principles of sound management science. No surprise, then, that it can be far more attractive for an IT professional to prefer a career path of “individual contributor” over “manager.” Collectively, this does nothing to swell the ranks of capable managers.

One of the root causes is that IT is considered a technology-centric business, as opposed to a people-centric one.3 By definition, IT solutions – what the business is paying for – are not delivered by technology, but by people. The misplaced orientation toward technology as opposed to people means that management tends to focus not on what it should – “getting things done through people” – but on what it should not – the “shiny new toys” of technology assets. This misplaced focus creates a deficiency in management practices, and widens the capability gulf relative to the demands being placed on IT today. It is a structural inhibitor to success in delivering solutions.

Top-down, high-control management techniques have proven to be ill-suited to the IT industry. Amongst other things, this is because of a high concentration of highly-skilled and creative (e.g., clever at design, at problem solving, and so forth) people who don’t respond to that style of management. It is also because top-down practices assume an unchanging problem space, and thus an unchanging solution space. Unfortunately, this is not a characteristic of much of what happens in technology: IT solutions are produced in a dynamic, creative solution space. For one thing, most projects are in a constant state of flux. The goal, the defined solution, the people, and the technologies constantly change. This means day-to-day decisions will not move in lock step with initial assumptions. For another, in most projects there is not a shared and fully harmonised understanding of the problem: if anybody on the team has a different understanding of the problem, if there is latency in communicating change to each member of the team, or if somebody simply wants to solve an interesting technical problem that may or may not be of urgent need to the business problem at hand, decisions on the ground will be out of alignment with overall business needs.

Each IT professional takes dozens of decisions each day. The principal objective in managing a highly-capable organisation is not to take decisions for each person, but to keep the decisions people take in alignment. Management of a professional capability is thus an exercise in making lots of adjustments, not pushing blindly ahead toward a solution. But facilitating lots of adjustments isn’t a simple problem, because management doesn’t just happen by itself in a team. As Dr. George Lopez found out, changing to self-managed work teams isn’t a turnkey solution:

  • With no leaders, and no rules, "nothing was getting done, except people were spending a lot of time talking." After about a year and a half, he decided teams should elect leaders.4

But that, too, isn’t enough. The basic management tools and structures must be present or this simply doesn’t work. Consider this description of Super Aguri’s F1 pre-race meetings:

  • "The process is very formal," says Super Aguri sporting director Graham Taylor. "I chair our meetings and ask the individuals present to talk in turn. Anyone late gets a bollocking, only one person is allowed to talk at a time, and it absolutely is not a discussion. It … is absolutely the best way to share a lot of information in a short passage of time. … If you don’t need to be there, you’re not."


  • … While the briefing process has always been with F1, the structure has evolved to absorb the increasing complexity of cars and the concomitant increase in team size. "When I started in F1 there were 50 people in the team, now there are 500," says [Sam] Michael, […Williams technical director]. "If you don’t have a firm hold, not everyone gets all the information. You need to have structure.”5


Consider how many layers a team, or even an individual developer, may be removed from a large IT programme. The lack of upward transparency from the individual and downward visibility from the programme management makes it nearly impossible to keep decisions aligned. This is where good management delivers tremendous value: the basic, focused structures of Agile project management – the daily stand-up, the iteration planning meeting, the retrospective, the iteration tracking report, the release plan – provide focus, structure, and discipline that facilitate maintaining alignment of individual and group goals. A salient point in the example from Super Aguri is that greater team capability makes the need to do these things that much more acute: the higher the degree of professional capability in the team, the more adjustments there can be to make, if for no other reason than the rapidity with which the highly capable will work a problem space.

“We are not a family,” Robert Lane, the CEO of Deere & Co, recently said.6 “What we are is a high performance team.” If IT is to deliver high-performance results – to be an alpha capability that contributes to alpha business results – it requires not only high-capability people but the management practices to match.

1 Teal, Thomas. “The Human Side of Management.” Harvard Business Review, April 1996. I highly recommend reading this article. He deftly summarises the gap between the impact management has and the investment made into developing management talent. It has arguably become more pronounced in the 11 years since this article first appeared.
2 Gross, Daniel. “How Xerox Failed to Copy Its Success.” Audacity, Fall 1995. Audacity, the business journal, has long since ceased publication, which is a shame: it provided excellent snippets on business decisions, both successful and unsuccessful, in the broader context of their market conditions.
3 The change in moniker from “Information Systems” to “Information Technology” was, arguably, a step in the wrong direction. The word “systems” is expansive, and extends to solutions with only a partial technology component to them – or none whatsoever. “Technology” narrows the remit, and thus the business impact, of what we now call “IT.”
4 White, Erin. “How a Company Made Everyone a Team Player” The Wall Street Journal, Monday 13 August 2007.
5 “Grand Prix Tune-Ups.” F1 Racing Magazine, August 2007. Formula 1 is perhaps the first among high-performance industries; it’s interesting to see what sounds very much like a stand-up meeting being core to race day execution – and how the structure has allowed teams to scale.
6 Brat, Ilan and Aeppel, Timothy. “Why Deere is Weeding Out Dealers Even as Farms Boom” The Wall Street Journal, Tuesday, 14 August 2007.


Sunday, July 29, 2007

Alpha Returns Require an Alpha IT Capability

Demand for IT in business continues to rise. Looking backward, over the last 10 years the IT market has absorbed the new capacity in Asia and South America, yet still we find global and national/regional IT employment is up since 2000.1 Looking forward, all indications are that demand will continue to rise. More importantly, there are very strong indicators that IT will increasingly be a strategic capability: the forecasted increase in worldwide investable assets is creating demand for new sell-side financial products;2 fact-based management is increasingly being applied to sweat assets or improve the competitiveness of operations, which in turn demands increasing amounts of data about specific businesses and processes;3 and the re-emergence of high-risk capital (the recent downturn in credit markets notwithstanding) is funding start-up companies.


This presents both a dilemma and a competitive opportunity for companies today.


IT is, fundamentally, a people business. While the systems and solutions it produces might automate tasks of the business, not to mention perform complex tasks not otherwise practical to execute manually, the production of those systems is a people-centric process. It stands to reason also that the more complex the solution, the more skilled the people behind the solution. The challenge facing an increasingly strategic IT isn’t a capacity question, but a capability question. Skills and capabilities are not commodities. It takes a great deal of individual skill to solve business problems through IT systems, specifically to model, code and construct solutions that scale, are secure, and are reliable. The highly-capable IT professional, one who has the ability to perform technical tasks as well as understand the business context, is already rare. But as IT becomes a driver of strategic imperative, these professionals will be in that much greater demand. The problem facing IT, then, is that the increase in demand for strategic business solutions with a significant IT component will outpace the arrival rate of new highly-capable IT professionals, making the highly-capable IT professional that much more scarce.


This is evident in the example verticals posited above: an increase in the development of sell-side products, or an increase in the demand for greater and more accurate data on the business (and business processes), are clear examples of companies trying to achieve returns that beat the market by making use of a significant IT component. The trick to yielding higher ROI through such strategic IT solutions is not to reduce the “investment” in the IT component. Such strategic solutions can’t be sourced based on cost-of-capacity; they need to be sourced specifically on the capability of those delivering the solution. Capacity – the time available to work on developing these solutions – will not alone deliver the solution, as emergent products and greater business insight are arrived at iteratively, and through business-IT collaboration. Looking simply at IT capacity to do such tasks is to hold skills as a constant. In a people-centric business, skills are not constant. The trick to yielding higher ROI through strategic IT solutions is to achieve “alpha” IT capability relative to the market – that is, to have an IT capability that beats the average, or “beta,” market IT capability. Specifically, sourcing an IT capability and allowing it to improve at the same rate as the overall market (beta) isn’t going to be sufficient if IT is to be a driver of above-average returns (alpha). To drive above-average market returns, that IT capability must itself be above average.


Being “above average” or “below average” is difficult to assess because there is no “IT capability index” and thus no baseline for IT in general, let alone within an industry. Consequently, any assessment of IT capability is likely to be laden with assertion. Worse, we have often allowed “results” to act as a surrogate for an assessment of IT effectiveness, but looking exclusively at results is often incomplete, and there is a high degree of latency between results data – the quality of delivered IT systems – and capability development. It is possible, though, to take an objective assessment of IT capability. We can look to a wealth of indicators – development excellence measures such as code quality and delivery transparency, staff peer reviews, customer satisfaction, and business value delivered – to create a composite of IT effectiveness. In some cases, we may need relative baselines: e.g., we measure over time against an initially assessed state of something like customer satisfaction. In other cases, we can identify strong industry baselines, such as code quality metrics we can run against the source behind open source projects such as CruiseControl, JBoss, Waffle, PicoContainer and many, many others to provide an indicator of achievable code quality.
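
As a sketch of what a relative baseline might look like in practice, the fragment below computes one crude indicator – the ratio of test code to production code – across local checkouts of your own codebase and of open source projects. The directory layout is assumed, and this is one indicator among many, not a complete assessment of quality.

```python
import os

def loc(path, exts=(".java",)):
    """Count non-blank lines in source files under a directory tree."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            if name.endswith(exts):
                with open(os.path.join(root, name), errors="ignore") as fh:
                    total += sum(1 for line in fh if line.strip())
    return total

# Hypothetical local checkouts; "src" and "test" layouts are assumed.
for project in ("our-app", "cruisecontrol", "picocontainer"):
    src = loc(os.path.join(project, "src"))
    test = loc(os.path.join(project, "test"))
    print(f"{project}: test/src ratio = {test / max(src, 1):.2f}")
```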


Gaining a sense of relative strength is important because it provides context for what it means to be high-capability, but it doesn’t define what to do. Clearly, there must be things a strategic IT organisation must do to become high-capability, and remain so over time. And this isn’t just a money question: while compensation is a factor, in an increasingly competitive market you quickly end up with a dozen positions chasing the same person, all the time.4 The high-capability IT organisation must offer more than comp if it is to be durable. It must be a “destination employer” offering both skill and domain knowledge acquisition as well as thought leadership opportunities. This requires investing in capability development: skills, specialisation, innovation, and so forth. Going back to our examples, the development of cutting-edge sell-side products (e.g., structured credit products) will require business domain fluency as well as a high degree of technology skill. Similarly, “sweating the assets” of a business to their limits requires a very low noise-to-signal ratio in business intelligence; that requires IT to be highly fluent in business process and business needs. Companies seeking alpha through IT cannot obtain this capability through beta activities, such as the acquisition of new technology skills through natural turnover and changes in the overall market capability, or introduction to the business domain through immersion in an existing codebase. Indeed, companies relying on beta mechanisms may find themselves underperforming dramatically in an increasingly competitive market for capability. Instead, to be drivers of strategic solutions and thus alpha results, capability must rise from strategic investments.


The successful development of this capability turns IT into a competitive weapon in multiple ways. The business context is obvious: e.g., new sell-side products will clearly allow a trading firm to attract investment capital. But it goes beyond this. In a market with a scarce supply of high-performance people, being the destination employer for IT professionals in that industry segment can deprive the competition of those people. Hindering a competitor from developing an alpha IT capability will undermine their ability to achieve alpha returns. This makes IT both a driver of its host firm’s returns and an agent that disrupts the competition. This clearly demarcates the difference between beta reactions, such as adjusting recruiting efforts in anticipation of IT demographic changes, and alpha actions that create those shifts through recruiting and retention activities that force competitors to react.


This does not apply to all IT organisations. Those that are strictly back-office processing are simply utilities, and can, with minimum risk, trail the market in capability development. But businesses with significant portions of alpha returns dependent on IT systems development require a similarly alpha IT capability. If they follow the utility model, firms dependent on strategic IT are going to post sub-optimal returns that will not endear them to Wall Street. If instead they build a high-capability captive IT organisation, or partner with a firm that can provide that high-capability IT for them, and build out the governance capability to oversee it, they’ll not just satisfy Wall Street, they’ll be market leaders with sustainable advantage.


1 The Department of Computing Sciences at Villanova University has published some interesting facts and links.
2 “Get Global. Get Specialized. Or Get Out.” IBM Institute for Business Value, July 2007.
3 Thurm, Scott. “Now, It’s Business By Data, but Numbers Still Can’t Tell Future.” The Wall Street Journal, 23 July 2007.
4 “The Battle for Brainpower.” The Economist, 5 October 2006.

Sunday, June 24, 2007

Strategic IT Does More than Assume Technology Risk, It Mitigates Business Risk

Risk management, particularly in IT, is still a nascent discipline. Perhaps this is because there are an overwhelming number of cultural norms that equate “risk management” with “defeatism.” To wit: “Damn the torpedoes, full speed ahead!” is intuitively appealing, offering a forward-looking focus without the encumbrance of consideration for that which might go wrong. This notion of charging ahead without regard to possible (and indeed likely) outcomes all too often passes for a leadership model. And poor leadership it is: sustainable business success is achieved when we weigh the odds, not ignore them entirely. Thus leadership and its derivative characteristics – notably innovation and responsiveness – are the result of calculated, not wanton, risk taking.

How risk is managed offers a compelling way to frame the competitiveness of different business models. A recent report by the IBM Institute for Business Value1 forecasts that Capital Markets firms engaged in risk mitigation have better growth and profit potential than firms engaged in risk assumption. According to this report, risk assumers (such as principal traders or alternative asset managers) may do a greater volume of business, but it is the risk mitigators (notably structured product providers and passive asset managers) who will have greater margins. Since Capital Markets firms are leading consumers of IT, and as IT tends to reflect its business environment, this has clear implications for IT.

Traditionally, IT has been a risk assumer. It provides guarantees of availability, performance and completeness for the delivery of identified, regimented services or functionality. This might be a data centre where hardware uptime is guaranteed to process transactions, a timesheeting capability that is available on-demand, or development of a custom application to analyse asset-backed securities. Being asked to do nothing more than assume risk for something such as availability or performance of technology is appealing from an IT perspective because it’s a space we know, or at least think we know. It plays directly to that which IT professionals want to do (e.g., play with shiny new toys) without taking us out of our comfort zone (the technology space versus the business space). Worrying about the technology is familiar territory; worrying about the underlying business need driving the technology is not.

IT can only provide business risk mitigation if it is partnering with the business for the delivery of an end-to-end business solution. If it is not – if IT maintains the arms-length “you do the business, we do the technology” relationship - IT assumes and underwrites technology risk, nothing more. The trouble is, this doesn’t provide that much business value. Technology itself doesn’t solve business problems, so the notion of managing technology – be it to optimise cost, availability, performance or completeness – is no different from optimising, say, the company's power consumption.

This defines IT’s business relevance. Being a provider of utilities is not a strategic role. Energy offers a compelling parallel: while it is important for a business to have electricity, most businesses don’t think of the power company as a strategic partner, they think of the power company as just “being there.” The concern is far more utilitarian (e.g., are we turning off the lights at night) than it is strategic (e.g., nobody measures whether we’re maximising equity trades per kilowatt hour). Worse still, businesses don’t think of their utilities whatsoever: they’re aware of them only when they’re not there. Awareness is negative, at best an escalation, at worst a disruption to the utility’s cash flow.

In any industry, risk assumption is the purview of utility providers. Software development capacity, IT infrastructure, and software as a service are all examples of risk assumption. They are useful services to be sure, but of low business value. They compete on cost (through, for example, labour arbitrage or volume discounts) as opposed to differentiating on value (that is, as drivers of innovation or invention). For IT as a whole to have business value it must not be viewed as a technology risk assumer but as a mitigator of business risk.

The latter role has historically been granted de facto, because business systems are substantially technology systems; hence IT has had a direct (if unforeseen and unofficial) hand in business process re-engineering, partner management, and compliance certification. In the future, defaulting into this role is not a given: while business solutions have a significant technology component, they are not solved entirely by technology. The actual business solution is likely to involve increased regulatory compliance, complex process changes, constant training, ongoing supplier and customer management and integration, and so forth, all of which are increasingly complex due to the multiplicity of parties, tighter value chain coupling, and geographic distribution, amongst other factors. Clearly, the technology component is but one part of an overall business solution.

Here lies the point-of-pivot that defines IT as strategic or tactical. If IT subordinates itself to a role of technology sourcer, abdicating responsibility for the success of the end-to-end business solution, so shall the business cast IT as nothing more than an interchangeable component of the solution to the business problem. Conversely, when the business and IT both recognise that the technology piece is the make-or-break component of the overall business solution (that is, the technology bit is recognised as the greatest single controllable factor in determining success), IT has strategic footing. It achieves this footing because it mitigates risk of the business solution in business terms, not because it assumes risk for services that the business can competitively source from any number of providers.

Being a mitigator of business risk does in fact require that IT have robust risk management of its own capabilities. That is, internally, IT effectively and competently insures delivery. Here, we run directly into the fact that risk management is not a particularly strong suit of IT, and for the most part is primitively practiced in the IT space. Certainly, it is not simple: executive IT management must be able to analyse risk factors of individuals and teams (e.g., staff turnover, knowledge continuity, individual skill, interpersonal factors). It must do so across a broad spectrum of roles (quality engineer, developer, DBA, network engineer) with regard to a wide variety of factors (e.g., are we a destination employer / destination account for our suppliers, are we giving people stretch roles with proper mentoring or simply tossing them in the deep end to fill out an org chart?). These people factors are critical, but are only one source of risk: IT must also be able to manage service availability, requirements accuracy, security and performance across software and infrastructure. Furthermore, this risk spectrum must be presented in a portfolio manner that assesses risk factors across in-flight and potential fulfillment alternatives, both at this moment and forecast over time. A trivial task it is not.

Regardless of the complexity, technology risk assumption does not provide business value. In Herzberg’s terms, keeping one’s house in order does not provide satisfaction to one’s partners, even if the absence of order creates genuine dissatisfaction with one’s partners. Successful assumption of risk – maintaining uptime within tolerance, or being “on time and on budget” – is nothing more than basic hygiene. We have allowed these to masquerade as providing “business value.” They don’t. The absence of hygiene – reliability, performance, continuity, completeness – relegates IT to a tactical role because it gives the appearance that IT is incapable of keeping its house in order. At the same time, the presence of hygiene – making deliveries on schedule, or meeting conditions of SLAs – does not entitle IT to a strategic role; it merely contains dissatisfaction.

To become a strategic capability, IT must offer motivators to the business. To accomplish this, IT must focus specifically on activities that mitigate business risk. The core opportunity lies with people. IT is still very much a people-based business; that is, code doesn’t write itself, projects don’t manage themselves, network topologies don’t materialise, solution fitness doesn’t just happen, etc. A key differentiator in what makes IT strategic versus tactical is the extent to which people are leveraged to create business impact: the developer who creates a clever solution, the analyst who connects a complex series of support issues and expressed business requirements, the project manager who brings business solutions to fruition. This requires an outward view that includes domain knowledge and business intimacy on the part of IT professionals. A greater, outward-looking context core to each person’s day-to-day is how IT provides satisfaction to the business. The absence of this – reverting to the “you do the business, we’ll do the technology” approach – relegates IT to a utility service, at best a department that doesn’t let the business down, at worst something that does. Conversely, an outward-looking, business-engaged capability that is focused on the business problems at hand is what distinguishes a strategic from a tactical IT.

An efficient, risk-assuming IT capability is a superb utility that contains cost. It is well regarded by the business until a less expensive alternative presents itself, at which time that same IT capability becomes an under-achiever, even a nuisance.2 By comparison, an effective, expansive and business-risk-mitigating IT is a superb driver of business value, in touch with the environment such that it anticipates change and adjusts accordingly. In so doing, IT is not in the minimising business – minimising downtime, minimising cost, minimising catastrophic failures – but in the maximising business – specifically, maximising business returns. A risk-assuming IT defines tactical IT; a risk-mitigating IT defines strategic IT.



1IBM's Institute for Business Value offers a number of interesting research papers. On the whole they have much more of a consumer-oriented view of IT and offer different market perspectives on the role of IT. Look for another Capital Markets paper in July 2007.
2Michael Porter made the case in Competitive Strategy that competition on price isn't sustainable; we should expect nothing different for an IT utility.

Monday, May 28, 2007

Just as Capital Has a Static Cost of Change, So Must IT

The global economy is awash in cash. We’ve experienced unprecedented profitability growth for the past 16+ quarters, the cost of capital is low, investment risk is more easily distributed, and companies find themselves with strong cash balances. Increasingly, though, we're seeing companies being taken private and their cash taken out by new ownership, or companies buying back their own stock.


This has implications for IT, as it competes for this same investment dollar on two fronts. First, if the executive decision is to concentrate equity or engage in M&A, it inherently means that these types of investments are expected to provide greater return than alternatives, notably investments in operations. IT projects, being operations-centric, are losing out. Second, when companies are taken private, it’s often with the expectation that they’ll be flipped in a short period of time; to maximise return, operations will be streamlined before the company is taken public again. This means private capital will scrutinise the business impact of not only new projects, but existing spend.


To win out, IT has to change the way it communicates. It must think and report more in terms common to capital, less in terms common to operations.


This means that IT has to show its business impact in a portfolio manner. For every project, there must be some indication of business impact, be it reduction of risk, reduction of the cost of operations, revenue generation, and so forth. This is not a natural activity for IT because, for the most part, IT solutions don’t themselves provide business return; they do so only as part of larger business initiatives. As a result, IT often abdicates this responsibility to a project’s business sponsor. As a steward of spend and a beneficiary of budgets, IT cannot afford to be ignorant of, or arrogant toward, the larger context in which its solutions exist.


IT’s effectiveness depends on its ability to maximise the use of people and resources. This means taking decisions across multiple initiatives, which can bring IT into conflict with the rest of the business, especially with sponsors of those initiatives that are deprioritised. Business issues are not universally understood by IT’s business partners. For example, Accounting and Finance people may recognise the need for systems that reduce restatement risk, whereas operations people may see systems and processes designed to reduce restatement risk as contributing only to operational inefficiency. Communicating the business imperative, and then framing people and resource decisions in a business-priority context, makes IT decisions less contentious. It also makes IT more of a partner, and less of a tool.


This is not to say that business impact and return offer a universal language for business projects. Not every dollar of business value is the same: an hour of a person’s work reduced is not the same as the reduced energy consumption of fewer servers, which is not the same as a reduction in restatement risk, which is not the same as new revenue. However, always framing the project in its business context makes both needs and decisions unambiguous, and gives us the ability to maximise return on technology investment.


Because the business environment changes, so do returns. As a result, assessing business impact is an ongoing activity, not a one-off done at the beginning of a project. Over the life of any project it must be able to show incremental returns. The further out that returns are projected, the more speculative they are, if for no other reason than the changes in the business environment. Capital is impatient, and can find faster returns that provide greater liquidity than long-term programmes. If the business itself is providing quarterly returns, so must any IT project.
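
One simple way to express this impatience is to discount projected returns more heavily the further out they sit. A sketch, with an illustrative (not prescriptive) discount rate:

```python
def risk_adjusted_value(quarterly_returns, discount_per_quarter=0.05):
    """Discount each projected quarterly return; the further out it sits,
    the more speculative it is (the rate here is purely illustrative)."""
    return sum(ret / (1 + discount_per_quarter) ** q
               for q, ret in enumerate(quarterly_returns, start=1))

# Two projects promise the same nominal $400,000:
print(round(risk_adjusted_value([50_000] * 8)))    # over 8 quarters: ~323,161
print(round(risk_adjusted_value([100_000] * 4)))   # over 4 quarters: ~354,595
```

On a risk-adjusted basis the faster-returning project is worth roughly ten percent more, even though the nominal totals are identical.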


Operating and measuring an IT project in the context of its business impact is a fundamental shift for IT. The purpose of continuing to spend on a project is to achieve a business return; we don’t continue to spend simply because we think we’ll continue to be “on time and on budget.” That point is irrelevant if what we’re doing – on time or otherwise – has zero or even negative business impact. Measuring to business impact also allows us to move away from a focus on sunk costs. Sunk costs are irrelevant to capital, but all too often are front-and-centre for operations-centric decision-making: e.g., the criterion for keeping a project going is often “we’re $x into it already.” This inertia is, of course, the classic “throwing good money after bad.” We forget that it’s only worth taking the next steps if the benefits outweigh the remaining costs.
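As a minimal sketch of that decision rule – all figures below are hypothetical – the go/no-go test excludes sunk costs entirely:

    def should_continue(remaining_cost, expected_benefit):
        """Fund the next increment only if the benefit still ahead of us
        exceeds the cost still ahead of us. Note what is *not* a
        parameter: money already spent. Sunk costs have no bearing."""
        return expected_benefit > remaining_cost

    # A project that is $900k in with $300k to go is worth finishing only if
    # the remaining benefit exceeds $300k -- the $900k is gone either way.
    print(should_continue(remaining_cost=300_000, expected_benefit=450_000))  # True
    print(should_continue(remaining_cost=300_000, expected_benefit=150_000))  # False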


Managing to business impact requires perspective and visibility outside the IT realm. The actual business impact made must be followed up and assessed, and all stakeholders – especially business sponsors – must be invested in the outcome. That might mean a budget reduction with the successful delivery of a solution, or a bonus for greater revenue achieved. Whatever the case, these expected returns must factor into the budgets and W2s of the people involved. This orients everybody to the business goals, not to micro-optimisation of their particular area (which may be orthogonal to the business goal.)


To execute on this, the quality of IT estimating must also be very high. When the business does a buy-back or engages in M&A, it has a clear understanding of the cost of that investment, an expectation of returns, and the risks to the investment. IT projects must be able to express, to the greatest extent possible, not only expected costs but the risks to those costs. Over time, as with any business, a project must also be able to explain changes in its operating plan – e.g., changing requirements and how those requirements will meet the business goal, or missed estimates and their impact on the business return model. This creates accountability for estimating, and allows a project’s business case to be assessed in light of historical estimate risk. It also improves the degree of confidence that the next steps to be taken on a project will cost as expected, which, in turn, improves our portfolio management capability.
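One way to express not only the expected cost but the risk to that cost is the well-known PERT three-point estimate. A sketch, not a prescription, with hypothetical figures:

    def pert_estimate(optimistic, likely, pessimistic):
        """Return (expected cost, standard deviation) per the PERT formulas."""
        expected = (optimistic + 4 * likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return expected, std_dev

    expected, risk = pert_estimate(optimistic=80_000, likely=120_000, pessimistic=240_000)
    print(f"expected cost ${expected:,.0f}, +/- ${risk:,.0f}")
    # Tracking estimates against actuals over time yields the historical
    # estimate risk against which a business case can be assessed.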


Estimation must also go hand-in-hand with different sourcing models. Very often, projects assume the best operating model for the next round of tasks is the operating model used to date. We often end up with the business truism: “when the only tool you have is a hammer, every job looks like a nail.” Estimates that do not consider alternative sourcing models – different providers, COTS solutions, open source components, etc. – can entrap the business and undermine IT effectiveness. Continuous sourcing is an IT governance capability that exists at all levels of IT activity: organisational (self-sourcing, vendor/suppliers), solutions (COTS, custom), and components (open-source, licensed technologies, internally developed IP.) The capability to take sourcing decisions in a fluid and granular manner maximises return on technology investment.


In this approach, we can also add a dimension to our portfolio management capability that attracts the business’s high-risk capital. Every business has any number of potential breakaway solutions in front of it, not all of which can be pursued given limited time and capital, not to mention the need to do the things that run the business. In addition to offering potential windfall benefits to the business, these are most often the things that provide the most interesting opportunities and outlets for IT people – necessary if an IT organisation is to be competitive for talent as a “destination employer” for the best and brightest. They are impossible to charter and action in an IT department managing expectations to maintain business as usual. It becomes easier to start up, re-invest in and unwind positions in breakaway investment opportunities – and the underlying IT capability that delivers them – if they’re framed in a balanced technology portfolio.


By doing these things, we are better able to communicate in a language more relevant to the business: that of Capital. The behaviour of IT itself is also more consistent with Capital, with a static, as opposed to an exponential, cost of change. Such an IT department is one that can compete for business investment.

Saturday, April 28, 2007

Patterns and Anti-Patterns in Project Portfolio Management

A critical component of IT governance is Project Portfolio Management (PPM). Effective portfolio management involves more than just collecting status reports of different projects at specific dates; it also involves projecting the delivery date, scope and cost that each project is trending towards and the impact of that projected outcome on the overall business. Without this latter capability, we may have project reporting, but we are not able to take true portfolio decisions – such as reallocating investment from one project to another, or cancelling underperforming projects – to maximise return on technology investment.

As with IT governance as a whole, many PPM efforts betray the fact that the discipline is still very much in its infancy. We see around us a number of practices executed in the name of PPM that are in effect PPM anti-patterns.

Anti-Pattern: Managing by the Fuel Gauge

Traditional or Waterfall project planning defines a definitive path by which milestones of vastly different natures (requirements documentation, technical frameworks, user interface design, core functionality, testing activity and so forth) are completed in an environment (team composition, team capability and requirements) projected to be static over the life of a project. This definitive path, defined to the granular “task” level, creates the phase, task and subtask structure within the GL account code(s) that tracks spend against the project budget. When wed to the IT department’s time-tracking system – which tracks effort to the subtask level – it is not uncommon for people to draw the mistaken conclusion that total cost expended against the budgeted amount represents the percent complete of the overall project.

This is akin to “navigating the car by the fuel gauge” – the amount of time spent in each task is assumed to be an indicator of delivery progress because the plan itself is held out to be fact. Unfortunately, the environment is not static, and the different nature of project milestones makes the project prediction highly suspect. The car could be heading toward a completely different destination than originally envisioned, and in fact could be going in circles. This granular level of data does not translate into meaningful PPM information.

Anti-Pattern: Navigating through the Rear or Passenger Windows

Another approach to portfolio management is to survey every project at regular intervals to ascertain where it stands relative to its original, deterministic plan and, for those “off course,” to report what will be necessary to restore the project to plan.

In its simplest form, this is akin to “navigating the car out the rear window.” Surveying projects to ascertain their overall percent complete is a backward-looking approach that is easily – and not infrequently – gamed (e.g., how many projects quickly report “90% complete” and stay there for weeks on end?). Its slightly more complex form – communicating the gap between current and projected status and reporting a detailed set of deliveries a team hopes to make by a particular date – is akin to “navigating the car out the passenger window.” It assumes that the original project plan is the sole determinant of business value, and the basis of control of projects.

These are anti-patterns because they miss the point of PPM. The objective of portfolio management is to maximise return for the business, not maintain projects in a state of “on time and on budget.” Those time, scope and budget objectives, which might have been set months or even years ago, lose relevance with changing business conditions: market entrants and substitutes come and go, regulation changes frequently, and the sponsoring business itself changes constantly through M&A. These factors – not “on time and on budget” to an internally defined set of objectives – are what determine a maximum return on technology investment. In addition, this approach substitutes “sunk costs” for “percent of value delivered.” Sunk costs are irrelevant to business decisions; it is the remaining cost of completion relative to value achieved that matters.

Anti-Pattern: Breaking Every Law to Reach our Destination

An unintended consequence of PPM is the distortion of organisational priority. A culture of results can quickly morph into a culture of “results at any cost.” This, in turn, may mean that in the process of traveling to a destination, we commit multiple moving violations, burn excess fuel, and pollute excessively simply so that we appear to have “met our numbers.”

This is not typically considered part of PPM as much as it’s really a question of our overall IT governance. Still, it’s relevant to PPM: knowing that our investments are performing as they purport to be performing is important protection for our returns. To wit: through the 1990s, Parmalat and Enron may have been strong performers in an equity portfolio, but those gains were obliterated once results were shown to have been misrepresented all along. It must be remembered that project portfolio management relies on good governance and, in fact, exists as a component of it. Reaching our destination might make an initial delivery appear to be successful, but any returns achieved for reaching the destination might be completely obliterated by the cost of remediating the brownfield we created. Maximising return on technology investment is concerned with the total asset life, not just achieving a goal at a specific point in time.

Characteristics of a Pattern

We don’t yet have – and should not expect to have – “GPS-style” navigation systems for individual projects that can feed our PPM. Because we cannot predict the future, any “map” is a fallacy. But we do have the tools by which we can "navigate through the front windshield," and do so without leaving destruction in our wake. We can do this if:


  • We have a fact-based manner by which to assess whether a project can achieve its business goals in a time and at a cost acceptable to its business objectives.

  • We have detailed visibility into the fact that energies are being directed toward high-priority work.

  • We have current, meaningful indicators of the completeness of the work being done – that we are working in such a way that we are maximising objectives under the circumstances, and that work declared to be “complete” is a matter of fact, and not opinion.

Agile project management is uniquely capable of bringing this about. Inclusive, independent statements of scope allow the path of system delivery to adjust to changes in priority, changes in capacity, changes in the understanding of requirements, and the experience of the team. Instead of relying on a prescriptive path, we have unadulterated transparency that exposes to everybody whether the best decisions relative to the business objectives are being taken given current information at the moment of decision.

These constructs provide the foundation for a fact-based, forward-looking PPM capability, because they enable informed “what if” scenario building across a portfolio of projects. Using these practices, we can develop meaningful, time-sensitive models, founded in fact, that allow us to forecast the impact of changes to team capacity (e.g., through turnover or reassignment), priority (through changing business environment) or scope (through expansion or better understanding) on our total portfolio. This isn't “project tracking” masquerading as “portfolio management;” it is the foundation of true portfolio management that maximises return on technology investment.
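As a minimal sketch of such “what if” modelling – the project names, backlogs and velocities below are all hypothetical – we can forecast each project’s remaining iterations from its backlog and observed velocity, then re-run the forecast under a proposed capacity change before committing to it:

    # Forecast remaining iterations per project from backlog size (story
    # points) and observed velocity (points per iteration), then model a
    # reallocation of people as a velocity change and compare forecasts.
    import math

    def iterations_remaining(backlog_points, velocity):
        """Project remaining iterations from backlog and observed velocity."""
        return math.ceil(backlog_points / velocity)

    portfolio = {
        "billing-replatform": {"backlog": 120, "velocity": 10},
        "client-portal":      {"backlog": 45,  "velocity": 9},
    }

    for name, p in portfolio.items():
        print(name, iterations_remaining(p["backlog"], p["velocity"]), "iterations")

    # What if we move a pair from the portal team to billing? Model it as
    # a velocity change and compare the forecasts before deciding.
    portfolio["billing-replatform"]["velocity"] += 2
    portfolio["client-portal"]["velocity"] -= 2
    for name, p in portfolio.items():
        print(name, iterations_remaining(p["backlog"], p["velocity"]), "iterations")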

Thursday, March 29, 2007

Agile under Sarbanes-Oxley

The business cycle of most firms is cash-driven: work is performed, invoiced at completion, and collected on negotiated payment terms. Obviously, cash flow is important as it affects our ability to do the things that run the business, like meeting payroll and paying expenses. Cash flow isn’t revenue, however. To recognize work as revenue, it must be delivered and unambiguously accepted by the client.

This is a priority, particularly for publicly traded firms. As the stock price usually trades at a multiple of income (or, in the case of many Nasdaq companies, a multiple of revenue in lieu of earnings), revenue recognition is critical to Wall Street. For engagements that span many months, this can mean that revenue recognition is deferred for many reporting quarters. We can end up in a situation where cash flow is consistent and healthy, but net income is variable and frequently weak.

Amongst other things, Sarbanes-Oxley (a.k.a. Sarbox, or SOX) establishes compliance guidelines for publicly traded companies so that revenue isn’t gamed. The intent is to define clear guidelines for accounting for what operations have and have not delivered. As simple as that may seem, the pressure in the executive offices to recognize revenue is quite real, and the software industry in particular is rife with examples of companies gaming revenue numbers with incomplete deliveries.

The rules governing revenue recognition under Sarbox are explicitly defined. The governance mechanism is a “proof of completion” certificate: a simple document that serves as the client’s acknowledgement that specific functionality was delivered to satisfaction by the supplying vendor. This document must be received in the reporting period in which the revenue is to be recognized; e.g., if we’re going to recognize revenue in Q3, the proof of completion must be received by the supplier in Q3.

The capability for operations to deliver what they forecast will go a long way to letting the air out of the results bag. Of course, it’s not so easy. The ability of ops to deliver isn’t purely an internal function: factors outside a company’s control, such as customer staff turnover or a change in a customer’s business direction, can impair execution of even the best-laid plans. No matter how strong the internal operational performance, external factors will significantly affect results. Still, the ability to forecast and respond to this change in a timely fashion will go a long way toward meeting revenue targets and goals, and will reduce the risk posed by change in the business environment.

Our traditional ways of working in these environments are often based in hope, and unnecessarily produce a lot of uncertainty and inconsistency of our own making. We set our forecasts based on individual “quarterly completion commitments” and “business feel” derived from what we see in the sales pipeline. As we approach quarter-end, we swarm disproportionate numbers of people on specific projects to drive to what amounts to an internal definition of “complete,” only to then plead with the customer to accept. The pursuit of a mythical number given at the beginning of a quarter, in the vain hope of making it a reality through the fortunate alignment of contracts, capacity and capability, is a primitive practice. It ultimately results in a mad scramble at quarter-end to complete deliveries, introducing operational risk: if a delivery proves to be more complex than originally thought, or if people are not available, or if some customer deliveries are prioritised at the cost of others, quarterly ops revenue contribution is at the mercy of things substantially out of our control. Without mitigating this risk – or indeed providing visibility into it in the first place – we increase the probability of a disappointing quarter.

In fact, these practices stifle operational maturity. In this model, operations are at best a hero-based delivery group that relies on a few talented individuals making Herculean efforts four times a year (that is, at quarter-end). At worst, they’re an under-achieving function that requires a high degree of tactical supervision. In either scenario, operations are reactive, forever executing to a mythical, primitive tactical model, never rising to become strategic contributors to the business.

Because they bring operations into alignment with regulatory requirements in a non-burdensome manner, Agile management practices are especially valuable in Sarbox or similarly regulated business environments. There are several Agile practices we can bring to bear in this environment (a sketch of how they combine follows the list):


  • Instead of defining large, monolithic packages of delivery, we can decompose client deliverables into independent, uniform statements of business functionality, or Agile Stories. Each of these Stories has an associated revenue amount – specifically, its cost to the customer. This gives us a granular unit of work with economic properties.

  • Each of these requirements can have an associated Proof of Completion document. This provides tangible affirmation that client acceptance criteria have been met.

  • We can define the fiscal quarter as an Agile “release” divided into 13 iterations of 1 week each. This gives us time-boxes around which we can construct a release plan.

  • We can forward plan our capacity by taking a survey of known downtime (vacations, holidays, etc.).
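As a minimal sketch of how these combine – the story names, figures and acceptance dates below are hypothetical – we can schedule revenue-bearing stories into the 13 time-boxes and recognise revenue only in the iteration where the signed Proof of Completion (POC) arrives:

    # Model the quarter as an Agile release of 13 one-week iterations.
    # Each story carries a revenue amount; revenue is recognised in the
    # iteration in which its POC is received (None = not yet received).
    stories = [
        ("customer onboarding flow", 60_000, 3, 3),   # planned wk 3, POC wk 3
        ("statement export",         25_000, 5, 6),   # planned wk 5, POC wk 6
        ("audit trail reporting",    40_000, 9, None) # planned wk 9, no POC yet
    ]

    ITERATIONS = 13  # one fiscal quarter as 13 week-long time-boxes

    recognised = [0] * (ITERATIONS + 1)
    for story, revenue, planned, accepted in stories:
        if accepted is not None:
            recognised[accepted] += revenue

    # Cumulative recognised revenue gives the "revenue burn-up" for the quarter.
    running_total = 0
    for week in range(1, ITERATIONS + 1):
        running_total += recognised[week]
        print(f"week {week:2d}: recognised to date ${running_total:,}")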


By executing to these, we realise significant financial and operational benefits.

  • We accelerate revenue recognition. Granular, federated expressions of business requirement can be developed, delivered and accepted incrementally by the customer. This will yield faster revenue recognition than highly-coupled requirements made in one delivery.

  • We reduce the risk of not being able to recognize revenue. Incremental customer acceptance reduces the risk to revenue recognition inherent in a single large delivery. For example, suppose a sea change within a customer threatens project revenue. If we have granular delivery and acceptance, we can still recognize the revenue for deliveries made to date. If we don’t, we lose revenue from the entire project, making both the revenue and the effort to date a business write-off.

  • We have more accurate forecasts of revenue capacity and utilization. By planning capacity, and taking into account our load factor, we can assess with greater accuracy what our remaining quarterly capacity looks like. Expressing this in revenue terms gives us a realistic assessment of our maximum revenue capacity. From this we can take investment decisions – such as increasing capacity through hiring – with greater confidence.

  • We have more accurate revenue reporting. Each POC received creates a revenue recognition event in that specific iteration. This gives us a “revenue burn-up chart” for the quarter. In tracking actuals, we can show our revenue recognition actual versus our burn-up. This means revenue forecasting and reporting is based more in fact than in hope.

  • We have more accurate revenue forecasting. By forming a release plan that includes the complete cycle of fulfillment stages for each customer requirement – analysis, development, testing, delivery and POC/acceptance – we have a clear picture of when we expect revenue to be realized. As things change over the course of the quarter – as stories are added or removed, or as capacity changes – the release plan is modified, and with it the impact on our revenue projection is immediately reflected.

  • We have transparency of operations that enables better operational decisions. Following these practices we have a clear picture of completed, scheduled, open and delayed tasks, an assessment of remaining capacity, and visibility into a uniform expression of our backlog (i.e., a collection of requirements expressed as stories). With this we have visibility into delayed or unactioned tasks. We can also take better scheduling and operating decisions that maximize revenue contribution for the quarter.

  • We have transparency of operations that reduces surprises. The release plan tells us and our customers when we expect specific events to take place, allowing us to schedule around events that might disrupt delivery and acceptance. For example, we may expect to make a delivery in the last week of the quarter, but if the person with signature authority on the POC is unavailable, we’ll not recognize the revenue. Foreknowledge of this allows us to plan and adjust accordingly.

  • Acceptance Criteria are part of everything we do. The Proof of Completion document builds acceptance criteria into everything that we do. We think of completion in terms of delivery, not development. This makes everybody a driver of revenue.


In sum, Agile practices professionalize operations management. By being complete in definition, being fact-based, providing operational transparency and exposing and mitigating risk consistently throughout a reporting period, they align execution with governance. This results in non-burdensome compliance that actually improves the discipline – and therefore the results – of business operations.

Sunday, February 18, 2007

Corporate Mental Health

In the course of implementing Agile practices, an organisation is likely to come face to face with deficiencies in both IT and business operations. Shortcomings in specifications, in quality, in team capability, in technologies all quickly come to the fore. Agile practices allow us to not only identify these shortcomings, but to call them out in a fact-based manner.1 Still, how people in an organisation respond when presented with these will determine how successfully it adopts Agile practices.

While in theory, decision-making in a business context is coldly rational, decisions made by people in business usually are not. Because they impact people, business decisions – especially where performance or results are concerned – can be highly emotional. With regard to adopting Agile practices, this creates an important consideration. While they lend themselves to tremendous transparency, that transparency can unintentionally create discomfort and embarrassment. One person's "liberation" is another person's "fear."

The reaction to this increased transparency is very much an organisational characteristic. In the November-December 2006 edition of World Business, James Bellini writes:

  • Businesses behave like people… the nature of this behaviour gives us vital clues as to the condition of a company’s underlying psychological state and in so doing helps us identify those that will succeed and those doomed to fail. It also offers the means by which those confronting failure because of an ailing, dysfunctional psyche can be given a new direction, towards revival and profitability.2

Mr. Bellini makes the point that organisations can fail because they fool themselves into believing something to be true, no matter the facts. There is an important alert here for those wanting to inject Agile practices: organisational self-delusion can be an obstacle to injecting fact-based management. Clearly, we’ll struggle to inject facts if facts don’t have currency. More constructively, there is clearly a “healthy” approach to presenting facts, as opposed to an “unhealthy” approach of confronting with facts.

The business sponsor and implementers of an Agile “programme of change” must be able to honestly assess the following:

  • How well do people understand and accept the problems the organisation faces today? Do they see, for example, that a deficiency in scalability materially impacts bottom-line profitability? Do they see long development cycle times as interfering with customer responsiveness and, therefore, new business? Do they look for features in competitive products and recognize them as a disadvantage in the market? Or is the prevailing attitude that the company has its customers, the competition has its customers, and it will all just sort itself out in the end?
  • To what extent do people tolerate and encourage risk, and to what extent can the organisation accept fast failures? The initial adoption of Agile involves a lot of experimentation, trial-and-error, learning-by-doing, and failure. Is the organisation risk-averse with a “shoot the messenger” culture when things don’t quite turn out as planned? Or is there a recognition of the benefit of continuous learning and “fast failures”?3 Specifically, is there an expectation that people are in "stretch roles," learning and honing their capabilities as individuals? Is there infrastructure – training, mentoring, coaching – to support this?
  • Is there a culture of responsibility or a culture of blame? Exposing problems will make people react, one way or another. For example, many companies are burdened with highly manual processes. Staff changes through resignation or promotion disrupt these processes. Do people accept the natural turbulence of transition and look for constructive ways to create more durable solutions? Or do they simply pass around blame for work not done?
  • Finally, with what discipline do we take project sponsorship decisions? That is, how disciplined is our project portfolio management? Can we accept a project’s actual cost, time and trajectory in the context of a business case, and take necessary action? Do we accept reporting the actual state of a project as an expression of mastery of our profession? Or are go/no-go decisions substantially rooted in non-business factors, the team's credibility intertwined with a delivery commitment, and hope the fundamental strategy for keeping a project on course?

Further complicating matters, each of these is bi-directional. The manner in which we expose problems – confrontational or progressive – will contribute to the success of the programme of change. That is, there must be a constructive, as opposed to a destructive, language for change. The responsibility for creating this positive nomenclature lies with the instigator of change.

Collectively, this sounds "soft," and perhaps it is. But at the end of the day, management is still “getting things done through people.” Not money, not technology, but people. We understand intuitively that these things matter, whether we can measure them or not. “Soft” or otherwise, they are relevant to us as managers. And references abound.4

The first example comes from Ford Motor's recently initiated turnaround. A recent story in the Wall Street Journal pointed out that Alan Mulally, CEO of Ford Motor, openly applauded a division head for owning up to poor performance in a senior staff meeting. Mr. Mulally’s reasoning? You can’t solve problems if you don’t acknowledge them, and not enough people were acknowledging them when Mr. Mulally arrived. Something to think about in this example, too: if this is what's happening at the most senior levels of the company, what makes us think it will be any different within a project team?

The second example is Scuderia Ferrari F1. In an interview with F1 Racing Magazine, Ross Brawn, the recently retired technical chief at Ferrari, described team decision-making as:

  • "Every last detail is critical… You cannot be weak in the tangibles, like the design of the car, and you cannot be weak in the intangibles, like your … interpretation of the rules. Whatever you felt you could achieve you’ve then got ot go out and find another 10 per cent…. We all knew that we had to do it and we knew the other guys were doing it, so that if you were not doing it you would be letting the side down. It was great to be a part of that mind-set; a group where we were all giving absolutely everything."5

Albeit extreme, there is a lot to be gleaned from this as an instantiation of organisational psyche. The cultural norms, the expectations of the peer group, while soft, are unambiguous: push yourself to the limit to push the platform (e.g., the car) to the limit. Leaving the potential for competitive advantage on the table due to lack of effort was unacceptable.

Clearly, Scuderia Ferrari is an organisation that deals in facts, not excuses or justification. Somebody will ask as a matter of fact, and not accusation: did we perform every combination of performance and reliability test? Are we in compliance with the sporting rules of the FIA? You don’t want to be the person called out for not answering those questions to their fullest; the organisational psyche guides you to fully pursue these answers prior to facing the questions.

The bottom line: organisational “psyche” is a factor in our ability to change and respond. Ultimately, it impacts our fundamental ability to compete. Mr. Brawn: “It’s very rare in modern F1 to come up with a dramatic new concept or idea that will give you a step change in performance. So you cannot give anything away.”6 This is not accidental, but systematic, part of the landscape. It might initially read as a defensive tactic, but it’s very much an offensive strategy. Thoroughness means we don’t leave anything on the table; aggressiveness means we find and maximize innovation. That makes an organisation able to accept the need for change, and to implement that change. And that makes it more competitive.

1I am indebted to David Pattinson on this point. In the course of an engagement some years ago, he very specifically made the point that we hadn’t established a basis of fact for an issue, and we risked escalating a problem to a client “as a matter of opinion and not as a matter of fact.”
2Bellini, James. “Disguises and denial.” World Business, November-December 2006. There’s a lot of information in his article; it’s worth re-reading a few times to get the full extent of his messages.
3It’s worth mentioning that Gartner has called out “fast failures” as a principle for effective innovation: “Fail Fast for More Effective Innovation.” Gartner Research, 1 March 2006.
4Oddly, and hopefully coincidentally, both are automotive.
5Allen, James. “Ciao for Now, Ross.” F1 Racing, January 2007. There’s a full quote from Mr. Brawn that truly captures the essence of Ferrari’s F1 team. Go buy the magazine.
6Ibid.

Saturday, February 10, 2007

An "Innovation Maturity Model?"

Our innovation model now has multiple practices within each of three dimensions. Collectively, it looks like this:


Agility
  • Requirements
  • Responsiveness
  • Collaboration
  • Delivery Assurance
  • Testing
  • Build
  • Source Code Management
  • Collective Code Ownership
  • Architecture

Community
  • Tools & Infrastructure
  • Community Management
  • Participation
  • Bar Lifting
  • Risk Tolerance

Governance
  • Continuous Sourcing
  • Metrics
  • Compliance
  • Investment
  • Commercials

We can make this model still more finely grained by identifying the ways in which each practice area is performed. For example, we can define different ways to fulfill the Governance practice of “commercials and contracts.” By sequencing those methods of execution by the degree to which each enables innovation, we can create a simple innovation “maturity model.”

This is, clearly, going to be a bit involved, but it gives us useful information. The extent to which the different practices are performed will be a strong indicator of our organisation’s aptitude for innovation. In addition, having a “maturity flight” for each will highlight strengths and deficiencies in how we operate now. This, in turn, allows us to focus our efforts and investments specifically where it will make us more competitive.

In constructing progressions of practice in rank order by the extent to which they facilitate innovation, we start with a simple taxonomy of “innovation maturity.”

  • Level 3: Practices that are fully aligned with and engender innovation
  • Level 2: Practices that engender but do not maximize innovation
  • Level 1: Practices that are neutral or marginally enable innovation
  • Level -1: Practices that inhibit consistent innovation

Having a “Level -1” is useful because there is value in calling out practices that inhibit innovation. A Level 3 might be thought of as the maximum degree of facilitation. Between the two are varying degrees to which things are done that engender innovation but are suboptimal in a purely “innovation” context. This is not to say that Level 3 should be the target: there may be economic or operational realities why an organisation peaks at Level 2 or 1. Nor is it to say that there are no levels beyond 3, but for purposes of constructing an initial model, it is best to keep it simple. Bear in mind that this is simply a model that provides us a structured way to analyse what we’re doing now and suggest what we should be doing instead.

With a taxonomy and a series of practices, we should be able to construct a “maturity flight” for each practice, sequencing activities by the degree to which each engenders innovation.

For example, the “maturity flight” of the Compliance dimension of Governance might look like this (a sketch of the Level 3 gatekeeper follows the list):

  • Level 3: Compliance rules are automated and implemented as gatekeeper events in daily activity (e.g., through build) and feed a compliance dashboard
  • Level 2: Key success factors for solution completeness (security, performance, reliability) implemented as a battery of repeatable test suites
  • Level 1: Multi-disciplinary compliance working group formed with delivery leaders to articulate “solution completeness” in plain terms
  • Level -1: After-the-fact audit and review intended to find errors as opposed to guide solutions; rules set by non-delivery staff; rules exist in documentation
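A minimal sketch of what that Level 3 practice could look like in code – the rule shown and the file conventions are hypothetical examples – is a rule-set wired into the build as a gatekeeper, with results feeding the compliance dashboard:

    # Each compliance rule is a function that inspects the workspace; any
    # failure breaks the build via a non-zero exit code.
    import os
    import sys

    def no_secrets_in_config(root):
        """Rule: no hard-coded passwords in configuration files."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith((".properties", ".yml")):
                    with open(os.path.join(dirpath, name), errors="ignore") as fh:
                        if "password=" in fh.read():
                            return False
        return True

    RULES = [no_secrets_in_config]  # additional rules register here

    def main(root="."):
        failures = [rule.__name__ for rule in RULES if not rule(root)]
        for name in failures:
            print("COMPLIANCE FAIL:", name)  # results also feed the dashboard
        return 1 if failures else 0  # non-zero exit fails the build

    if __name__ == "__main__":
        sys.exit(main())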

Similarly, within Community, the Participation flight might look like this:

  • Level 3: Programmes established to recognize and enable participation (e.g., a 10% scheme, internally funded conferences)
  • Level 2: Communities of Practice form; practice membership recognized by HR as a soft “matrix” organisation; soft budget allocated
  • Level 1: Collaboration tools and space passively advertised; demand-based “at-will” participation
  • Level -1: Teams work in isolation; collaboration is a function of individual networking

And so on, for all practices.

By doing this for every practice area, it becomes possible for us to:

  • Identify specifically the deficiencies and obstacles to our ability to innovate,
  • Map the affinity that we have for innovation, and
  • Track our progress as we improve our ability to innovate.

Of course, with as many as 19 practice areas identified, this is going to produce a lot of data. As we have an index and a categorisation for each, we can consolidate our scores and get an overall assessment of our Agility, Community and Governance practices.
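As a minimal sketch of that consolidation – the scores below are hypothetical – the roll-up averages each dimension’s practice levels into a composite:

    # Each of the 19 practice areas is scored on the -1 / 1 / 2 / 3 scale
    # defined above, then averaged by dimension for a composite view.
    from collections import defaultdict

    assessments = {
        ("Governance", "Compliance"):     2,
        ("Governance", "Metrics"):        1,
        ("Community",  "Participation"):  3,
        ("Community",  "Risk Tolerance"): -1,
        ("Agility",    "Testing"):        2,
        # ...and so on for the remaining practice areas
    }

    by_dimension = defaultdict(list)
    for (dimension, practice), level in assessments.items():
        by_dimension[dimension].append(level)

    for dimension, levels in by_dimension.items():
        composite = sum(levels) / len(levels)
        print(f"{dimension}: composite {composite:+.1f} across {len(levels)} practices")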

In so doing, we have a composite picture of where we are relative to an “ideal state.” Again, we have to use this judiciously: this is not intended as a certification or a mandate. It is an indicator that, properly applied, should allow us to better ask and answer: to what extent does our organisation really do the things that lend themselves to innovation? We cannot expect innovation from an environment that is not conducive to it. This degree of insight goes a long way toward avoiding a futile “innovation mandate.”

Friday, February 09, 2007

Aligning for Innovation

It’s worth looking more closely at each of the factors that enable innovation within an organisation: Agility, Community and Governance. Each of these means something specific. If we identify practices within each that affect our aptitude for innovation, we have something more concrete than “IT Governance, Community and Agility contribute to our ability to innovate.”

By taking a structured approach to critically examine how work is performed, we get unvarnished insight into how we do things today. In so doing, we are more likely to align organisational objectives – efficiency through re-use, creativity from collaboration – with day-to-day work. This gives us an indication of the extent to which we have an aptitude for innovation. As a result, a focus on innovation is less of a hope-based initiative, and more of a fact-based exercise.

That said, if we are to look critically at the things we do to facilitate – or, for that matter, stifle – innovation, we first must understand in greater detail the elements of Agility, Community and Governance.

Practices that Create Community

An innovation culture requires a Community that extends beyond any individual's immediate team or department. The dimensions of forming Community include the following:

1. Tools and infrastructure create an intuitive platform for robust exchange of rich content in structured ways

2. Participation and individual contribution to this community is recognized, facilitated and rewarded through HR and corporate policy

3. Community management links participants, bridges groups and manages content

4. Global resource management gives people across the organisation participation in, and therefore visibility into, different projects and opportunities

5. The organisation develops a culture that values and rewards continuous learning and fast failures

Practices that Create a Static Cost of Change

We also recognise that the cost of change – that is, the extent to which we are able to accommodate change with minimum disruption – is a contributing factor to our ability to produce and consume ideas and code. Traditional ways of working have an exponential cost of change: design decisions, taken in the early stages of a project, have long horizons. It becomes increasingly expensive to change course as we get further into a project, as code is highly coupled to this set of decisions.

Agile practices have been shown to yield a more static cost of change. Decision horizons are shorter because decisions – requirements, architecture, code – are much more independent. Agile practices include the following:

1. Requirements are decoupled, with needs captured as independent, complete and valuable statements of business functionality

2. Continuous planning, estimation and execution happen in fixed iterations of 1 to 3 weeks, with the measure of time serving as the project “currency”

3. There is a deployable, functional deliverable at the conclusion of each iteration; integrity of completion is stressed over feature pile-on

4. Teams establish a cadence – daily, weekly, monthly, quarterly – of frequent, low-ceremony, highly-focused communication events

5. A continuous cycle of requirements analysis, development, integration and test prevents a team from mortgaging its future, borrowing blindly against a future time-box to accommodate change

6. The business partner is engaged in business terms, not IT terms, and is involved in day-to-day decisions and activities

7. Simplicity of design and well-defined services are preferred over big up-front design or coarsely- or finely-grained functions

8. Robust testing – in the form of programmer-written unit tests and QA-written functional tests – in conjunction with continuous integration, makes “development complete” less a matter of opinion, and more a matter of fact (a minimal example follows the list)

9. Teams report status and forecast completion in unambiguous terms to all project stakeholders
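As a minimal illustration of practice 8 – the discount function under test is hypothetical – a programmer-written unit test turns “development complete” into something the build verifies rather than something a status report asserts:

    # A unit test makes completion a matter of fact: continuous
    # integration runs it on every commit.
    import unittest

    def volume_discount(order_total):
        """Apply a 5% discount to orders of $10,000 or more."""
        return order_total * 0.95 if order_total >= 10_000 else order_total

    class VolumeDiscountTest(unittest.TestCase):
        def test_discount_applied_at_threshold(self):
            self.assertEqual(volume_discount(10_000), 9_500)

        def test_no_discount_below_threshold(self):
            self.assertEqual(volume_discount(9_999), 9_999)

    if __name__ == "__main__":
        unittest.main()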

For purposes of critically analysing our day-to-day practices we will look at these a bit differently, but for now this list allows us to understand in context how we deliver – and how we would like to deliver – IT solutions.

Practices of IT Governance

Finally, innovations must not be a matter of opinion, but a matter of fact: do they deliver value for money, and are they delivered to a complete set of expectations? To be able to ask and answer those questions in the context of innovation, we must look at the following:

1. There needs to be a continuous sourcing model so that teams can exploit and contribute to emergent and rapidly evolving code and ideas in the Community

2. The economic impact of the Innovation Network on individual projects and the organisation is assessed and publicised with indicators, metrics and measures

3. There must be a discipline of compliance – legal, technical and economic – covering all IP released to the Community; this discipline must be non-invasive

4. The development of solutions from raw, unfinished ideas in the Community is facilitated by an investment framework

5. Traditional ways of working to the letter of a contract and suffering under the weight of rigid change controls must give way to commercial contracts that facilitate constant assessment of project variables – quality, time, resources and scope – in pursuit of meeting evolving business needs

This is not to say that these practices are mandatory for innovation, and we don’t look to them as elements of compliance or certification of an “innovative enterprise.” We do, however, recognise that these are things that systematically and synergistically incite innovation: if we have a strong Community, a high degree of responsiveness and mature Governance, we will be more inclined to innovate than if we have a weak community, long-term decision lock-in, and limited means by which to oversee IT results and activity.