I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Saturday, June 30, 2012

Resiliency, Not Predictability

A couple of months back, I wrote that it is important to shorten the results cycle - the time in which teams accomplish business-meaningful things - before shortening the reporting cycle - the time in which we report progress. Doing the opposite generates more heat than light: it increases the magnification with which we scrutinize effort while doing nothing to improve the frequency or fidelity of results.

But both the results cycle and reporting cycle are important to resiliency in software delivery.

A lot of things in business are managed for predictability. Predictable cash flows lower our financing costs. Predictable operations free the executive team to act as transformational as opposed to transactional leaders. Predictability builds confidence that our managers know what they're doing.

The emphasis on predictability hasn't paid off too well for IT. If the Standish (and similar) IT project success rate numbers are anything to go by, IT is at best predictable at underperforming.

When we set out to produce software, we are exposed to significant internal risks: that our designs may not be functionally accurate, that our task definitions may be incomplete, that our estimates may be informationally deficient, that we may lack the necessary skills and expertise within the team to develop the software, and so forth. We are also subject to external risks. These include micro forces, such as access to knowledge of upstream and downstream systems, and macro forces, such as technology changes that obsolete our investments (e.g., long-range desktop platform investments make little sense when the user population shifts to mobile) and labor market pressures that compel people to quit.

We can't prevent these risks from becoming real. Estimates are informed guesses and will always be information deficient. Two similarly skilled people will solve technical problems in vastly different ways owing to differences in their experience. We have to negotiate availability of experts outside our team. People change jobs all the time.

Any one of these can impair our ability to deliver. More than one of these can cause our project to crater. Unfortunately, we're left to self-insure against these risks, to limit their impact and make the project whole should they occur. We can't self-insure through predictability: because these risks are unpredictable, we cannot be prepared for each and every eventuality. The pursuit of predictability is a denial of these risks. We need to be resilient to risks, not predictable in the face of them.

This brings us back to the subject of result and reporting cycles: the shorter they are, the more resilient we are to internal and external risks.

Anchoring execution in results makes us significantly less vulnerable to overstating progress. Not completely immune, of course: even with a short results cycle we will discover new scope, which may mean we have more work to do than we previously thought. But scope is an outcome, and outcomes are transparent and negotiable. By comparison, effort-based execution is not: "we're not done coding despite being at 125% of projected effort" might be a factual statement, but it is opaque and not negotiable.

In addition, a short results cycle makes a short reporting cycle more information rich. That, in turn, makes for more effective management.

But to be resilient, we need to look beyond delivery execution and management. A steady diet of reliable data about the software we're developing, and how delivery of that software is progressing, allows a steering committee to continuously and fluidly perform its governance obligations: to set expectations, invest the team with the authority to act, and validate results.

When project and asset data are based on results rather than effort, it is much easier for a steering committee to fulfill its duty of validation. It also helps with the other two obligations. We can scrutinize which parts of the business case are taking greater investment than we originally thought, and whether they remain prudent to pursue. We can also see whether we are taking technical shortcuts in the pursuit of results, and assess the long-term ramifications of those shortcuts near the time they are made. We are therefore forewarned that an investment is in jeopardy long before financial costs and technical debt rise, and can change or amplify our expectations of the team as needed. This, in turn, gives us information to act on the last governance responsibility - investing the team with the authority to act - which we do by changing team structure, composition and even leadership, and by working with the investment committee to maintain the viability of the investment itself.

A short results cycle, reporting cycle and governance cycle make any single investment more resilient. They also enable a short investment cycle, which makes our entire portfolio more robust. From an execution perspective, we can more easily move in and out of projects. Supported by good investment flow (new and existing opportunities, continuously reassessed for timeliness and impact), hedging (to alleviate risks of exposure to a handful of "positions" or investments), and continuous governance assessing - and correcting - investment performance, we can make constant adjustments across our entire portfolio. This makes us resilient not only at an individual investment level, but at a portfolio level, to micro and macro risks alike.

IT has historically ignored, abstracted or discounted risks to delivery. Resiliency is the antidote to the starry-eyed optimism at the core of IT's chronic underperformance. That makes IT a better business partner.

Thursday, May 31, 2012

Rules versus Results

Assessments are based, not on whether the decisions made are any good, but on whether they were made in accordance with what is deemed to be an appropriate process. We assume, not only that good procedure will give rise to good outcome, but also that the ability to articulate the procedure is the key to good outcomes.
-- John Kay, writing in the Financial Times
A common error in both the management and governance of IT is an over-reliance on rules, process and "best practices" that purport to give us a means for modeling and controlling delivery. We see this in different ways.
  • Project managers construct elaborately detailed project plans that show business needs decomposed into tasks which are to be performed by specialists in a highly orchestrated effort. And detailed they are, often down to the specific task that will be performed by the specific "resource" (never "person", it's always "resource") on a specific day. This forms a script for delivery. The tech investor without a tech background cannot effectively challenge a model like this, and might even take great comfort in the level of detail and specificity. They have this broken down into precise detail; they must really know what they're doing.
  • Companies invest in "process improvement" initiatives, with a focus on establishing a common methodology. The belief is that if we have a methodology - if we write unit tests and have our requirements written up as Stories and if we have a continuous build - we'll get better results. Methodology becomes an implicit guarantor of success. If we become Agile, we'll get more done with smaller teams.
  • "Best practices" are held out as the basis of IT governance. Libraries of explicit "best practices" prescriptively define how we should organize and separate responsibilities, manage data centers, contract with vendors and suppliers, and construct solutions. If there is a widely accepted codification of "best practices", and we can show that we're in compliance with those practices, then there's nothing else to be done: you can't get better than "best". We'll get people certified in ITIL - then we know we'll be compliant with best practices.



* * *

To see how stultifying such behaviour can be, imagine the application of this emphasis on process over outcome in fields other than politics or business. Suppose we were to insist that Wayne Rooney explain his movements on the field; ask Mozart to justify his choice of key, or Van Gogh to explain his selection of colours. We would end up with very articulate footballers, composers and painters, and very bad football, music and art.
-- John Kay
It is tempting to try to derive universal cause-and-effect truths from the performance of specific teams. Because this team writes unit tests, they have higher code quality. Or, Because we ask users to review security policies and procedures at the time their network credentials expire, we have fewer security issues. Does unit testing lead to higher quality? Does our insistence on policy result in tighter security? It may be that we have highly skilled developers who are collaborative by nature, writing relatively simple code that is not subject to much change. It may be that our network has never been the target of a cyber attack. A "rule" that dictates that we write unit tests or that we flog people with security policy will be of variable impact depending on the context.
There are no such things as best practices. There are such things as practices that teams have found useful within their own contexts.
@mikeroberts
Suppose one team has low automated unit test coverage but high quality, while another has high coverage but low quality. A rule mandating high unit test coverage is no longer indicative of outcome. The good idea of writing unit tests is compromised by examples that contradict the lofty expectations that justified the trouble of writing them.
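To make the contrast concrete, here is a hypothetical sketch - the function and test names are illustrative, not from any real project - of how a test suite can earn a perfect coverage score while telling us nothing about quality:

```python
import unittest

def apply_discount(price, rate):
    """Apply a percentage discount and round to cents."""
    return round(price * (1 - rate), 2)

class CoverageOnlyTest(unittest.TestCase):
    def test_runs_without_error(self):
        # Executes every line of apply_discount, so a coverage tool
        # reports 100% -- but this asserts nothing about the result.
        apply_discount(100.0, 0.2)

class MeaningfulTest(unittest.TestCase):
    def test_discount_applied(self):
        # Same coverage figure, but this would actually catch a
        # broken discount calculation.
        self.assertEqual(apply_discount(100.0, 0.2), 80.0)

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0.0), 59.99)
```

Both suites produce the same coverage number; only the second is evidence of quality. A rule that measures coverage cannot tell them apart; a principle of "appropriate" coverage, judged in context, can.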

It isn't just the rules that are compromised: rule makers and rule enforcers are as well. When a team compliant with rules posts a disappointing result, or a team ignorant of rules outperforms expectation, rule makers and rule enforcers are left to make ceteris paribus arguments from an unprovable counterfactual: had we not been following the rule that we always write unit tests, quality would have been much lower. Maybe it would. Maybe it would not.

Rules are not a solid foundation for management or governance, particularly in technology. For one thing, they lag innovation, rather than lead it: prescriptive IT security guidelines circa 2007 were ignorant of the risks posed by social media. Since technology is a business of invention and innovation, rules will always be out of date.

For another, rules create the appearance of control while actually subverting it. Rules appear to be explicit done-or-not-done statements. But rules shift the burden of proof of compliance to the regulator (who must show the regulated isn't following the rule) and away from the regulated (who gets to choose the most favourable interpretation of whichever rules apply). Tax law that explicitly defines how income or sales transactions are to be taxed encourages the individual to seek out the most favourable tax treatment: e.g., people within easy driving distance of a lower-cost sales tax jurisdiction will go there to make large purchases. The individual's tax obligation is minimized, but at a cost to a society starved for public revenues. In IT terms, the regulated (that is, the individual responsible for getting something done) holds the power to determine whether a task is completed or a governance box can be ticked: I was told to complete this technical task; I have completed it to a point where nobody can tell me I am not done with it. It is up to the regulator (e.g., the business as the consumer of a deliverable, or a steering committee member) to prove otherwise. The individual's effort is minimized, but at a cost to the greater good of the software under development, which is starved for attention.

When the totality of the completed technical tasks does not produce the functionality we wanted, or the checkboxes are ticked but governance fails to recognize an existential problem, our results fall short of our expectations despite the fact that we are in full compliance with the rules.



* * *

Instead, we claim to believe that there is an objective method by which all right thinking people would, with sufficient diligence and intelligence, arrive at a good answer to any complex problem. But there is no such method.
-- John Kay
Just because rules-based mechanisms are futile does not mean that all our decisions - project plans, delivery process, governance - are best made ad hoc.

Clearly, not every IT situation is unique, and we can learn and apply across contexts. But we do ourselves a disservice when we hold out those lessons to be dogma. Better that we recognize that rigorous execution and good discipline are good lifestyle decisions, which lead to good hygiene. Good hygiene prevents bad outcomes more than it enables good ones. Not smoking is one way to prevent lung cancer. Not smoking is not, however, the sole determinant of good respiratory health.

Principles give us an important intermediary between prescriptive rules and flying by the seat of our pants. Principles tend to be less verbose than rules, which makes them more accessible to the people subject to them. They encourage behaviours that reinforce good lifestyle decisions. And although there is more leeway in interpreting principles than rules, it is the regulator, not the regulated, who has the power to interpret results. Compliance with a principle is a question of intent, which is determined by the regulator. If our tax law is a principle that "everybody must pay their fair share", it is the regulator who decides what constitutes "fair share" and does so in a broader, societal context. Similarly, a business person determines whether or not software satisfies acceptance criteria, while a steering committee member assesses competency of people in delivery roles. Management and governance are not wantonly misled by a developer deciding that they have completed a task, or an auditor confirming that VMO staffing criteria are met.

"We always write unit tests" is a rule. "We have appropriate test coverage" is a principle. "Appropriate" is in the eye of the beholder. Coming to some conclusion of what is "appropriate" requires context of the problem at hand. Do we run appreciable risks by not having unit test coverage? Are we gilding the lily by piling on unit test after unit test?

We are better served if we manage and govern by result, not rule. Compliance with a rule is at best an indicator; it is not a determinant. Ultimately, we want people producing outstanding results on the field, not to be dogmatic in how they go about it. Rules, processes, and "best practices" - whether in the form of an explicit task order that tells people what they are to do day in and day out or a collection of habits that people follow - do not compensate for a fundamental lack of capability.

Adjudication of principles requires a higher degree of capability and situational awareness by everybody in a team. But then we would expect the team with the best players to triumph over a team with adequate players in an adequate system.

Monday, April 30, 2012

Shorten the Results Cycle, not the Reporting Cycle

A big software development project collapses just weeks before it is expected to be released to QA. According to the project leadership team, they're dogged by integration problems and as a result, the software is nowhere close to being done. It's going to take far more time and far more money than expected, but until these integration problems get sorted out, it isn't clear how much more time and how much more money.

The executive responsible for it is going to get directly involved. He has a background in development and he knows many of the people and suppliers on the team, but he doesn't really know what they're doing or how they're going about doing it.

The team situation is complicated. There are the consultants from a product company, consultants from two other outsourcing firms, several independent contractors, plus a few of our own employees. QA is outsourced to a specialist firm and used as a shared service. All these people are spread out across the globe, and even where people are in the same city they may work on different floors, or in different buildings, or simply from home. Teams are organized by technology (e.g., services developers) or activity (analysts, developers, QA). Project status data is fragmented: we have a project plan, organized by use cases we want to assemble and release for staged QA. We have the developer tasks that we're tracking. We have the QA scripts that need to be written and executed. We have test data that we need to source for both developers and QA. And we have a defect list. Lots of people, lots of places, lots of work, lots of tracking, but not a lot to show for it.

The executive's first action will be to ask each sub-team to provide more status reports more frequently, reporting on the finest details of what people are doing and how long it will be before they're done: new task tracking, daily (perhaps twice-daily) status updates, and weekly progress reports to stakeholders.

* * *

The common reaction to every failure in financial markets has been to demand more disclosure and greater transparency. And, viewed in the abstract, who could dispute the merits of disclosure and transparency? You can never have too much information.

But you can.

So wrote John Kay in the Financial Times recently. His words are applicable to IT as well.

Gathering more data more frequently about an existing system merely serves to tell us what we already know. A lot of people are coding, but nothing is getting done because there are many dependencies in the code and people are working on inter-dependent parts at different times. A lot of use cases are waiting to be tested but lack data to test them. A lot of functional tests have been executed but they're neither passed nor failed because the people executing the tests have questions about them. The defect backlog is steadily growing in all directions, reflecting problems with the software, with the environments, with the data, with the requirements, or just mysterious outcomes nobody has the time to fully research. When a project collapses, it isn't because of a project data problem: all of these things are in plain sight.

If getting more data more frequently isn't going to do any good, why do the arriving rescuers always ask for it? Because they hope that the breakthrough lies with adjusting and fine-tuning the existing team and organization. Changing an existing system is a lot less work - not to mention a lot more politically palatable and a lot less risky - than overhauling it.

* * *

There are costs to providing information, which is why these obligations have proved unpopular with companies. There are also costs entailed in processing information – even if the only processing you undertake is to recognise that you want to discard it.

More reporting more often adds burden to our project managers, who must now spend more time in meetings and cobbling together more reports. Instead of having the people close to the situation look for ways to make things better, the people close to the situation are generating reports in the hope that people removed from the situation will make it better. It yields no constructive insight into the problems at hand. It simply reinforces the obvious and leads to management edicts that we need to "reduce the number of defects" and "get more tests to pass."

This reduces line leaders to messengers between the team (status) and the executive (demands). As decision-making authority becomes concentrated in fewer hands, project leadership relies less on feedback than on brute force.

* * *

[M]ost of the rustles in the undergrowth were the wind rather than the meat.

Excessive data can lead to misguided action, false optimizations and unintended outcomes. Suppose the executive bangs the drum about too many defects being unresolved for too long a period of time. The easiest thing for people to do isn't to try to fix the defects, but to deflect responsibility for them, which they can do by reassigning those in their queue to somebody else. Some years ago, I was asked by a client to assess a distressed project that among other things had over 1,000 critical and high priority defects. It came as no surprise to learn that every last one of them was assigned to a person outside the core project team. The public hand wringing about defects resulted in behaviours that deferred, rather than accelerated, things getting fixed.

* * *

The underlying profitability of most financial activities can be judged only over a complete business cycle – or longer. The damage done by presenting spurious profit figures, derived by marking assets to delusionary market values or computed from hypothetical model-based valuations, has been literally incalculable.

Traditional IT project plans are models. Unfortunately, we put tremendous faith in our models. Models are limited, and frequently flawed. Financial institutions placed faith in Value at Risk models, which intentionally excluded low-probability but high-impact events, to their (and to the world's) detriment. Our IT models are similarly limited. Most IT project plans don't include impact analysis of reasonably probable events: the loss of key people, mistakes in requirements, or changes in business priority.

In traditional IT, work is separated into different phases of activity: everything gets analyzed, then everything gets coded, then everything gets tested, then it gets deployed. And only then do we find out if everything worked or not. It takes us a long, long time - and no small amount of effort - to get any kind of results across the finish line. That, in turn, increases the likelihood of disaster. Because effort makes a poor proxy for results, interim progress reports are the equivalent of marking to model. Asking project managers for more status data more frequently burns the candle from the wrong end: reducing the project status data cycle does nothing if we don't shorten the results cycle.

* * *

It is time companies and their investors got together to identify information [...] relevant to their joint needs.

We put tremendous faith in our project plans. Conventional thinking is that if every resource on this project performs their tasks, the project will be delivered on time. The going-in assumption of a rescue is that we have a deficiency in execution, not in organization. But if we are faced with the need to rescue a troubled project, we must see things through the lens of results and not effort. Nobody sets out to buy effort from business analysts, programmers and QA analysts. They do set out to buy software that requires participation by those people. This is ultimately how any investment is measured. This and this alone - not effort, not tasks - must be the coin of the realm.

Because projects fail for systemic reasons more than for execution reasons, project rescues call for systemic change. In the case of rescuing a traditionally managed IT project, this means reorganizing away from skill silos into teams concentrating on delivering business-specific needs, working on small business requirements as opposed to technology tasks, triggering a build with each code commit and immediately deploying it to an environment where it is tested. If we do these things, we don't need loads of data or hyper-frequent updates to tell us whether we're getting stuff done or not. Day in and day out, either we get new software with new features that we can run, or we don't.
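The build-per-commit loop can be sketched as a simple ordered pipeline. This is a minimal illustration, not a prescription: the build, test, and deploy commands here are hypothetical placeholders for whatever your toolchain actually runs.

```python
import subprocess

# Hypothetical commands; substitute your own build/test/deploy steps.
PIPELINE = [
    ["make", "build"],
    ["make", "test"],
    ["make", "deploy-to-test-env"],
]

def run_on_commit(steps=PIPELINE):
    """Run each step in order, stopping at the first failure so a
    broken build never reaches the test environment. Returns True
    only if every step succeeds -- i.e., we got runnable software."""
    for step in steps:
        if subprocess.run(step).returncode != 0:
            return False
    return True
```

Hooked to every commit by a CI server, the pipeline's binary outcome is itself the status report: either we produced deployable, tested software, or we did not.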

It is irrational to organize a software business such that it optimizes effort but produces results infrequently. Sadly, this happens in IT all the time. If we believe we can't deliver meaningful software in a short time frame - measured in days - we've incorrectly defined the problem space. Anything longer than that is a leap of faith. All the status reports in the world won't make it anything but.

Monday, March 26, 2012

When the Unstoppable Force of Growth Meets the Immovable Object of Control

You create a new software application. It grows, rapidly. And it keeps growing. You add tools. You add people. You add roles and structure. You split the codebase into different technical components. You divide teams. You add environments. You make rules for merging and deploying.

One day, you look round and realize you have 20 times the staff but deliver only a fraction of what you used to when you had only a handful of people. Demand is still growing, but you can't keep up.

On top of it, everybody is quarreling.

There are the old-timers, the people who were there at the beginning, who know the code backwards & forwards, who were there for the early triumphs. They're still the people who get called in when a deployment is in trouble, when there's a mystery problem, when it's already well past the 11th hour. Which, of course, means they're called in for every deployment. They want to carry on with the "trust me" ways. "Trust me" when I tell you that I'll get it done, just leave me to it. "Trust me" that if I have to, I'll go to heroic lengths to make my deadline. "Trust me" that we'll pull it off - we always have. This is how we've always done it. This is what works.

Then there are the people hired in the past year to run QA and create a PMO. They want control. "I want estimates, and task orders, and a project plan. I want specifications and approvals. I want to maximize the time programmers spend programming and testers spend testing. I want the cheapest resources I can get. I want test scripts. I want documentation. I want process." This is how we did it at my last firm. This is what works.

Then it happens again. Another botched deployment. Several days of downtime. Angry customers. Day after day of crisis calls and coordinated recovery efforts. Too much drama makes management nervous. We can't go on like this. This doesn't work.

But even management is divided. Some of the managers have been around since the early days: We operate in a fast & furious environment, this comes with the territory. Some of the managers are new: You can't run a business of this size by the seat of your pants. The rhetoric heats up and escalates. Neither side convinces the other. Disagreements become arguments become accusations. "Cowboys". "Control freaks". Impasse. Stalemate. Nothing changes.

The bickering continues, until the moment when the decision is made for everybody. Another deployment failure. This one in spectacular style: very public, and very severe, and very embarrassing, and very bad.

Time for new leadership. Call a search firm. Hire some hotshot from a firm like the one we want to be, one that grew and scaled. Give him a mandate. Have him report directly to the President.

The hired-in management hopes this new leader will bring deliverance from the chaos. The old-timers hope that "how he did it at his last firm" is "how we've always done it here".

Whatever the case, it is now entirely out of their hands.

* * *

I've seen this same pattern in a lot of growth businesses, both captive tech and tech start-ups: the market is still growing, but operations have become sclerotic and performance erratic.

By the time it gets to this point - lots of people, little getting done, low confidence, open bickering - the overriding mission has already started to change from innovating in the business through technology to The Website Must Not Go Down.

This mission change is driven from the top. Leadership feels the pain of operational failure more acutely as they come to see the business as an established competitor rather than a plucky start-up. Whether a start-up with investor money, or a captive IT project that has prominence within a large corporate, leaders are held to the expectation that operations will be predictable. This is how they are measured, so this is how they will manage.

Caution rules the day. We deploy less often and with more ceremony. We err on the side of not making a mistake. The fear of a service interruption causes organizational seizure. The price of innovation is subconsciously deemed too high.

We didn't used to be like this.

* * *

Let's dissect the situation.

On the plus side, you still have a core of people who are fluent in the code because they were among the principal authors. You've lost some key people over the years, but you still have people with in-depth knowledge of the whats and the whys of the code. And, the hero culture means they are personally invested in success: this is their code, this is personal. You also have hired in new people - a lot of new people - some of whom can become fluent in the product, the customers, and the business over time.

Most everybody will have a job on the line, and many senior people are still "close to code". There won't be much time for luxury positions, people in jobs off the line focused on things like process and group standards.

Strange as it may seem, another plus is that you operate in a fast-paced business environment. This is counter-intuitive: the environment seems to be the source of the problem. But it is your greatest asset. The critical success criteria are not costs but growth. Riding that growth will depend on innovation and time-to-market more than predictability and control.

Then there are the minuses.

You are beholden to the knowledge of the people who form your core group, and that knowledge exists exclusively in their heads. All those people you added to scale your capacity have given you a bloated team; if costs aren't a concern now they will be soon. Worse, you're not getting value from those new hires. Many are net negative contributors. With a bit of time and attention, some can become net positive, but the rest - maybe 30%, maybe 80% - are just plain bad hires. This happens when people are hired for capacity as opposed to capability.

You've added new roles, new structure, and new formality in an attempt to gain control. That's given you a more complex business. It also creates mission confusion: as much as people are trying their level best, they're adding as much confusion and delay as clarity and certainty, because the structure is at odds with results.

Your core team of "heroes" have had a lot of freedom and independence for a long time. They will generally resist any change you try to make that they perceive will curtail that freedom.

People may be close to code, but if you've split the code into technical silos, your people will be pretty far removed from how and why your customers use the software.

Many of these minuses are by-products of the responses to growth: hire more people, add structure, divide and conquer the problem. But the fundamental hero culture that is resistant to any change is a hold-over from the early, free-wheeling days.

If the business is still growing, the heroes should have a case for remaining aggressive and getting stuff done. But priorities change when business leaders think they've got something - market share, outside investors - to lose. And the credibility of the hero culture erodes with every production snafu, every minute of unscheduled downtime, every unforced error.

* * *

The cowboy approach puts stability at risk. Control will stifle growth. So what can you do? It seems you are forced to choose between "responsibly stable" and "recklessly aggressive".

You are not. You must unwind and rebuild.

Fundamentally, things in this organization still get done in an ad-hoc way. Layers of control, scale, and structure have been grafted onto this. They are a mismatch. We know this because all those people and processes are shunted aside when stuff absolutely needs to get done, when a new release absolutely needs to get deployed.

Unwind

Here are things we can do to unwind these layers.

Furlough the net negative people. Having net negative contributors isn't good for anybody: not for the business, not for the net positive contributors, not for the people who are net negative. Frustration and disappointment are lousy characteristics of operations and careers.

Institute a policy of "do no harm". Introduce greater rigor in how you work - in how you analyze, code, build and test. Publicly expose quality statistics. Every new line of code, every new requirement, every new test should be written to a higher standard of simplicity, clarity and ease of maintenance. Agile is pretty good for this.

Practices aren't enough. You need to instill value systems and social norms that reinforce them. Agile is pretty good for this, too. If you haven't done so in a while, re-read the Agile manifesto. It offers a core set of values. And a policy of "do no harm" is a good starting point for a value system.

These things give us a foundation of excellence. They reduce the noise and distractions, and make quality central to, not a potential byproduct of, what we do.

Rebuild

The unwinding work strips out the misfit layers and lets us improve our fundamentals. But that isn't enough. We also have some rebuilding to do.

Organize people by how your customers work with you, not how the technology works together. It doesn't make sense to organize a business orthogonally to the way it makes money; organize by product or by customer-activity model instead.

Hire new people who are not only smart and talented but specifically have the completion and collaboration genes in their DNA. Then, pair. Then rotate pairs. Then pair across disciplines - BAs with Developers, Developers with QAs, UX with QAs. Do not underestimate how difficult it will be for people to pair. People will quit as a result of this. But it is a highly effective way to unwind knowledge silos. Unit tests are helpful, and Stories with acceptance criteria are helpful, but nothing reduces proprietary knowledge as effectively as people actively working together on a specific solution.

Advance the state of your practice as far down the path of Continuous Delivery as you can. Make many small deliveries (Agile Stories are fantastic for enabling this). Commit code frequently. Build continuously. Automate everything. Deploy many times a day.

Results

This will leave you more resilient to downside risks (less susceptibility to catastrophic events) and able to capitalize on upside opportunities (quickly satisfy needs). Stability and innovation are no longer trade-offs.

You can start to rebuild while you are still unwinding. Just know why you are doing what you are doing. Reorganizing without behaviour change is just going to add layers and confusion. Similarly, introducing better practices will make your organization less bad, but won't make it world class.

Do not underestimate the work and commitment this requires. This is a major overhaul. It requires changing people and changing mind-sets, a lot of communication and reassurance, a suspension of disbelief that this will work, and tenacity during the trough of despair when it looks like things are getting worse, not better.

Most of all, remember that the most critical acts of leadership are the repeated commitments not to a vision, but to the people responsible for executing it. They'll make mistakes. They'll make plenty of mistakes. Make it safe to make mistakes. Do not hold people to an expectation of perfection, but to an expectation that they act, iterate, and learn.

Monday, February 27, 2012

Utilities, Strategic Investments, and the CIO

The Financial Times recently ran analysis and guest op-eds that sought to explain value in and from IT. One went as far as to challenge whether the corporate CIO has a future. Each is a new take on an old theme, echoing one part of the contradiction that has riddled every business with a captive technology department: we want to minimize how much we spend on IT, and we want IT to be a source of innovation.

In one camp are those arguing that IT has become largely irrelevant. Personal technology such as spreadsheets and smartphones empowers increasingly tech-savvy knowledge workers. The rise of renting over owning, such as outsourcing, IaaS, PaaS and SaaS, has commoditized IT services. Most IT dollars are spent on business as usual: maintenance and upgrades represent 70% or more of the annual IT budget. IT, the argument goes, is less a source of value and more a cost of doing business.

In the other camp are those arguing that IT remains a source of value, just as it has always been. The advent of mobile and social media allow firms to interact more directly, more frequently and more intimately with customers than ever. The rise of Big Data - the ability to store and analyze large volumes of structured and unstructured, internal and external data - promises to let companies react more nimbly than ever before. The advent of cloud computing untethers customers, employees and even algorithms from captive ecosystems.

There is merit in both arguments, but only so far as they go.

E-mail and ERP are not sources of competitive advantage. Nor is cloud computing. They are utilities that enable people to conduct business. These services are no different from tap water or electricity. A megabyte of cloud-based disk storage is no different from a kilowatt of electricity. A business is best served by minimizing the amount it spends on the consumption of each. It is disingenuous to ascribe "value" to these: businesses don't measure value or return on their electricity or tap water. Nor should they on a technology utility.

At the same time, firms invest in themselves through technology. Fashion magazines are launching electronic retail sites. Airlines are pursuing new revenue streams with captive in-flight technology. Apple is now in the greeting card business, Google in travel. Although at some point each makes use of utility services such as cloud computing and ERP systems, these are strategic competitive investments into the business. Treating them as utilities relegates them to being costs, starving them for investment and suppressing the innovative punch they should pack.

Both arguments are just the latest incarnation of the financial paradox posed by IT at least as far back as the 1980's: should corporate Information Systems departments (as they were called then) be a profit center or a cost center? As the FT articles make clear, that debate rages on.

What they all missed is the change that is already taking place in corporate technology leadership today.

More and more, we're seeing corporate eCommerce chiefs who are "digital asset investors", responsible for the digital strategy and assets of a business and for a portfolio of technology investments made through software and digital media. The proto-eCommerce chief is business-fluent, financially aware and technology-savvy, concerned with organizational learning and adaptability, with the confidence to fail fast. Their success is measured in revenue and yield. They may also be responsible for a P&L.

This fell to the CIO during the tech bubble, when IT was going to "reinvent the business". Firms gorged on technology that, in the end, provided zero, and often negative, returns. This came to an abrupt end with the ensuing recession, and with it went the luster of business leadership for the CIO. But in the decade since, firms have discovered ways to make money from technology investments through advertising, merchandising, subscriptions and licensing. The margins on those activities make investments in digital assets attractive. Technology has subsequently re-emerged as a business leadership role, but equally (if not more) heavily weighted in business and finance than technology.

The CIO, meanwhile, is becoming a "digital platform provider". He or she is responsible for negotiating with different suppliers to obtain core operating services that meet service and performance agreements (availability, performance, response and resolution times, and the like), have suitable features for business operations, are highly secure, and are obtained at the minimum price possible. The proto-CIO is a strong technologist with vendor management and negotiating skills, with a steady-as-she-goes disposition. The CIO's success is measured in terms of "levels of dissatisfaction" - the absence of delays, drag and downtime - more than it is measured in levels of satisfaction.

No matter how ubiquitous utility services become, and how tech savvy the workforce becomes, it is naive to think that responsibility for obtaining utility technology services will simply disappear. Regardless of how it is obtained and maintained, this is the firm's digital platform, core to how it conducts business, and with which most digital assets will interact. It will evolve: today's source of strategic advantage is tomorrow's utility, changing the digital facilities that sit at the core. The needs and priorities will also change over time: to the CIO, the minutiae of cyber-security are more important and the details of the data center less important today than they were a decade ago. Those priorities will be different again a decade from now.

Organizationally, one is not subordinate to the other; they are peers. The two organizations work together, but not exclusively. The eCommerce chief is a customer of the CIO, particularly for utility services that strategic digital assets consume. But the eCommerce chief is most likely not sourcing development services from the CIO. eCommerce invests in digital applications that interact with the digital platform provided by the CIO. The skills and capabilities that define application development are not the same as those that define platform development.

Companies break up all the time when the whole is less than the sum of the parts: Motorola into handset and network businesses, Kraft into snacks and grocery-brands companies. Breaking up liberates value by managing each part to its respective strengths and distinct characteristics. A firm investing in digital assets should similarly separate that activity from utility IT to get maximum bang for its technology buck.

It is happening today. It is most evident among firms such as publishers and retailers caught up in a technology arms race. The trend is likely to spread as industries from commercial farming to transportation become dominated by software. That shift to software means the split may prove durable. If it does, it just might put paid to the persistent paradox of IT: it is both value and utility, only separately.

Monday, January 30, 2012

Business Cases: Simplicity over Sophistry

In textbook portfolio management, we have a thorough and well researched business case that details the features, costs, and ROI we expect to achieve for each initiative.

Even if this did describe how most businesses start IT projects (and it does not), it is futile. Business cases are easily co-opted, marginalized, and bypassed in the normal course of day-to-day business.

Let's look at the theory behind having a business case. A business case defines why we want to make an investment. Beyond just being a rational thing to do, policy dictates that we have a business case: the rules governing capitalization require that we explicitly define what we expect to achieve, at what cost, for what benefit, over what time. A well formed business case satisfies the preliminary (pre-capitalization) obligations that allow an investment committee to approve or reject a proposed investment.

But most software investments are never pitched to an investment committee. We only need investment committee approval when spend is expected to exceed a set threshold, which in some businesses can be as high as $10m. Most IT projects are projected to cost far less than this. Funding for these projects is secured through the annual budgeting cycle. The temporal nature of financing decreases the demand for a business case. That, in turn, means a whole lot of IT investment is going on without a clearly articulated statement defining why it is being done in the first place.

We tend to assume that the more thoroughly it is researched and written, the stronger the case. This tends to lead to lengthy and elaborate justifications (perhaps they are lengthy and elaborate to justify both the investment being proposed and the time required to produce the business case itself). But rather than insight and wisdom, we get rationale that is highly suspect. This is true for all kinds of investments. John Kay pointed out that the economic cases for the HS2 rail project in the UK and the trams in Edinburgh are strangely precise, wildly optimistic, and ultimately justified by impact ancillary to the problem they should seek to solve: it makes no sense to offer a 25 minute journey from the center of town to the airport on a GBP1b tram when you can offer the same journey in the same amount of time on a far less expensive bus. They are vanity projects justified by complex, black box analyses. We see this same phenomenon in IT: people will go to great lengths to justify technology projects as a means of career advancement, job protection, skill acquisition, or organizational power building.

Nothing erodes the credibility of a business case faster than overstated benefits. Academic examples of business cases use hard figures such as revenue increases or cost decreases to calculate project ROI. But most corporate IT investments don't directly increase revenue or directly reduce costs. They make people "more productive" or they "improve customer service". These are worthwhile things to do, but they are soft benefits. Since these usually do not become hard benefits such as staff reductions or additional sales, we use proxies, things like "positions not hired" or "customers retained", to make them more tangible. Although it's good to acknowledge potential strains and risk of losses, these make for weak justifications when they are unlikely (a "position not hired" only counts as a saving if the position was actually funded) and when they are hard to quantify (we generally don't know with any accuracy which customers we stand to lose if we don't make this investment).

The disparate nature of these measures often drives us to consolidate our projected benefits into a single index of "business value." As covered previously, this confuses and weakens our case. Better that we communicate projected hard and soft benefit independently, and do not overstate either our optimism or the precision with which we can measure.
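To make the point concrete, here is a minimal sketch of reporting projected benefits the way the paragraph above suggests: hard benefits summed and compared against cost, soft benefits listed alongside but never rolled into the total. All names and figures are hypothetical, purely for illustration.

```python
# Sketch: report hard and soft benefits separately rather than
# collapsing them into a single "business value" index.
# All figures below are hypothetical.

hard_benefits = {            # directly bankable cash impact
    "license costs avoided": 120_000,
    "contractor spend cut":   80_000,
}
soft_benefits = {            # worthwhile, but not bankable
    "hours saved per analyst per week": 3,
    "customers retained": "not reliably quantifiable",
}
cost = 250_000

hard_total = sum(hard_benefits.values())
print(f"Hard benefits: {hard_total:,} against cost {cost:,}")
print(f"Hard ROI: {(hard_total - cost) / cost:.0%}")
for name, value in soft_benefits.items():
    print(f"Soft (reported, not summed): {name} = {value}")
```

Stated this way, the case is honest: the hard ROI is negative on its own, and the decision turns on whether the soft benefits, stated plainly, justify the gap.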

Finally, change doesn't happen in isolation. Technology investments are usually implemented in conjunction with a broader package of business initiatives that may stretch from marketing to fulfillment to accounting policy. Although one specific investment may be the focus of a business case, we have to keep in mind that businesses are constantly changing how they operate, interact with suppliers, and appeal to customers. Market forces also take their toll, forcing us to react to changes in our operating and competitive landscapes. This makes it hard to trace with any accuracy the impact any single change has on any specific outcome. Most often, the best we can say is that a solution is a contributing factor to a changed state of the business. That is still reason enough to make an investment, but it dulls the confidence we can responsibly have in a projected ROI.

All told, writing elaborate business cases is tilting at windmills.

This is not to say that business cases lack value. A good business case is critical to governance, portfolio management and execution. It captures the best information that we have at the time. It provides a foundation for "why" we elect to make an investment, guidance as to "what" it will mean to fulfill that investment, and an ability to assess its attractiveness, viability and success.

But we should think about our business cases a little bit differently than the textbook would have us do.

Thomas Lissajoux makes the point that it shouldn't take more than a day to write a business case, and it shouldn't require more than a single sheet of A4 to make your case. A business case may be multi-faceted, but it need not be overly verbose. We should also follow John Kay's advice and be less concerned with precision and more concerned with clearly and simply expressing our expectations. If we're investing too much time, or being elaborately descriptive, or pursuing precision, or building too complex a model, we are reaching for justifications. Short and simple will also mean that our business case is accessible, understandable and believable to a wide audience of stakeholders and participants over the life of an investment.

Elaborate models developed in support of precise justification are not business cases, but sophistry. The textbook business case is best left to academics to debate. Those in business are best served by simple statements of wants, needs and expectations.

Thursday, December 29, 2011

Business Value is a Weak Currency

Investments in infrastructure, whether public transport or IT applications, tend to lack hard numbers because they are a means to an end and not an end in themselves. We have transport to enable people to travel to work and allow goods to reach markets. Captive IT departments produce systems that enable us to conduct business faster and more efficiently and at larger scale.

In the absence of hard measures, we concoct soft ones. Since the 1960s, transport improvements such as HS2 have been justified by cost-benefit analysis of things such as accident reduction and time savings. In IT, we use "business value", a nebulous catch-all for any and all types of economic benefit a business will conceivably derive from an IT solution.

Any given IT investment will be expected to yield a hodge-podge of benefits as diverse as revenue increases, efficiency gains and improved customer satisfaction. It is appealing to combine these into a single measure of business value because it makes it easier to compare costs with benefits. It is also appealing to sum up business value across all projects as a way of expressing the impact that IT has on the business. In practice, though, business value makes for poor coin of the realm because it suffers two serious deficiencies.

First, it attempts to aggregate benefits that have fundamentally different economics. Not every dollar of business value is the same: a dollar of revenue has much different value to a business than a dollar of cash flow, or a dollar of profit, or a dollar's worth of increased productivity, or a dollar's worth of improved customer service. Rolling these up into a single metric of value is akin to aggregating apples and corn syrup into "sweet foodstuffs". It does less to upgrade the perception of an IT solution's intangible benefits than it does to cast doubt over the more tangible ones.

Second, business value is prone to runaway inflation. Suppose we create a shoddy but effective solution to solve an urgent business problem, and sometime later we take on the important task of replacing that shoddy solution with a more robust one. How much business value do we get from the re-implementation? Since we cannot accrue the same business benefit multiple times, about all we get is greater reliability and lower maintenance costs. These have merit in their own right, but the benefits may not exceed the costs of the re-development. This encourages people to play fast and loose with what "business value" means to justify an investment. That might mean taking credit for solving unintended process inefficiencies of the shoddy solution, or increasing the alleged risk that the shoddy solution fails in spectacular fashion. This makes business value a weak currency: because it is so easily conjured, it is easily inflatable, and it quickly loses value.

Some firms go as far as to track their annual business value delivered. I once worked with a firm that reported a total business value delivered that was greater than their market capitalization. Since nobody working there felt the firm was undervalued by investors, everybody dismissed the business value metric for what it was: an imaginary measure of imaginary benefits.

We should always base IT decisions in the context of value to the business, but wanton overstatement undermines IT's perceived business value. If we delineate value by its underlying economics, we make a more compelling case for investing in the business through IT.

Wednesday, November 30, 2011

Strategic IT: Picking Winners is Hard, Cutting Losers is Harder

Previously, we looked at how we can separate Strategic IT from Utility IT, position the Strategic portion as an investment arm of the business, and manage it as a portfolio. Successful portfolio management requires that we have investment flow (regular promotion and demotion of opportunities), hedging strategies, and to behave as activist investors. If we do this well, Strategic IT becomes "investment" rather than "operations", a driver of business returns, and a source of innovation to the business.

But "doing this well" is hard. It's worth looking at why that is.

Let's start with the basic premise of portfolio management. Most descriptions of IT portfolio management go something like this:

  1. We put all the ideas for IT solutions into a review/approval funnel.
  2. Only the Very Best Ideas get approved.
  3. Those that do get developed and delivered.
  4. We reap massive profits.
If only it were so simple.

If you've ever managed investments of your own - even just a periodic redistribution of your retirement account - you know how difficult it is to manage a portfolio. We have very accurate historical data on the performance of specific companies, funds, and indexes, but we have no idea how any specific investment will perform in the future. Every investment is a leap of faith that our assessment of the opportunity and attendant risks is accurate, that governance is true and honest, that management have the expertise, that regulation is effective, and so forth. These might be informed leaps of faith, insofar as we've done our due diligence on the opportunity and its competitive landscape, as well as alternative investment opportunities before us. But as we don't know what tomorrow holds, it's still a leap of faith.

These same characteristics apply to Strategic IT investing, if to differing degrees. Compared to a securities investor, the Strategic IT investor has a more intimate relationship with the people managing execution of an investment, and is in a position to apply hands-on governance. But the Strategic IT investor doesn't have anywhere near the diversity of investment vehicles. The Strategic IT investor is essentially making very specific, targeted investments.

This means that Strategic IT investing is a business of picking winners. And if there's one thing we know about investing, it's that picking winners is hard. Many researchers have argued that random walk investing performs no worse than picking winners. With index and sector ETFs making it easy for investors to match the market, it comes as no surprise that most capital is managed passively, not actively.

There are no passive instruments in Strategic IT investing. We're investing in a specific business through IT. Our investments are active by definition. We have to pick where and how we're going to place our bets, and just as importantly where and how we're not. We're in the business of picking winners. And no matter how much somebody touts their "rigorous and high standards for choosing investments", active investment management is hard. Ask John Paulson or Jon Corzine how hard it is to always pick winners.

And the challenge in portfolio management goes beyond simply picking winners. In our pursuit of picking winners, we're going to pick losers. In fact, we're going to pick a lot of losers.

So it is more apt to say that portfolio management is a process of picking winners that sufficiently outperform our losers. The objective isn't to avoid picking losers and only pick winners, but to recognize our losers quickly and minimize their impact on our portfolio.

Investors tend to hang on to losers for too long. It's tough for investors to admit a loss, because it's tantamount to admitting a mistake. Jason Zweig described it best: "it isn't that I've been proven wrong, it's that I haven't been proven right yet." For the Strategic IT investor, the emotional difficulty of parting with a loser is going to be reinforced by corporate cultures that insist that "our projects never fail". It will be impossible to fail fast and frequently if we can't come to grips with our mistakes.

Although we may very well need "rigorous and high standards" for choosing Strategic IT investments, it's perhaps more important that we have the discipline to quickly exit losers. This is why it is important that we have investment flow in our portfolio: as a portfolio manager I want to be able to fluidly enter and exit investments so that I can quickly cut my losses and redirect people and capital toward things I believe to be better opportunities. It also makes the case for activist investing: as a Strategic IT investor, I want to be able to continuously and consistently scrutinize my portfolio so I can continuously reassess and reinforce the viability and relevancy of my investments.

Even if we do this well, we still may never have an "optimal" portfolio. Our results will still be subject to forces and events we cannot foresee at the time we make an investment. And we'll pass on opportunities that turn out to be winners. But we will be much better at scuttling our losers. We'll be better at failing fast.

Wednesday, October 26, 2011

Annual Budgeting and Agile IT, Part III: Operational Predictability versus Financial Rationality

We've seen how Agile IT conflicts with the CFO's goals, and why the latter tends to trump the former. What can we do about it?

Conceptually, our starting point is to hive off IT investment activity from utility services. If the CIO doesn't draw this distinction, the CFO isn't going to, either. Making this separation allows us to talk about strategic IT in financial terms as opposed to operational ones. Not to become more coin-operated, but to level the playing field between IT and the rest of the business.

Let's look at capital for a minute. Firms acquire capital through many different means. There’s the capital accumulated through retained earnings. There’s also the capital we can raise by getting loans and selling bonds (debt) or issuing shares (equity). At any given time, a firm has many different ways to deploy capital, such as investing in operations, awarding bonuses, or paying a dividend to shareholders, just to name three. The Board of Directors, CEO and CFO will use existing capital, or raise new capital, and deploy it where it is expected (or just plain hoped) it will provide a return.

From an operations perspective, some of those uses of capital may seem unusual. For example, it's not uncommon for a firm to borrow money to make a dividend payment to shareholders. By doing so, the firm is simply borrowing against future cash flows to compensate shareholders. While this may seem counter-intuitive and even risky from an operations perspective, it illustrates the point that capital formation is dynamic: a business will raise funds to go after an opportunity. By comparison, budgets are static: we will constantly look to squeeze money out of a business.

Strategic IT must be seen as investments competing for capital against all other uses.

Every capital investment a firm makes has a business case that comes down to a simple question: “we're investing y capital in pursuit of x result”. There are countless candidates for “x” that a business can throw a limited “y” at. Not every result is financial. There may be no quantifiable ROI. We could be looking to make social or political impact, or improve employee retention. The point isn't to measure financial returns, but to ask: how much are we willing to pay to get something in return? And in the extreme, how far are we willing to stretch our balance sheet to achieve a collection of “x” outcomes?

This would seem to make things a lot more complicated for IT. IT can't write the business case, it has to come from the business. And before we get something into production, we don’t really know what an IT investment will do for a business (what business impact it will have), let alone what it will actually take to get it into production. We can study, analyze and guess, but we really don't know. Why not just leave the business to the business, and the tech to IT?

Because in Strategic IT, we're doing the latter in pursuit of the former, and what's true for IT investments is no different from any other investment a business makes. We can do all the market research we want, but marketing doesn't know whether a new product will sell well or not until that product makes it to market. We can agonize over population demographics, but we won't know whether we’ll find skilled labor to staff a new manufacturing facility we've built until we set up and start hiring. An acquisition may look good on paper, but we may never realize the expected cost savings from a merger.

The fact is, every capital investment is subject to uncertainty. 'Twas ever thus. The best we can do is to make well informed decisions and do everything in our power to minimize the things that operationally impair our success.

This helps IT tremendously. In this context, IT doesn't need to be operationally predictable, but financially rational. That's a better way to run a business. It levels the playing field for Agile IT: even a business with low tolerance for fluctuations in cash flow from operations will invest in itself. This means it has higher tolerance for investment variability than it does operational variability.

If Strategic IT is financial more than it is operational, it needs aggressive, Agile portfolio management. There is a lot that goes into this, too much to cover in this blog post, so we'll focus on three things: investment flow, hedging strategies, and activist investing.

Investment Flow

There are countless IT investment opportunities for a business. As technology continues to evolve, the number of those opportunities will only increase. This gives us a very broad portfolio of ideas we might pursue.

Clearly, some ideas are better than others. We can take a closer look at those ideas that look a little more promising by putting them through an initial inception: make a broad survey of the opportunity, perform some due diligence, and produce a business case and an initial estimation of cost. This will filter out the plainly bad ideas, and give us a portfolio of candidates that appear to be good ones. Agile inception practices are well suited for targeted, short duration discovery and for producing relevant (not to mention short and focused) artifacts. Agile inception gives us a simple litmus test to apply to any candidate investment, and it doesn’t cost a lot to apply it.

Those ideas that clear the first hurdle are subjected to a second, more rigorous inception. The objective is to refine the business case and fulfillment details such that business and IT are comfortable presenting an investable decision to an investment committee. To be clear, the objective is not to produce a definitive, closed-ended, detailed plan. Our facts, forecasts, expectations and assumptions are going to be wide of the mark. We're not trying to be predictable, we're trying to determine if there's an investment case given the information that we have today. In this second stage, we want to produce a sufficiently refined assessment of benefit, cost, execution expectations and risk guidance so that an investment committee can determine if this opportunity looks like a good use of capital given there are known and unknown risks.

Some opportunities will fail to live up to their promise and fail during the second stage of inception. Some will be rejected by the investment committee. Some will be approved and become investments that the business agrees to make.
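
The two-stage funnel can be sketched as a pair of successively stricter filters. This is a minimal illustration, not a real screening model; the field names and the 20% hurdle rate in the second stage are assumptions:

```python
# Hypothetical two-stage inception funnel. Field names and the 20%
# hurdle rate are illustrative assumptions, not prescribed values.

def first_inception(idea):
    """Cheap litmus test: broad survey, rough benefit vs. rough cost."""
    return idea["rough_benefit"] > idea["rough_cost"]

def second_inception(idea):
    """More rigorous inception: the refined business case must clear
    an (assumed) 20% hurdle over refined cost."""
    return idea["refined_benefit"] >= 1.2 * idea["refined_cost"]

def screen(ideas):
    """Filter a broad portfolio of ideas down to investable candidates."""
    candidates = [i for i in ideas if first_inception(i)]
    return [i for i in candidates if second_inception(i)]
```

The point of the sketch is that the first filter is cheap to apply to everything, while the second is only paid for by ideas that already look promising.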

Although we promote opportunities, investment flow is not linear. Continuous assessment of investment opportunities means a new arrival may cause an existing investment to be demoted or curtailed, while an opportunity deemed unviable yesterday may look attractive tomorrow. The portfolio of investable opportunities does not follow a one-way promotion from idea through fulfillment; opportunities fluctuate relative to each other.

The goal is to be constantly performing inceptions so that we get a healthy churn of our investment opportunities. This has residual benefits as well. It partners IT with the business to secure an investment. It gives us a defined collection of investments we want to make that will deliver some expected value (financial or otherwise) for some expected investment. It gives us a portfolio of things the business “intends to invest” in through IT, sufficiently well defined to satisfy guidelines for capitalizing intangible assets. It gives the CFO guidance on IT's expected capital needs.

Hedge The Investments

An investment that makes it into the portfolio of investable opportunities may still never be developed. It’s simply in the investment portfolio. Like any portfolio, we need to hedge our positions.

Suppose 10 opportunities are currently in the “approved to invest” portfolio. We don’t have to secure funding for all 10. Perhaps we work with the CFO to secure funding for 8, with 2 at the ready. We can still have all 10 “approved” by an investment committee because in just about every business, “approved” is not the same as “funded”.

Why leave 2 on the table? From a business perspective, this seems ridiculous, especially for the person leading the business unit holding the odd project out.

Let’s look at what happens when we’re delivering against this portfolio. All of these investments will be at risk of losing viability during development for any number of reasons: the business case becomes shaky, sponsorship fades, or we get into execution and find out it’s going to cost far more than our earlier looks led us to believe. Not having a hedged position would put us right back in the long-range budgeting trap that we’re trying to avoid. Strategic IT is an investment arm of the business. Investments contain an element of risk. A good investment manager hedges his or her risks.

Which is why we have a hedged position in the form of other investments which have been approved, and why we’re constantly looking for new investment opportunities (inception flow) to promote. That reduces the overall volatility of our portfolio, which, in turn, gives us operational flexibility to reassign staff with minimal SG&A impairment. Should one investment fall out, we have another at the ready, and we're able to quickly move people (the most important thing we've got) into that next investment. This is important: maintaining liquidity in our project portfolio prevents an erosion of our solvency (that is, our capability to get things done) by avoiding a spending squeeze. Looking at it another way, hedging within the IT portfolio means operational continuity doesn't suffer as a result of misguided portfolio maximization.
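
As a sketch of that mechanism (all names hypothetical): fund 8 of 10 approved investments, hold 2 "approved but unfunded" in reserve, and promote a reserve investment the moment a funded one falls out:

```python
# Illustrative hedged-portfolio rebalance. When a funded investment
# loses viability, the next approved-but-unfunded one is promoted so
# that people can move straight onto it. Names are hypothetical.

def rebalance(funded, approved_unfunded, failed):
    """Drop a failed investment and promote a reserve one, if any."""
    funded = [p for p in funded if p != failed]
    if approved_unfunded:
        funded.append(approved_unfunded.pop(0))  # promote the hedge
    return funded, approved_unfunded
```

The design point is that the reserve investments are the hedge: losing one position never leaves staff idle as long as the reserve is stocked, which is why inception flow has to keep replenishing it.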

It's worth pointing out that hedging financial risks is a big change from pursuing operational predictability, efficiency, or optimization. The CFO is directly accountable to the board and to shareholders for business returns. Performance is at the mercy of all kinds of factors outside the CFO's control: currency fluctuations, macroeconomic events, and political change just to name a few. The CFO will not be held accountable for failing to predict the future, but will be held accountable for hedging to a reasonable level of risk awareness, even of some Black Swan events. Sometimes risks will exceed expectations, and sometimes hedges will be excessive and appear to be waste. CIOs with responsibility for an investment portfolio would be held to this type of accountability. Being seen as responsible only for operating costs, however, the CIO is relegated to cost control.

Another hedging strategy is to have short-term horizons for every investment. The longer we spend delivering any single investment, the greater the risk accretion, and the greater the risk of default. Large capital projects that default either need additional cash injections to keep them solvent or face being written off. Breaking a large investment into several small ones allows us to actively revisit its business case, viability, and priority. Smaller investments keep our portfolio much more liquid and increase our resiliency to operational default.
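
One way to see the effect of shorter horizons is a toy model: assume each quarter carries an independent probability of a viability-destroying event. A single long project forfeits all sunk capital when it fails; a series of small investments forfeits only the failing quarter's spend. The model and its numbers are illustrative assumptions, not claims about real default rates:

```python
# Toy risk-accretion model. Assumes an independent per-quarter failure
# probability p; a failed long project forfeits everything spent so
# far, while each small investment risks only its own quarter's spend.

def expected_loss_single(n_quarters, spend_per_quarter, p):
    """Expected capital written off for one n-quarter project."""
    loss, survive = 0.0, 1.0
    for q in range(1, n_quarters + 1):
        loss += survive * p * q * spend_per_quarter  # fails in quarter q
        survive *= 1 - p
    return loss

def expected_loss_split(n_quarters, spend_per_quarter, p):
    """Expected write-off for n separately revisited 1-quarter investments."""
    return n_quarters * p * spend_per_quarter
```

Under these assumptions, with a 5% per-quarter failure rate over a 12-quarter horizon, the single project's expected write-off comes out several times larger than that of twelve one-quarter investments.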

Activist Investing in IT

An IT portfolio must be reviewed and assessed with the same rigor as any financial portfolio. The span of time over which human effort is applied to convert capital into an intangible asset requires a lot of attention. We do this through continuous governance, to align operations with financial intent. These mechanisms allow us to continuously ask whether an investment is still viable, whether it is being operationally impaired, or whether it has lost business justification. This is no different from what we do with investments in a financial portfolio. This is a subtle but critical difference from traditional IT: we're not trying to "meet plan", we're constantly assessing whether an investment is viable and, if not, what we can do without having to go hat in hand back to an investment committee to ask for more capital.

But there's a difference between mechanical governance and investing. Too often, IT portfolio management is staffed with little more than project reporters. Continuous governance is only effective if we have activist investors: people experienced with technology investments who not only scrutinize the data but manipulate it, reframe it, challenge it, supplement it by getting their own, and interrogate the people behind it. There's a fine line between fulfilling a duty of curiosity and just plain meddling, so think before you act(ivist). Take cues from successful activists (one could do much worse than to do one's homework as thoroughly as David Einhorn does), engage outsiders as board members for investment governance, and above all challenge silence and rubber-stamping.

Portfolio Management

Our strategy, then, is to separate strategic IT into an investment arm and manage it like an investment portfolio: maintain an inception flow that continuously screens and revisits investable opportunities, hold a diverse and hedged portfolio of small capital investments, bring continuous governance and activist investing practices to bear on those investments, and rebalance the portfolio when necessary.

In general, this is how IT portfolio management should be practiced.

Doing these things gives IT investments a robustness they don’t typically have:

  • We continuously assess new investment opportunities.
  • We have continuous assessment of the viability of in-flight opportunities.
  • We have a pool of funds out of which we can fulfill IT investment activity (an expectation for what we’ll spend on “human effort”).
  • It makes capex more liquid (accessible at a more coarsely grained level), protecting any expectation we set for payroll funding out of capex and reducing the risk of a solvency (a/k/a “capability”) crisis should several projects be suspended (capex being the equivalent of IT’s “tier 1 capital”).
  • It decouples the budgeting decision from finely-grained (and inaccurate) project planning exercises, and roots our budgeting in value as opposed to cost.
  • We can link the financing of the investment opportunity to the investment itself (e.g., we may raise capital specifically to fund a tech investment if we think it represents a major business opportunity or we need to stave off a threat to our business), and feed that directly into our portfolio management.
  • We talk in financial terms and solve problems of the firm's capital allocation, instead of asking the CFO to talk in operations terms and solve (and often, over-simplify) IT's operational problems.

It’s worth looking at pure-play software investing as a useful comparison to captive IT investing. Generally speaking, software firms have low capital intensity (lower now especially with cloud) and little debt (the volatility of tech makes financing via fixed income instruments unattractive). They also tend to throw off copious amounts of cash (Microsoft and Google, for example). This gives tech firms several degrees of freedom that firms in other industries simply don’t have.

People accustomed to the low-debt-high-cash tech investing environment tend to bring the same set of expectations to captive IT departments. Those expectations are well intentioned but wide of the mark, as illustrated in previous posts: because IT is seen by the business as part of operations, it is subservient to, not a component of, the financing demands of the business.

As I stated at the beginning, the fundamental concept is to have IT separate its investment activity from the utility services it provides. That seems like a conversational non-starter. But in most firms, there's a pretty good business case for doing that.

To put things in perspective, if the entire $350m discretionary IT investment [of this firm] had been retained as profit instead of spent on projects, the company’s earnings per share would have risen, creating more than $5bn of additional shareholder value.

Richard Bhanap, Managing Director, KPMG Europe writing in the Financial Times
A board doesn't have to invest in the business through IT. It can use capital to retire debt or buy back shares, invest in other securities, buy other companies, or make a dividend payment to shareholders. As Mr. Bhanap points out, when a company does invest in IT, those investments have a very high standard to meet. We lose sight of that standard when Strategic IT is thought of as "operations" as opposed to "investment". IT stands to benefit by taking on responsibility for investment performance.

Decoupling Strategic IT from operations, and instead casting it as an investment arm, gives us an opportunity to get Strategic IT out of the annual budgeting cycle and into an investment cycle. Doing that creates a more conducive atmosphere for Agile IT.

As a post-script to this series, we'll look at IT portfolio management - and what we're really asking IT to do.

Updated 29 December 2011

Thursday, September 08, 2011

Annual Budgeting and Agile IT, Part II: Why Agile Gets Compromised When It Goes Corporate

In the first installment, we had a look at how the CFO is primarily concerned with consistent cash flow so that the business can service long-term financing obligations. As a result, when the CFO is first introduced to Agile, he or she will not be terribly pleased to hear that we’re doing away with predictive planning in favour of continuous reprioritization, even if we claim to be doing so in pursuit of making better use of capital. To the CFO, although IT is a capital investment, it's also a drag on cash flow – cash that the business needs to meet its finance obligations.

In this installment, we’ll take a closer look at this discrepancy. We'll start by looking at what IT does for a business.

Most of IT consists of utility services, the things we need to run the business, such as laptops, virus protection and an office productivity suite. IT utilities become running or operating costs to the business, just like water and electricity: we pay maintenance fees for virus protection and office suite licenses, and buy new laptops when we add a new FTE to the payroll.

Replacing a utility, such as substituting Google Mail for Lotus Notes, can be expressed in investment terms: for the cost of migration, we expect our license fees and maintenance costs to be lower in the future. But replacing utilities is highly invasive to the business, and typically brings with it little capability gain. Firms don't have infinite capital and utility investments tend to offer low payback. Since there's limited upside (we'll squeeze only a little more cost out of the business) and there is significant downside should something go wrong (such as a loss of data or long-term interruption of service), we do these things when business is otherwise calm and we have nothing better to do.

But what about non-utility IT, investments in custom solutions that give us an edge in customer service, make our supply chain more resilient, or build our brand strength? Martin Fowler calls this "strategic IT": the investments in technology that give a business a competitive advantage.

At first glance, these look like they should be treated differently. We don’t always know when an investment opportunity will arise or when we’ll get an idea. We don’t know what that investment will look like until we roll up our sleeves and get on with creating it. From an IT perspective, it would seem a business would be well served by being able to finance a strategic opportunity at the drop of a hat. This also seems like precisely where Agile should shine.

Unfortunately, it isn’t as simple as that.

Software development is the act of converting capital into intangible assets by way of human effort. Let’s look at what it means to finance IT effort.

Human effort is a payroll cost, which is a running cost to the business. If that human effort comes from our FTEs or direct contractors, it's our cash covering our payroll. If that human effort comes from a firm we've contracted with, it’s our cash covering somebody else’s payroll. As CFO, you don't miss your own payroll, and it doesn't do you any good if you cause a key supplier to miss one of theirs.

Payroll, like debt servicing, requires consistency. If software development is going to be a core capability, the CFO needs to know how big that capability is going to be and what impact it's going to have on cash flow. The CFO will also tell us if we’re building a captive IT organization that we simply can’t afford.

In strategic IT, meeting payroll isn’t just a matter of people and salaries. We have multiple funding buckets to be concerned with.

In many businesses, software is treated as an asset. Even though it's intangible, software shares many of the same properties as tangible assets such as trucks or machinery: we can't operate the business without it, it tends to be expensive, we get multiple years' use out of it, we might make improvements to it, and it requires ongoing service and maintenance.

When we treat software investments as assets, we capitalize them. Software is typically capitalized over a 3-year period. Since we're going to get multiple years of use out of something, it's acceptable to distribute the cost of acquiring it over multiple periods. This reduces the volatility of the income statement: because these assets tend to be expensive, taking the full cost of acquisition in period 1 would excessively depress earnings, while having already reported the cost would mean earnings in periods 2 and 3 would be that much higher. This means capitalization has income statement impact for future periods - something the CFO is going to be particularly interested in.
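
The mechanics can be sketched as a straight-line schedule. Straight-line over 3 years is an assumption for illustration; actual amortization policy varies by firm and jurisdiction:

```python
# Straight-line amortization of a capitalized software asset, versus
# expensing the full cost in period 1. The 3-year life is the figure
# from the text; real schedules depend on accounting policy.

def amortization_schedule(cost, years=3):
    """Spread the acquisition cost evenly across the asset's life."""
    charge = cost / years
    return [charge] * years
```

A $3.0m asset expensed outright depresses period 1 earnings by the full $3.0m; capitalized, it charges $1.0m a year to each of periods 1 through 3.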

Before we go any further, let's be clear about the accounting going on. For income statement and balance sheet purposes, we're going to capitalize the cost of developing software. This is a long-term treatment of software assets. But we still have payroll costs to meet, which impacts our cash flow. This is a short-term treatment of the effort used to develop those software assets.

We saw a lot of this in 2009: record cash balances allowed companies to cover costs, while moving more spending to capex contributed to strong earnings. Depending on a firm's experience of the financial crisis, this either deferred difficult decisions such as layoffs until cash became too tight, or, if they rebounded relatively quickly, it allowed them to emerge a much stronger competitor because they were able to retain experienced people throughout the crisis.

In practice, though, this two-speed accounting introduces a bit of friction. A CIO can’t simply choose a finance bucket out of which they’ll pay for salaries. Payroll allocated from a capital account is incurred against a specific asset in the general ledger, something the CFO must authorize. The rules governing capital expenditure are pretty strict. Labor costs can only be capitalized if they are demonstrably performed in the fulfillment of the expected characteristics of the asset itself. Labor costs incurred in R&D and administrative work always go to operating expense. So must any labor costs associated with defining what the asset is to be in the first place, work typically associated with early stage analysis. The devil is in the details, and in large corporate IT organizations, knowing that we're tracking the right effort to the right bucket gets cumbersome very quickly. We must be able to show that we're consistent and in compliance with these accounting guidelines. If we can't satisfy the auditors, we'll face a financial restatement. That's career limiting.
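
A sketch of what that tracking discipline looks like; the activity categories and routing rules below are assumptions paraphrasing the text, not actual accounting guidance:

```python
# Hypothetical classifier for labor time entries. Work demonstrably
# performed in fulfillment of the asset's expected characteristics may
# be capitalized against that asset; R&D, administration, and
# early-stage definition work always go to operating expense.

CAPITALIZABLE = {"build", "test", "deploy"}          # assumed categories
OPEX_ALWAYS = {"r_and_d", "admin", "early_analysis"}  # assumed categories

def funding_bucket(activity, asset_id=None):
    """Route a time entry to capex (against a named asset) or opex."""
    if activity in OPEX_ALWAYS:
        return "opex"
    if activity in CAPITALIZABLE and asset_id is not None:
        return ("capex", asset_id)  # incurred against a specific GL asset
    return "opex"  # when in doubt, default conservatively to opex
```

The conservative default reflects the point in the text: misclassifying effort into capex is what invites a restatement, so ambiguity should fall to opex.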

Where it gets really complicated is when there is a volatility in the IT portfolio. If the business pulls the plug on an in-flight capex project, we have to figure out how we're going to cover payroll of the people who were working on that project. We either have to have another capital investment ready for them to work on, we have to have sufficient unallocated opex to cover their payroll costs, or we have to lay them off. In finance terms, this is equivalent to a liquidity squeeze (inaccessible budget or insufficient budget) that can cause a solvency crisis (loss of skills and capability).
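
The three outcomes described above can be sketched as a simple decision; the names and figures are hypothetical:

```python
# Sketch of the options when an in-flight capex project is cancelled:
# reassign people to another ready capital investment, absorb them in
# unallocated opex, or face a solvency (capability) crisis.

def cover_payroll(team_cost, ready_capex_projects, unallocated_opex):
    """Decide how to cover the payroll of a cancelled project's team."""
    if ready_capex_projects:
        return ("reassign", ready_capex_projects[0])
    if unallocated_opex >= team_cost:
        return ("absorb_in_opex", unallocated_opex - team_cost)
    return ("solvency_crisis", team_cost - unallocated_opex)
```

This is the same promotion mechanism the hedged-portfolio discussion argues for: the first branch only exists if the portfolio keeps investments at the ready.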

This brings us back to the question of how we finance "human effort". Human effort isn’t as liquid as capital. We steadily bleed cash from the business to meet payroll costs of the effort we’re buying, with only an occasional delivery of a software asset that enters use and has an impact on our business. The effort we’re financing has neither the financial properties of the capital we pour into it nor those of the asset it produces.

When we choose to fund salaries out of capex, we are beholden to that effort yielding a result. If that's going to happen, the effort has to be reasonably framed (estimates are valid, scope well defined, and so on) and the business environment has to be static (the business will still want the intangible asset we’re delivering). Capex spending further binds IT to the financing structure of the business. The annual budgeting cycle that still governs companies sets an expectation that operations, IT included, will perform consistently with a big, up-front plan.

This is a contributing factor to why we see CIOs resorting to terms such as “control” and “predictable” rather than “fail fast” when explaining Agile to the CFO: it's a capitulation to the over-riding realities that drive a company. Being "predictable" reinforces the operational objective to produce consistent cash flow for finance; failing fast is a threat to it. It comes as no surprise that by the time Agile reaches the most senior levels of the business, it's been co-opted into the language of industrial management: just substitute "Scrum" for "Waterfall" and "burn-down charts" for "Gantt charts". Agile is rolled out as a means of "guaranteeing predictability" or greater efficiency, not as a means of making better use of capital or being more resilient to unforeseen events.

It's worth drawing a comparison between captive IT that engages in application development, and technology firms. Tech firms typically have highly volatile earnings and cash flow, which means they tend not to rely on debt financing. It's no coincidence that tech firms tend to be hotbeds of innovation, while captive IT departments in large corporates are not. Large tech firms are typically debt free, while tech startups tend to go one step further by trading equity for salary. Because tech firms aren't beholden to consistency, let alone predictability, independent tech firms have the luxury to pursue discovery. Discovery tends not to come from the mundane, and is amplified by creative freedom.

Which brings us back to the fundamental disconnect between Agile and the CFO. In corporate IT, the CFO isn't trying to solve a "make better use of capital" problem in the business. He or she is trying to solve a "consistent cash flow from operations to service our capital obligations" problem. When Agile goes corporate, it is subservient to, and most often compromised by, that latter problem.

In the final installment of this series, we’ll look at what we can do to make Agile IT appealing to the CFO, without compromising the core characteristics of Agile.

Sunday, August 28, 2011

The Tech Bubble: A Cool Breeze in Blistering Times

Reading the headlines, tech is showing some signs of relaxing a bit.

  • The first is a slowdown in corporate capital formation. Businesses hold record amounts of cash, but have nowhere to put it: a stagnant economy doesn't encourage investment for growth, while real interest rates on Treasurys are negative. Why raise more capital?
  • Next are signs that captive IT spend is slowing amid general economic uncertainty in the US and Europe.
  • Public sector austerity will reduce government spending on tech, softening demand further.
  • Investment capital is also getting a bit tighter. Volatile equity markets don't make for the best of times to IPO. Also, market uncertainty has triggered a run for cash, depleting high-risk investments. That is cooling off venture backed businesses.
  • Finally, H-P pulling the plug on WebOS devices (if not the OS) may portend an inevitable shakeout in smartphone & tablet platforms, while market battles waged with patents threaten to make innovation an early victim.
After setting such a blistering pace during the first half of the year, a breather isn't altogether a bad thing.

But it is likely to be a short breather. The overall trend in tech remains inflationary.

Demand is still strong. Businesses are still spending on technology as a way to lock-in productivity gains to protect margins in a period of flat revenues. Business spending on software is forecast to increase nearly 10% this year. Smartphones and tablets are selling in copious volumes. Mobile as well as social media platforms are spawning new applications and new categories of applications.

Investment remains strong, too. M&A in the tech sector is back to pre-crisis levels. VC firms late to the game will add more froth to valuations. Some tech firms - encouraged by moribund investment banks - may still believe the time is right to IPO. Tech behemoths such as Oracle, Microsoft and Google are sitting on large cash piles.

There is also a sea-change in tech from hardware and services to software. H-P paid a juicy 78% premium for UK software firm Autonomy, and is shopping WebOS as a platform for automobile and appliance makers. H-P's desire to reinvent itself as a software firm might portend The Great Software Pile-in, inducing other tech firms to migrate out of low-margin hardware and high-touch services in favor of highly-scalable software.

In a broader economy plagued by deflation, tech is still robust. A bit of capital tightening and demand slacking probably isn't a cooling off as much as it's just a cool breeze. Still, it's a welcome respite, particularly if it blows through tech labor markets. High labor costs don't just drive up development costs, they also make project rescues and bailouts that much more expensive. Any reduction in labor market pressure would make tech investments more resilient to failures, particularly important in a line of work notorious for spectacular ones.

Best of all, it would give tech leaders the opportunity, however brief, to adjust and reposition for another round of tech inflation.

Sunday, July 31, 2011

Annual Budgeting and Agile IT, Part I: Why the CFO Isn't Impressed with Agile

I’ve been asked by a number of people recently how we can reconcile Agile IT, which shuns long-range deterministic planning, with annual budget & planning cycles, which are dependent on it. This 3 part series will look at the CFO's perspective on the business, the inherent conflict in IT investments financed through business operations, and what CIOs can do to decouple IT finance from IT operations.

Let’s look at things from the perspective of the CFO.

The CFO needs to be in front of a lot of things over the course of the year, notably earnings and cash flow. He or she wants as much future indication of what we want to spend and when we want to spend it, so he or she can determine how that spending will be financed: from cash already in the bank, from collections made throughout the year, through a short term credit facility, long-term debt, paid-in capital, or any of a number of sources of funds.

Businesses are held to specific reporting cycles, but not every month or quarter is going to be the same: businesses that are seasonal (such as retail) or cyclical (such as railroads) will go through longer spans of time before they know whether their forecasts about revenue prove true. Of course, many businesses are neither cyclical (most of the luxury sector seems immune to the fact that there is a recession) nor particularly seasonal (airlines spike with holidays and such, but revenues are consistent quarter-on-quarter). These businesses have far more immediate indication of the accuracy of their revenue forecasts and collections. More frequent feedback might allow people in a company to reprioritize what they buy and how much they spend month-on-month, but CFOs of such firms will still err on the side of making decisions consistent with long-term expectations.

When operations are consistent in their financing demands, the CFO doesn’t have to crisis-manage the checkbook day-to-day; they can instead guide the business by getting in front of financing needs or investing opportunities. Clearly, it isn’t good if we spend money in anticipation of cash flow from future sales only for those future sales to fail to materialize. CFOs don't like going hat in hand to credit markets to raise cash, or abruptly contracting spending across the business. They particularly don't like having to answer questions from analysts during earnings calls about why such sudden changes were needed, because it indicates those in charge of the business aren’t very capable at running it.

Consistency is particularly important for CFOs of capital intensive firms, companies with high asset value and a lot of equity or debt. The people who financed the acquisition of those assets will want to know that the firm earns more from what it does with the assets than the assets themselves are worth, and those to whom the firm owes money (such as bondholders) want to know that the company is going to be able to service its debt. The CFO is, in many ways, the voice of those who provide capital to the business, and has a fiduciary duty to them all.

The CFO perspective is also compounded by the fact that we are increasingly financing businesses with complex instruments to provide working capital and hedge risks. While this may reduce the cost of capital, complex corporate treasury operations leave the CFO with less time and less patience for cash flow from operations being inconsistent with expectations.

This isn’t to say that our numbers are locked for the year. We revisit the numbers every month and quarter. But those numbers are still revised against a baseline, for the aforementioned reasons: we need to hit a target return to satisfy bondholders or equity holders, and we don’t want to overheat spending before our big revenue cycle in the event our forecasts are wide of the mark. Only if the business environment has completely changed – think about what firms in everything from retail apparel to investment banking did in Q3 2008 - will we throw out the baseline.

Banks make money by borrowing short and lending long. Most businesses follow the same pattern, using month-to-month cash flow (short) to meet the demands of the firm’s investors and creditors (long). This requires a very well oiled short-term cash generating machine to sustain the demands placed on the firm from their long-term financing. This is particularly obvious in firms that are highly leveraged e.g., where private equity has taken out money from a business by borrowing against future cash flows, and then sweating the business to maximize cash flow quarter-on-quarter. But this is true in any business beholden to outside capital.

Along comes the CIO with the good news that we're adopting Agile practices, which will do away with predictive planning and instead constantly re-scope and re-prioritize to maximize use of capital.

To a CFO, the prospect of financing captive IT operations that can only determine their financing requirements by muddling through is not particularly attractive. Vague financing requirements threaten to introduce volatility in financial demands of business operations. The CFO doesn't have a lot of tolerance for anything that could upset the tuning of the short (cash flow) / long (debt and equity) financing behind the business. Any short-term capital optimization the firm stands to gain from Agile is appreciated, but it pales in comparison to the long-term capital monster that needs to be fed.

If anything, the CFO wants greater certainty in operational forecasting, not less, so that he or she has one less thing to worry about.

Financing Agile IT thus has a steep hill to climb.

In the next part, we'll take a look at the conflict in financing day-to-day IT operations as capital investments.