I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Friday, March 25, 2011

Sprint's Marathon

There's been a lot of public hand-wringing over the proposed AT&T / T-Mobile USA merger, particularly with regard to Sprint. Sprint is a distant third in subscribers today, but this is a very dynamic market, and today's subscriber counts may say little about tomorrow's.

First, most mobile subscribers aren't bound to a network. While a post-merger AT&T/T-Mobile and Verizon Wireless would have most of the customers, beyond the term of a handset contract there are no switching penalties. On top of that, many of those subscribers are not using smartphones but ordinary handsets; the little data trapped in those handsets also means low switching costs.

Second, the mobile platform field is about to get very crowded with RIM (new tablet, vigorously fighting an erosion of their smartphone business), HP (WebOS), Windows Mobile, and Nokia possibly retaining MeeGo for tablets. Along with Apple and Android, that makes as many as 6 different mobile computing platforms. They're each spending billions in pursuit of this market. None of them want to finish in 6th place.

Third, hardware competition is increasingly fierce. Motorola is smarting from losing pride of place in the Verizon stable of offerings, having been displaced by Apple. Nokia needs to protect sales of existing devices while positioning their future line of Windows Mobile devices.

While the name of the game is scale, these dynamics suggest the networks aren't yet playing a mature market-share game so much as they're caught in the middle. On the sell side, the hardware and platform vendors late to the party are especially motivated sellers. On the buy side, first-time smartphone buyers - the market everybody is slugging it out for - have low switching costs, while first-time tablet buyers will be more price sensitive than the first movers.

Being in the middle isn't altogether a bad thing, as it means opportunities for organic customer growth. That offers a less expensive alternative to growth through M&A, something to which Deutsche Telekom (VoiceStream), Sprint (Nextel), and Verizon (the primary beneficiary of customer defections in the wake of the Sprint-Nextel merger) can attest.

Which brings up Sprint in 2011. Data-hungry devices should perform fast and reliably on WiMAX. With a low subscriber base, there can't be as much competition for Sprint's bandwidth as there is for bandwidth on the AT&T and Verizon networks. As a means of getting customers hooked, that alone might be attractive to platform and hardware providers. Perhaps Messrs. Elop and Ballmer do a deal with Sprint because they believe networks are integral to their "3rd platform" strategy. Perhaps HP, more US-centric in its mobile offering than the rest of the field, sees the same. Or perhaps hardware and platform vendors going all out in a market share game will find Sprint an attractive channel through which to provide hardware on the cheap without appearing to be the low-value option.

If true, this makes the networks more spectator to the action than central to it. For Sprint, that makes it a marathon.

Thursday, February 10, 2011

The Tech Bubble, One Month In

In the month since I blogged that tech looks like it could be in a bubble, there have been plenty of headlines to suggest that it is:

A rise in interest rates will throw a little cold water on the tech fire. Writing in Breakingviews, Martin Hutchinson points out that low rates make capital spending attractive, but capital investments made to improve productivity (e.g., business investment in technology) stifle business hiring. He goes on to explain that the contrary is also true: when capital is more expensive, hiring becomes more attractive than investment. Interest rates may rise for any number of reasons, not the least of which is to stifle inflation, which may very well be on the rise. When they do, business preference will gradually shift from productivity investments to hiring. It behooves the tech exec, particularly in captive IT, to pay close attention to what the Fed does.

Tech firms, particularly start-ups, are less vulnerable to a rate rise than captive IT. As Robert Cyran points out in Breakingviews, the tech entrepreneur calls the shots these days as cloud technology allows the tech firm to rent, as opposed to own, sophisticated infrastructure. This makes tech far less capital intensive. If anything, as Mr. Cyran notes, tech firms have just the opposite problem: excess capital desperately seeking yield is trying to find a way into tech, only to find tech has little use for it. When liquidity declines with QE2 expiry (provided there's no QE3), tech valuations may decline, but tech firms are less likely to suffer from financial starvation.

Not so captive IT, which still owns more than it rents, and thus remains very capital intensive. Last month I mentioned that the tech exec should sweat their revenue. Interest rates are a leading indicator of revenue durability, particularly for captive IT. Know your firm's cost of capital, and know the yield of those tech investments in your stewardship, and you'll know how resilient your revenue is to Fed decisions.
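To make the arithmetic concrete, here is a minimal sketch - entirely hypothetical cash flows and rates - of how a marginal productivity investment flips from attractive to unattractive as the cost of capital rises, which is the mechanism behind the invest-versus-hire point above:

```python
# Hypothetical illustration only: how a rising cost of capital changes the
# attractiveness of a productivity investment. All figures are invented.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# $1.0m spent today to save $300k per year in labour for four years.
investment = [-1_000_000, 300_000, 300_000, 300_000, 300_000]

for cost_of_capital in (0.04, 0.08, 0.12):
    print(f"cost of capital {cost_of_capital:.0%}: NPV = {npv(cost_of_capital, investment):>10,.0f}")

# At 4% the project clears its hurdle; by 12% it destroys value, and the
# hiring alternative starts to look better.
```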

Wednesday, January 12, 2011

The Tech Sector: Bull, Bubble or Both, and What It Means for IT (Part II)

In the previous post, we took a look at market factors that suggest the technology sector could be on a multi-year bull run, in a bubble, or possibly both. In this post, we'll look at what that means for leaders of captive IT departments and tech firms.


If tech is in a bull market or bubble, it means inflation and volatility for a lot of tech businesses and captive IT shops.

Let's look at inflation first. Excess liquidity fueling a lot of tech companies will create a lot of tech jobs. So will tech investments made by traditionally non-tech firms (e.g., automakers and appliance makers are all in the software business now). Despite all the progress in global sourcing, labor and jobs aren't universally mobile. This means that demand (jobs) may outstrip supply (candidates) in specific markets. When that happens, prices (wages) tend to rise. It's reasonable to expect that salaries in tech are going to inflate.

But inflation takes many forms. In this case, a bull market for labor will inflate career paths. When there are more jobs than people, hiring standards tend to decline. Availability becomes a skill for candidates. This leads to people being over-promoted to fill vacancies, which creates misplaced expectations of competency. It is just as bad for the hiring firm as it is damaging for the person hired. The more a business hires this way, the poorer it will perform and the more vulnerable it becomes. Employees will find themselves in roles or on career paths for which they have few qualifications.

Inflation begets volatility. When there's both wage and title inflation afoot, firms are likely to see higher staff turnover. While a little turnover isn't inherently a bad thing, no firm wants to be desperately trying to fill positions in a job market where job seekers have the upper hand. Staff volatility will obviously impair operations as situational knowledge erodes along with it.

Bull markets and bubbles also create volatility in commercial technology. New technologies such as tablets, and rapid replacement markets such as smartphones, are contests for market share of new units sold. A large installed base of legacy customers isn't as valuable when the costs of change are relatively low.

Bubbles, though, tend to amplify the effect of volatility with a sharp correction. Tech marketplace battles can go on for a long, long time, especially when it's a market share game with high customer turnover. Don't underestimate the power of the sell side to buy customers, particularly if the sell side has deep pockets laden with investor capital. Don't underestimate the stars buyers get in their eyes at the idea that they'll be able to get some slick tech capability unique to their business. And don't underestimate the value-added resellers, custom appdev shops and other middlemen who will willfully exchange the hard money earned by being little dogs in the mainstream for the easy money that comes with being big dogs on the fringe. The 1980s and 90s saw a lot of fringe technologies carry on for far longer than was economically justified. Investor cash - sometimes it's just excess liquidity, other times it's just dumb money - fuels the party. The longer this cycle perpetuates, the bigger the bubble inflates, and the more severe the correction when the money runs out.

Bull or bubble, this will place a lot of demands on tech execs. What can you do?

Create resiliency in your staff.  Get in front of tech inflation by setting expectations with the CFO and head of HR. Reduce the risk of exit of key people and create broader bases of knowledge (less specialization, more generalization).  Recognize that it's one thing to fend off competition with other firms for staff you have, it's entirely another to have your staff poached by headhunters or former employees.  Be hypersensitive to staff targeting and have a plan at the ready (agreed already with the CFO, HR and key partners) should you need to aggressively retain.

Be creative in evolving relationships with customers and suppliers. Inflation and volatility will strain relationships between sell side and buy side, particularly between captive IT and tech services. A lot of relationships became genuine partnerships in 2008 as firms found creative ways to sustain contracts and projects in what were difficult times for everybody. But as the economics change in favor of the sell side, the relationship dynamics will change, too. Sell-side firms will pass salary increases on to customers. Loss of key people in delivery or relationship roles will reduce customer confidence. Benevolence and capitulation are rarely rewarded, but creativity is.

Hedge your technology exposure. Take nothing for granted (remember that CP/M was going to be the dominant OS for microcomputers, until it wasn't), and pursue opportunities on alternative platforms to hedge your portfolio of business partners and revenue.  A little exposure to what looks like a short-play platform may provide outsized benefits if the supplier succeeds or if it brings you some long-term customer relationships.

Finally, whether you're a sell-side exec or a CIO, sweat over the durability of your revenue and funding.  Look for risk in your book of business.  A customer or business sponsor prepared to invest in something today may suddenly have that funding cut by mid-year.  For example, a tech services firm doing business with a bank that holds a lot of assets that suddenly turn sour (contracting liquidity and threatening solvency in the process) may find allegedly strategic projects suddenly cancelled. We still face these risks: recall that Irish banks passed the European stress tests in the summer of 2010, only to require recapitalization before the year was out.

When tech is on the rise, it makes everybody in the sector a little more optimistic about the future: new technologies, new career challenges, and a bit more money in the bank. But a rise in tech could put the squeeze on the businesses we run. We would do well to temper our optimism with the pragmatism that there's no such thing as a free lunch.

Monday, January 10, 2011

The Tech Sector: Bull, Bubble or Both, and What it Means For IT (Part I)

There's been a sea-change in the technology sector, from counter-cyclical to pro-cyclical.

Tech, and especially tech services, went counter-cyclical a few months into the last recession (2008). Companies entered the downturn with record levels of cash. With so much uncertainty, most firms didn't bet on expansion. Instead, they elected to preserve cash rather than deploy it, and slashed payrolls to cut operating costs. Cutting payrolls meant firms needed to get the same work done with fewer people, so they spent modestly on IT to increase productivity. As a result, tech, and in particular tech services, has been counter-cyclical to the broader economy for most of the past 2 years. This created modest inflationary pressures in IT, notably on wages, but those were kept in check by the deflationary cycle affecting the host businesses. On the whole, tech went through this recent recession much as it did the 1995-6 recession.

But tech is becoming pro-cyclical. The success of both tablets and smartphones (and to a lesser extent the fringe battlegrounds of e-readers and other specialized devices), as well as the rapid maturation of cloud-based services, has created a tech hardware war, an OS war, a bidding war for tech firms, and spawned a feeding frenzy in application development. This is happening during a period of excess liquidity (a result of Quantitative Easing, low borrowing costs and other loose monetary policies), and low yields on most investments. Put simply, there's a lot of money in search of yield.

A lot of that money is now coming into tech, because tech offers investments with the potential for high yield. Or, more crassly, tech offers "gambling opportunities" for investors. This means the money coming into tech has changed from "we're investing in tech to reduce operating costs" to "place your bets on tech!" This, in turn, is changing the tech space from counter- to pro-cyclical. That's great during a bull market, but painful during a bear market.

There's no certainty of how this will play out. This could simply be the next stage of what may be a multi-year bull run for tech, similar to its 1983-2001 bull run that had several mini-cycles. It could be a bubble, with too much cash chasing more yield than will materialize. It could be both: a bit of excess euphoria during what is otherwise a fundamentally strong bull run in tech.

Such euphoria in any sector tends to come with little in the way of popular recognition of what's behind it, and mass underappreciation of how vulnerable it is. We saw this in the housing bubble, the internet bubble before it, and bubbles going back to the Dutch tulip trade in the 17th century. But this sudden rise in tech calls for as much caution as enthusiasm. This is particularly true for those who are long tech (e.g., those building wealth by building companies and careers in it), more than for those with comparatively short exposure (those investing capital in pursuit of yield). While investors can be burnt by a bubble, careers can be wrecked.

One of the things that makes tech a potential bubble now is the nature of the capital coming in. Capital has stayed largely on the sidelines for the last two years because of so much economic uncertainty. That capital, while being deployed now, is still jittery and remains at risk from systemic shocks. Problems lurk in peripheral Eurozone debt, US housing debt and US muni debt. China and India are fighting inflation while the US is trying to stoke it. The net effect is that capital is still highly nervous. It's also highly mobile, as evidenced by the volatility of many different asset classes in 2010. Tech can take no capital placement for granted: what the capital markets giveth tech, they can just as quickly taketh away.

As tech has gone pro-cyclical, it will rise and fall with the broader market. Those peaks and valleys will be amplified by the amount of capital coming in. If the broader market is characterized as "volatile" in 2011, tech could be in for a wild ride.

In the next post, we'll look at the different ways volatility manifests itself in tech. We'll also look at what a tech boom, and potential bubble, means for people leading and managing captive IT and tech firms.

Friday, December 24, 2010

Heathrow Mess is Explained by Taleb

Like many thousands of other people, I was forced to stay in London for an extra few days because weather-related factors caused Heathrow and other UK airports to close. Nearly a week after it began, thousands remain stranded.

Most analyses of why this happened have looked at how supply-side factors such as additional snowplows or seat capacity on contract could lessen the impact of an event like this. Hugo Dixon, writing in Reuters Breakingviews, suggested that a better demand management mechanism - one that creates a more efficient market for seat demand under circumstances where seats are at a premium - is just as important to consider.

I took a different perspective, which Breakingviews was kind enough to publish. Here's the abstract and a link to my letter:

Heathrow mess is explained by Taleb.
Europe's main airport is a highly inefficient market with poor and asymmetric information. Snow turned it from a utility to a casino. This wasn't a Black Swan event. But it reflects Taleb's argument that optimized systems are vulnerable to catastrophic failure.

Vulnerability to catastrophic failure makes abundantly clear we would be better served by demanding and rewarding robustness over optimization, especially from utilities.

Tuesday, November 30, 2010

Regulatory Capture and IT Governance

Industries are regulated by governments so that companies don't compromise the public interest. Regulatory agencies usually grab headlines because most regulation comes in response to nefarious actions, but that isn't always the case: people in a company may conduct their affairs in what they believe to be a perfectly justifiable manner, only for their actions to have unintended consequences for consumers or society.

In the same way, we have governance within businesses to make sure that management doesn't compromise the interests of investors. And just as it is with businesses in a regulated industry, management of a well-governed business may have a set of priorities that are perfectly justifiable in management's context, but orthogonal to investors' interests.

Industrial regulation and business governance are both poorly understood and poorly practiced. Each is also easily compromised. John Kay provided a fantastic example of how easily governance is compromised earlier this month in the FT, describing a phenomenon he referred to as "regulatory capture":

Regulatory capture is the process by which the regulators of an industry come to view it through the eyes of its principal actors, and to equate the public interest with the financial stability of these actors.


Let's think about this in the IT governance context.  We may have good governance instrumentation and a governing body that meets consistently.  But it's still easy for our governance infrastructure to be co-opted by the people it's supposed to be governing.  Mr. Kay explains how:

[T]he most common form of capture is honest and may be characterised as intellectual capture. Every regulatory agency is dependent for information on the businesses it regulates. Many of the people who run regulated companies are agreeable, committed individuals who are properly affronted by any suggestion that their activities do not serve the public good. ... It requires a considerable effort of imagination to visualise that any industry might be organised very differently from the way that industry is organised now. So even the regulator with the best intentions comes to see issues in much the same way as the corporate officers he deals with every day.


In IT governance, management provides and frames governance data.  Overtly or covertly it imposes structural limitations on the presentation of that data.  People in governance roles are all too often lulled into a sense of complacency because integrity of the messenger - management in this case - isn't in doubt.

Yet one of the most critical expectations we have of people in governance roles is that they have a broader picture than management of what should be happening, and how it should be taking place.  Perhaps management doesn't want to look bad, or they're not comfortable delivering bad news.  And all too often, management can do no better than to play the cards they're dealt (e.g., people, scope, technology or something else).  Whatever the underlying situation, we need a governing body that doesn't look at the cards in hand, but at the cards they can get out of the deck.  There's no mechanical process that enables this; it all comes down to having the right governors.

Which leads to Mr. Kay's next point, where he provides some important insight into the characteristics of a good regulator that are very much applicable to somebody in an IT governance role:

You require both an abrasive personality and considerable intellectual curiosity to do the job in any other way.


IT governance requires activist investors: people who will ask challenging and uncomfortable questions, reframe the data provided by management, and propose completely different solutions. This is a specific behavioral expectation, and a high one at that.  But, as Mr. Kay points out:

[T]hese are not the qualities often sought, or found, in regulators.


Sadly, this is all too true for IT governance as well.  

The value of governance is realized by its professional detachment. Whether you're recruiting a board for an IT investment or evaluating the people you have in one today, think very hard about their ability to act independently.

Sunday, October 31, 2010

Restructuring IT: First Steps

In this last post in the series on restructuring IT, we'll take a look at some things we can do to get going on a restructure.

The place to start is to establish a reason for the restructure that everybody inside and outside the organization can understand. Tech is inherently optimistic, and we have short memories. As a result, we don't have very good self-awareness. So it's worth performing a critical analysis of our department's success rate. That means looking at how successful we are at getting stuff into production. What is our batting average? How does it stack up against those Standish numbers? But it isn't enough to look at the success of delivery; we have to look at its impact. Is there appreciable business impact from what we deliver? Is the business better because of the solutions we put in production? These questions aren't so easily answered, because IT departments often don't retain project performance history and very often don't have business cases for their projects. A critical self-assessment, while valuable, may not be all that easy to perform.

Of course, the point isn't just to assess how we have performed, but to look at how ready we are for the future. What will be expected of us in 2 years? What business pressures will build up between now and then? How ready are we to deal with them?

To properly frame performance, we need a very firm definition of what “success” means. I worked with a firm that had a very mature phase-gated governance system. With maturity came complexity. With complexity came loopholes and exceptions. Whenever a project was at risk of violating one of the phase gates, such as exceeding its rate of spend or taking too long, somebody would invoke an exception to change the project control parameters and prevent the project from defaulting. As a result, they could report an extraordinarily high rate of success – upwards of 99%. But a 99% success rate achieved by changing the controls to fit the project's reality is a dubious record of achievement.

In addition to scrutinizing results, take a look at the top 5 constraints you believe your IT organization faces today. Of those top 5 things, very likely most (if not all) will be rooted in some behavioural misalignment with results. One quick way to get the contours of the impact of these misalignments is to bring business and IT people into a short, facilitated workshop focused on the mechanics of delivery. This will reveal how people react to those constraints (working within them, working around them, or doing things that reinforce them), and subsequently the full effect that they have on delivery.

Finally, get a professional assessment of your organization, looking at the behaviours and practices behind what gets done and what doesn't get done. It’s also important to engage business partners in this process. While we very often find IT organizations that are being outpaced by their business partners, it’s been our experience that, with a bit of concentrated effort, IT can outpace its host business. That isn’t healthy either. Ultimately, we need a firm peer relationship between the business and IT. We’re looking to create a symbiotic relationship so that responsiveness is both mutual and durable.

Doing these things will give you the classic look in the mirror, a critical assessment of the "lifestyle" decisions your organization is making today. That will allow you to speak in firm facts about why the organization needs to change, set the bar for what is acceptable and what is not, and define a target and a set of action items that will create change.

One parting thought.

I hope this series on IT restructuring has crystallized some of the thoughts and experiences you have had as an IT professional, and that it gives you perspective on the impact of industrialization on the IT industry, particularly in the people you interview and the skills and experiences they have.

Hopefully, you’ll go back to your office and think, “yeah, I remember being in a team of professional workers, and now I’m with industrial workers.” And you might think being the lone change agent is too much. Maybe if you were more senior, you could pull it off. But as a [insert your title here], you just don’t feel you can pull it off.

A quick story about a securities firm I worked with. Some years ago, the CIO was dev lead of a dozen-person team that created the next-gen platform for their trading operations. He now had 2,000 people in NY and India and wondered, “why is it so difficult to get things done around here?” He stepped into a conference room at one point where he'd sequestered the team and got a bit misty-eyed about “the good old days.” Here’s the guy running the IT department - in fact, he created most of it - feeling that same sense of frustration with the industrial model. And, ultimately, he felt trapped by it.

This is coming from the CIO.

The frustration is there, from top to bottom in IT. The people who will make change are the people who recognize the difference between industrial and professional IT. It will take time. It will take a lot of convincing and explaining. But it’s there to be done. The time is now. And you’re not alone.

Wednesday, September 29, 2010

Restructuring IT: Guiding Principles

This is a continuation of a series I left off in December 2009 on Restructuring IT. This post presents a few guiding principles to understand before undertaking a restructuring exercise.

First, don't fool yourself about your ambitions. Come to grips with what you think you want to be: a demonstrably world-class organization, or just less bad at what you do. The former is easy to say but hard to achieve. If you're wedded to any aspect of your current organization, if you think process will make your business better, or if you're concerned about making mistakes or losing staff, you're really no more ambitious than being less bad. There's nothing wrong with that. A minor restructure can have a beneficial impact. Just don't mistake "sucking less" for "software excellence".

Second, be aware of your level of commitment. Change is hard. As liberating an experience as you hope it will be, the people in your organization will find restructuring invasive, divisive and confusing. Some people will resist, some will revert. Some will exit of their own volition, and some you'll have to invite to leave. Change is tiring and frustrating. Staying the course of change through the "valley of despair" requires a deep personal commitment, often in the absence of compelling evidence that the restructure is going well and frequently against a chorus of voices opposed. Fighting the tide is never easy, even from a leadership position.

Third, don’t expect that you’re going to restructure with tools, training and certification. That won’t change behaviours. If you believe change comes about through tools and training, you should release 80% of your staff and hire brand new people every year: just put them in training and give them tools to run your business. Of course you wouldn’t do that, because you’d lose all the experience. So it is with this restructure: you’re developing new experience. Tools can make good behaviours more efficient, but tools alone don’t introduce good behaviours in the first place.

Finally, be less focused on roles, titles and hierarchy and focus instead on what defines business success and what actually needs to get done to achieve it. Tighten up governance scrutiny to verify that people are working to a state of “demonstrably done" and not just "nobody can tell me I'm not done". And prioritize team over the individual. Don't privatize success while socializing failure: incentivize the team, not each person. People focused on a team accomplishment are less concerned with individual accolade. Culturally, make clear that an outstanding individual performance is a hollow (and dubious) victory in a failed team.

The final installment in this series will cover some immediate actions you can take today to restructure.

Tuesday, August 31, 2010

One-Way Risk and Robustness of IT Projects

Writing in the FT's Long View column, James Mackintosh makes the point that hedge fund managers “appeared smarter than they really were, because they were taking a risk they did not recognize.” That’s an apt description for a lot of what goes on in IT, too.

Despite all of the risks that commonly befall an IT project, we still deal with IT planning as an exercise in deterministic forecasting: if these people do these things in this sequence we will produce this software by this date. The plan is treated as a certainty. It then becomes something to be optimized through execution. As a result, management concerns itself with cost minimization and efficiency of expenditure.

Trouble is, an operations plan isn't a certainty. It's a guess. As Nassim Taleb observed in Errors, Robustness and the Fourth Quadrant:

Forecasting is a serious professional and scientific endeavor with a certain purpose, namely to provide predictions to be used in formulating decisions, and taking actions. The forecast translates into a decision, and, accordingly, the uncertainty attached to the forecast, i.e., the error, needs to be endogenous to the decision itself. This holds particularly true of risk decisions. In other words, the use of the forecast needs to be determined – or modified – based on the estimated accuracy of the forecast. This, in turn creates an interdependency about what we should or should not forecast – as some forecasts can be harmful to decision makers.

In an IT project context, the key phrase is: “This holds particularly true of risk decisions.” We take thousands of decisions over the course of an IT project. Each is a risk decision. Yet more often than not, we fail to recognize the uncertainty present in each decision we make.

This comes back to the notion that operations plans are deterministic. One of the more trite management phrases is “plan your work and work your plan.” No matter how diligently we plan our work in IT, we are constantly under siege while “working our plan”. Developers come and go. Business people come and go. Business needs change. The technology doesn’t work out as planned. The people responsible for the interfaces don’t understand them nearly as well as they believe they do. Other business priorities take people away from the project. Yet we still bake assumptions about these and many other factors into point projections – as opposed to probabilistic projections – of what we will do, when we will be done and how much it will cost.

Our risk management practices should shed light on this. But risk management in IT is typically limited to maintaining a “risks and issues” log, so it’s never more than an adjunct to our plan.

That most IT projects have only rudimentary risk management is quite surprising given the one-way nature of risks in IT. One-way risks are situations where we have massive exposure in one direction, but only limited exposure in another. Taleb gives the example of trans-Atlantic flight times. It’s possible for an 8 hour flight to arrive 1 or possibly 2 hours early. It can’t arrive 6 hours early. However, it can arrive 6 hours, or 8 hours, a day or even several days late. Clearly, the risks to flight duration are substantially in one direction. IT risks are much the same: we may aggressively manage scope or find some efficiency, but by and large these and many other factors will conspire to delay our projects.

The fact that risk in IT is substantially one-way brings a lot of our management and governance into serious doubt. Having a project plan distinct from the risk log makes the hubristic assumption that we will deliver at time and cost, so we must pay attention to the things that threaten the effort. Given that our risk is substantially one-way, we should make a more humble assumption: odds are that delivery will occur above our forecast time and cost, so what do we need to make sure goes right so that we don't? While such a pessimistic perspective may be in direct contrast to the cheerleading and bravado that all too often pass for "management", it makes risk the core activity of management decision making, not a peripheral activity dealt with as an exception.

In Convexity, Robustness and Model Error in the Fourth Quadrant, Taleb makes the point that one-way risk is best dealt with by robustness – for example, that we build redundancies into how we work. Efficiency, by comparison, makes us more vulnerable to one-way risk by introducing greater fragility into our processes. By way of example, think of the "factory floor" approach to IT, where armies of people are staffed in specialist roles. What happens to the IT "assembly line" when one or more role specialists exit, depriving the line of their situational knowledge? Without redundancy in capability, the entire line is put at risk.

Common sense and statistical analysis both conclude that an optimized system is sensitive to the tiniest of variations. This means that when risks are predominantly one-way – such as in IT projects – it behooves us to err on the side of robustness.

Robustness is the antithesis of efficiency. Maximum efficiency of execution against a plan calls for the fewest people delivering the most output to a predetermined set of architectural decisions. Building in robustness – for example, redundancy of people so that skills and knowledge aren’t resident in a single person, pursuing multiple technical solutions as a means of mitigating non-functional requirements, etc. – will not come naturally to managers with a singular focus on minimizing cost, especially if, like the hedge fund managers James Mackintosh was referring to, they’re blissfully unaware of the risks.

So, what can we do?

First, we have to stop trafficking in the false precision of IT project management. This is no easy task, particularly in a business culture rooted in fixed budgets and rigid planning cycles, buyers of industrial IT expecting that technology labor is interchangeable, and so forth. We won’t change the landscape all at once, but we can have tremendous influence with current business examples that will be relevant to sponsors and investors of IT projects. If we change the expectations of the people paying for IT projects, we can create the expectation that IT should provide probabilistic projections and take more robust – and therefore one-way risk tolerant – solution paths.

Second, we can introduce risk management that is more sophisticated than what we typically do, yet still easy to understand. If you haven’t read the book, or haven’t read it for a while, pick up Waltzing with Bears by DeMarco and Lister. Their statistical model for risk profiling is a good place to start, quick to work with and easy to understand. Nothing stops us from using it today. Now, the act of using the tool won’t make risk management the central activity of project managers or steering committees, but adding a compelling analysis to the weekly digest of project data will shift the balance in that direction. That, in turn, makes it easier to introduce robustness into IT delivery.
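To make that concrete, here is a minimal Monte Carlo sketch - not DeMarco and Lister's exact model, and with invented estimates - showing the basic move: express each work item as a range rather than a point, simulate, and report delivery as percentiles instead of a single date.

```python
# Minimal Monte Carlo sketch (not DeMarco & Lister's exact model): replace
# point estimates with ranges and report completion as a distribution.
# All task estimates below are invented for illustration.
import random

# (best case, most likely, worst case) effort in days. Note the long right
# tails: tasks can overrun far more than they can underrun (one-way risk).
tasks = [
    (10, 15, 40),
    (5, 8, 25),
    (20, 30, 90),
    (8, 12, 30),
]

def simulate_totals(trials=10_000):
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks))
    return sorted(totals)

totals = simulate_totals()
point_estimate = sum(mode for _, mode, _ in tasks)  # the "plan your work" number
print(f"point estimate: {point_estimate} days")
for pct in (50, 80, 95):
    print(f"P{pct}: {totals[len(totals) * pct // 100]:.0f} days")

# The point estimate typically lands well below the 50th percentile: odds are
# that delivery comes in above the forecast, which is the argument for making
# risk the core of the plan rather than an adjunct to it.
```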

On that subject of robustness, Taleb observed:

Close to 1000 financial institutions have shut down in 2007 and 2008 from the underestimation of outsized market moves, with losses up to 3.6 trillion. Had their managers been aware of the unreliability of the forecasting methods (which were already apparent in the data), they would have requested a different risk profile, with more robustness in risk management …. and smaller dependence on complex derivatives.

Given the success rate of IT projects – still, according to the research organizations, less than 40% – IT project managers should similarly conclude that more robustness in risk management would be appropriate.

Friday, July 09, 2010

Separating Utility from Value Add

One of the more hotly contested subjects in the recent debate on financial services reform has been the reintroduction of Glass-Steagall. Enacted in 1933, the intent was in part to prevent banks from financing speculative investments with money obtained through deposits and lending. Because of the importance of commercial banking to the stability of the economy (and, arguably, society), it was deemed unacceptable to make it easy for a bank to take imprudent risks with money for which it has a stewardship responsibility. The law was substantially repealed in the 1990s. Quite a few people have suggested that it be brought back.

Whether it's appropriate or not for banking isn't the purpose of this blog post. But there is some thinking behind the separation of business activity that's worth considering in the IT context.

Retail banking serves a largely utilitarian purpose in an economy. Deposits give banks the capital to make loans to small businesses, write mortgages, and so on. This banking infrastructure allows a community to pool its resources to grow and flourish as it likely could not do otherwise. It also provides new businesses with capital at startup, and stabilizing cash through business cycles. Still, you don't loan out money to everybody who asks for some. If a bank makes loans to people and companies that aren't creditworthy, it puts deposits at risk. Needless to say, commercial banks have (historically, anyway) held high lending standards because they are expected to be highly risk averse. With low risk appetite come low returns.

While low returns aren't all that exciting, there's an argument to be made that low returns are just fine for this kind of banking. The mission of a commercial bank isn't to produce outsized returns; the mission is to be a financial utility, to be stable and consistent. With stability comes confidence in the financial system (a confidence underwritten by federal deposit insurance), and that confidence is a pillar of a strong society.

Investment banks are vastly different. They are, by definition, far more risk prone. While there are conservative investment banks - banks that engage largely in advisory and research and do a minimum of trading - there is an expectation that bulge bracket investment banks will produce outsized returns by taking outsized risks. They trade their clients' capital as well as their own, using complex strategies specifically to generate high yield.

Instead of producing large returns, of course, investment banking can produce large losses. Because a lot of investment banks make proprietary investments with borrowed capital (that is, they make leveraged investments), a projected windfall can quickly become a bottomless pit.

Hence one of the reasons for separating investment and commercial banking. The utility functions of commercial banking provide a fat pile of capital that can be leveraged for investment banking activity. Trouble is, there's no upside for the utility side of the bank in allowing its deposits to be exposed to outsized risk. It still pays the same rate to depositors, still collects the same rate from borrowers. For the utility, there's only downside: in a universal bank, a severe loss in investment banking puts commercial deposits at risk. Putting explicitly risk-averse capital at high risk undermines the stability of the financial system.

So, that's banking. What does any of this have to do with IT?

Just like the banking system, IT has two sides to its house: a utility side and an investment side. Comingling them hasn't done us much good. If it's done anything, it's confused the business mission of IT. We should separate them into independently operating business units.

A significant portion - maybe 70%+ - of IT spend is on utility services, things that keep a business operating. This includes things like data storage, servers, e-mail, office productivity applications, virus protection, security and so forth. Obviously, business is largely conducted electronically today, so a business needs these things. Restated, there's a lot of business that we simply can't conduct today without it.

These utilities don't provide return in and of themselves. They're so ubiquitous in nature, and so fundamental to how business is done, it's not an option to try to operate without them. They're the information technology equivalent of electricity or tap water. A firm does not derive competitive advantage from the type of electricity it uses. Nor do we measure return on tap water.

And like electricity or tap water, you don't typically provide your own. You plug into a utility service that provides it for you. Every volt of electricity and every gallon of water are the same.

It actually would seem a bit strange for most businesses to be providers of their own utilities. Still, most companies are in the business of providing their own IT utilities.

One reason they do is the sheer inertia of IT. We've injected technology into companies through captive IT departments. Nobody questions "why do we obtain these services this way?", because technology has "always been provided this way."

Another is that IT services have complex properties to them that other utilities don't. Every volt of electricity is the same, but not every byte of e-mail is the same. Some contain proprietary, confidential or sensitive information. It's not enough for a firm to outsource responsibility for the protection of that data. If data confidentiality is compromised, the firm contracting for the utility is compromised. All the commercial and service level agreements in the world won't undo the damage.

Of course, these complex properties don't make them "high value added" services. They're still utilities. They're just a bigger pain in the neck than things like electricity.

It's very likely that a lot of what we do in captive IT today will be obtained as a utility service in the future. We'll buy it like tap water, metered and regulated. Obviously, this is the business model of SaaS and outsourced services. While they're still not robust enough for every business, we're seeing advances in things like networking and encryption technology that provide a greater level of accessibility and assurance. We're getting close to (if not already well past) the inflection point where it's less attractive to underwrite the risk of providing these things captively than to get them metered.

But not everything done by captive IT is utility. The remaining 30% of today's IT spend is investment into proprietary technology that amplifies the performance of the business to increase yield. This is "high value added" because it provides unique, distinct competitive advantage to the host business. Investing in these things is one way we build our businesses, and make life difficult for the competition.

Which brings us back to Glass-Steagall: just as those two forms of banking are vastly different, so are these two forms of IT.

Dividing IT along "utility" and "value added" lines is a departure from where we are today. We've put everything from disks to development under the heading of "technology" in most companies, because we've had no other way of looking at it. Technology is still in its infancy, is still relatively foreign to most people, and we're still figuring out how to apply it in business. So anything involving technology is considered foreign to a business, and attached to it as an appendix, or a tumor.

Nor is the common division of IT into "infrastructure" and "application development" the dividing line between utility and value-add. Not all infrastructure is utility, and not all app dev is value add. Firms dependent on low latency for competitive edge are not likely to get competitive advantage by hosting their applications in the cloud. Similarly, payment processing is perhaps not something that a retail site wants to invest money into development of, so it contracts to get those services.

This is not a separation of IT by the nature of the technology, but into what technology does for the host business. That portion of the business that provides outsized return - the "investment banking" portion - is what should remain captive. The rest - the "utility banking" - should be part of facilities or operations management. The expectation must also be that this division is dynamic: today's captive data center may be tomorrow's CPU cycles obtained through the cloud if there's no performance or reliability to be gained from providing it captively.

Separating utility from value add will make IT a better performing part of the business. Because they're comingled today, we project characteristics of "investment" onto what are really utilities, and in the process we squander capital. Conversely, and to IT's disadvantage, we project a great deal of "utility" onto things that are really investments, which impairs returns.

As a business function, IT has no definition on its own. It only has definition as part of a business, which means it needs to be run as a business. The risk tolerance, management, capabilities, retention risks, governance and business objectives of these two functions are vastly different. Indeed, the "business technologist" of value added IT needs a vastly different set of skills, capability, and aptitude than she or he generally has today. Clearly, they're vastly different businesses, and should be directed accordingly.

Separating the utility from the value add allows us to reduce cost without jeopardizing accessibility to utility functions, and simultaneously build capability to maximize technology investments. Running them as entirely different business units, managed to a different set of hiring expectations, performance goals, incentive and reward systems, will equip each to better fulfill the objectives that maximize their business impact.

Saturday, June 26, 2010

A Portfolio Perspective on Requirements

A software application is not a collection of features that create business value. It is a portfolio of business capabilities that yield a return on the investment made in creating them.

This isn't semantics. There's a big difference between "business impact" and "financial returns."

Some software requirements have a direct business impact. But not all of them do, which we'll explore in a little bit. As a result, the justification for and priority of a lot of requirements are not always clear, because the language of "business value" is one-dimensional and therefore limiting. "Financial returns" is a far more expansive concept. It brings clarity - in business terms - to why we have to fulfill (and, for that matter, should not fulfill) far more requirements. Thinking about "returns" is also more appropriate than "value" for capital deployment decisions, which is what software development really is.

Why is software development a "deployment of capital"? Because a company really doesn't need to spend money on technology. When people choose to spend on software development, they're investing in the business itself. We elect to invest in the business when we believe we can derive a return that exceeds our cost of capital. That's why we have a business case for the software we write. That business case comes down to the returns we expect to generate from the intangible assets (that is, the software) we produce.

This should affect how we think about requirements. As pointed out above, a lot of requirements have a clear and direct business impact. A business requirement to algorithmically trade based on fluctuations in MACD, volume weighted average price and sunspot activity has a pretty clear business value: analysis before we code it tells us some combination of market and cosmic events leads to some occasional market condition that we expect we can capitalize on. And after the fact, we know how much trading activity actually occurs on this algorithm and how successfully we traded.

But not all requirements fit the business impact definition so nicely. We fulfill some requirements to avoid paying a penalty for violating regulations. Others increase stability of existing systems. Still others reduce exposure to catastrophic events.

This is where "business value" loses integrity as an index for requirements. Calling one activity that increases revenue equivalent to another that reduces exposure to catastrophic loss is comparing apples to high fructose corn syrup. They're sweet and edible, but that's about it.

As anybody who has ever run a business knows, not every dollar of revenue is the same: some contracts will cost more to fulfill, will cause people to leave, will risk your reputation, etc. The same is true in "business value": not every dollar of business value is the same. Translating all economic impact into a single index abstracts the concept of "business value" to a point of meaninglessness. Making matters worse, it's not uncommon for IT departments to sum their "total business value" delivered. Reporting a total value delivered that eclipses the firm's enterprise value impeaches the credibility of the measure.

Business value is too narrow, so we need to have a broader perspective. To get that, we need to think back to what the software business is at its core: the investment of capital to create intangible assets by way of human effort.

The operative phrase here isn't "by way of human effort", which is where we've historically focused. "Minimizing cost" is where IT has put most of its attention (e.g., through labour arbitrage, lowest hourly cost, etc.). In recent years, there's been a movement to shift the focus to "maximizing value". The thinking is that by linking requirements to value we can reduce waste by not doing the things that don't have value. There's merit in making this shift, but "maximize value" and "minimize cost" are still both essentially effort-centric concepts. Effort does not equal results. The business benefits produced by software don't come down to the efficiency of the effort. They come down to the returns produced in the consumption of what's delivered.

Instead of being effort-centric, our attention should be requirements-centric. In that regard, we can't be focused only on a single property like "value." We have to look at a fuller set of characteristics to appreciate our full set of requirements. This is where "financial returns" gives us a broader perspective.

When we invest money to create software, we're converting capital into an intangible asset. We expect a return. We don't get a sustainable return from an investment simply if it generates revenue for us, or even if we generate more revenue than we incur costs. We get a sustainable return if we take prudent decisions that make us robust to risk and volatility.

Compare this to other forms of capital investment. When we invest in financial instruments, we have a lot of options. We can invest at the risk-free rate (traditionally assumed to be US Treasurys). In theory, we're not doing anything clever with that capital, so we're not really driving much of a return. Alternatively, we can invest it in equities, bonds, or commodities. If we invest in a stock and the price goes up or we receive a dividend, we've generated a return.

But financial returns are at risk. One thing we generally do is spread our capital across a number of different instruments: we put some in Treasurys to protect against a market swoon, some in emerging market stocks to get exposure to growth, and so forth. The intent is to define an acceptable return for a prudent level of risk.

We also have access to financial instruments to lock in gains or minimize losses for the positions we take. For example, we may buy a stock along with a protective put to limit our downside should the stock unexpectedly freefall. That put option may very well expire unexercised. That means we've spent money on an insurance policy that wasn't used. Is this "waste"? Not if circumstances suggest this to be a prudent measure to take.

We also have opportunities to make reasonable long-shot investments in pursuit of outsized returns. Suppose a stock is trading at $45 and has been trading within a 10% band for the past 52 weeks. We could buy 1,000,000 call options with a $60 strike. Because these options are out of the money, they won't cost us much - perhaps a few pennies each. If the stock rises to $70, we exercise the calls, and we'll have made a profit of $10m less whatever we paid for them. If the stock stays at $45, we let the options expire unexercised, and we're out only the money we spent on them. This isn't lottery investing, it's Black Swan investing - betting on extreme events. It won't pay off all that often, but when it does, it pays off handsomely.
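As a sanity check on that arithmetic, here is the same trade worked through in a few lines. It mirrors the paragraph's illustrative numbers, treats each option as covering one share (as the text does), and assumes $0.05 for the "few pennies" premium:

```python
# Worked version of the paragraph's illustrative out-of-the-money call trade.
# One option is treated as covering one share, and the $0.05 premium is an
# assumed value for "a few pennies".
options = 1_000_000
strike = 60.0
premium = 0.05

cost_of_position = options * premium  # $50,000 at risk

def profit(price_at_expiry):
    intrinsic = max(price_at_expiry - strike, 0.0)  # worthless below $60
    return options * intrinsic - cost_of_position

for price in (45.0, 60.0, 70.0):
    print(f"stock at ${price:.0f}: P&L = ${profit(price):>12,.0f}")

# At $45 or $60 we lose only the $50k premium; at $70 the position is worth
# roughly the $10m the paragraph describes, less what we paid for the options.
```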

These examples - insurance policies and Black Swans - are apt metaphors for a lot of business requirements that we fulfill.

For example, we need to make systems secure against unauthorized access and theft of data. The "value" of that is prevention of loss of business and reputational damage. But implementing non-functional requirements like this isn't "value", it's insurance. The presence of it simply makes you whole if it's invoked (e.g., deters a security threat). This is similar to a mortgage company insisting that a borrower take out fire insurance on a house: the fire insurance won't provide a windfall to the homeowner or bank, it'll simply make all parties whole in the event that a fire occurs. That insurance is priced commensurate with the exposure - in this case, the value of the house and contents, and the likelihood of an incendiary event. In the same way, a portfolio manager can take positions in derivatives to protect against the loss of value. Again, that isn't the same as producing value. This insurance most often goes unexercised. But it is prudent and responsible if we are to provide a sustainable return. To wit: a portfolio manager is a hero if stock bets soar, but an idiot if they crater and he or she failed to have downside protection.

We also have Black Swan requirements. Suppose there is an expectation that a new trading platform will need to support a peak of 2m transactions daily. But suppose that nobody really knows what kind of volume we'll get. (Behold, the CME just launched cheese futures - with no contracts on the first day of trading.) So if we think there's an outside chance that our entering this market will coincide with a windfall of transactions, we may believe it's prudent to support up to 3x that volume. It's a long shot, but it's a calculated long shot that, if it comes to pass and we're prepared for it, provides an outsized yield. So we may do the equivalent of buying an out-of-the-money call option by building in scalability to support much higher volume. It's a thoughtful long shot. A portfolio manager is wise for making out-of-the-money bets when they pay off, but a chump if he or she keeps all positions aligned with conventional wisdom and a market opportunity is missed.

Neither of these examples fit the "value" definition. But they do fit well into a "portfolio" model.

Of course, just as determining the business value of each requirement isn't an exact science, neither is defining a projected investment return. Even if we ignore all the factors that affect whether returns materialize (largely what happens after the requirement is in production), the cost basis is imprecise. We have precise pricing on liquid financial instruments such as options. We don't have precise pricing in IT. The reason goes back to the basic definition of software development: the act of converting capital into intangible assets by way of human effort. That "human effort" is highly variable, dependent on skills, experience, domain complexity, domain familiarity, technology, environment, etc. But this isn't the point. The point isn't to measure so precisely that we can strain every ounce of productivity from the effort. We've tried that in IT with industrialization, and it's failed miserably. The point is to provide better directional guidance that maximizes returns on capital - to place very well-informed bets and protect the returns.

It's also worth pointing out that going in pursuit of Black Swans isn't license to pursue every boondoggle. Writing the all-singing, all-dancing login component in this iteration because "we may need the functionality someday" has to withstand the scrutiny of a reasonable probability of providing an outsized return relative to the cost of investment. Clearly, most technology boondoggles won't pass that test. And all our potential boondoggles are still competing for scarce investment capital. If the case is there, and it seems a prudent investment, it'll be justified. If anything, a portfolio approach makes it clearer what people are willing - and not willing - to invest in.

Because it gives multi-dimensional treatment to the economic value of what we do, "portfolio" is a better conceptual fit for requirements than "value." It helps us better frame why we do things, and why we don't, in the terms that matter most. We'll still make bad investment decisions: portfolio managers make them all the time. We'll still do things that go unexercised. But we're more likely to recognize exposure (are you deploying things without protecting against downside risk?) and more likely to capitalize on outsized opportunities (what happens if transaction volume is off the charts from day one?). It's still up to us to make sound decisions, but a portfolio approach enables us to make better-informed decisions that compensate for risk and capitalize on the things that aren't always clear to us today.

Friday, June 11, 2010

Short Run Robustness, Long Run Resiliency

There is no such thing as a "long run" in practice - what happens before the long run matters. The problem of using the notion of "long run", or what mathematicians call the "asymptotic" property (what happens when you extend something to infinity), is that it usually makes us blind to what happens before the long run. ...
[L]ife takes place in the pre-asymptote, not in some Platonic long run, and some properties that hold in the pre-asymptote (or the short run) can be markedly divergent from those that take place in the long run. So theory, even if it works, meets a short term reality that has more texture. Few understand that there is generally no such thing as a reachable long run except as a mathematical construct to solve equations - to assume a long run in a complex system you need to assume that nothing new will emerge.

Mr. Taleb is commenting on economists and financial modelers, but he could just as easily be commenting on IT planning.

Assertions of long-term consistency and stability are baked into IT plans. For example, people are expected to remain on the payroll indefinitely; but even if they don't, they're largely interchangeable with new hires. Requirements will be relatively static, specifically and completely defined, and universally understood. System integration will be logical, straightforward and seamless. Everybody will be fully competent and sufficiently skilled to meet expectations of performance.

Asserting that things are fact doesn’t make them so.

Of course, we never make it to the long run in IT. People change roles or exit. Technology doesn't work together as seamlessly as we thought it would. Our host firm makes an acquisition that renders half of our goals irrelevant. Nobody knows how to interface with legacy systems. The historically benign financial instruments we trade suddenly see a 10x increase in volume, with volatility off the charts. A key supplier goes out of business. Our chief rival just added a fantastic new feature that we don't have.

Theoretical plans will always meet a short-term reality that has more texture.

* * *

After the crisis of 2008, [Robert Merton] defended the risk taking caused by economists, giving the argument that "it was a Black Swan" simply because he did not see it coming, hence the theories were fine. He did not make the leap that, since we do not see them coming, we need to be robust to these events. Normally, these people exit the gene pool - academic tenure holds them a bit longer.
- ibid.

The long-term resiliency of a business is a function of how robustly it responds to and capitalizes on the ebbs and flows of a never-ending series of short runs. The long-term resiliency of an IT organization is no different.

This presents an obvious leadership trap, the “strategy as a sum of tactical decisions” problem. Moving with the ebb and flow makes it hard to see the wood for the trees. An organization can quickly devolve into a form of organized chaos, where it reacts without purpose instead of advancing an agenda. Reacting with purpose requires continuous reconciliation of actions with a strong set of goals and guiding principles.

But it also presents a bigger, and very personal, leadership challenge. We must avoid being hypnotized by the elaborate models we create to explain our (assumed) success. The more a person invests in models, plans and forecasts, the more they will believe they see artistic qualities in them. They will hold the models in higher esteem than the facts around them, insisting on reconciling the irrational behavior of the world to their (obviously) more rational model. This is hubris. Obstinately insisting on being theoretically right while factually wrong is a short path to a quick exit.

Theoretical results can't be monetized; only real results can.

Wednesday, May 19, 2010

Webinar: Being an Activist Investor in IT Projects

Please join me on 26 May for a webinar on Activist IT Investing.

An ounce of good governance is worth a pound of project rescue. Agile practices, with their emphasis on transparency, business alignment and technical completion, are enablers of better IT governance. But all the transparency and alignment in the world isn't going to do us any good if we're not equipped to pay attention and act on it.

An Agile organization needs a new approach to governance, one that makes everybody think not as caretakers of a project but investors in a business outcome. This presentation explores the principles of Agile governance, and what it means to be an activist IT investor in a Lean-Agile world.

What you will learn

  • What are the principles of IT governance?
  • What kind of governance does Agile enable and demand?
  • How do we create a culture of activist investors in IT projects?
I hope you can join me on the 26th. Click here to register.

Friday, May 07, 2010

Digital Squalor

In the not too distant past, storage was limited and expensive. As recently as 1980, 1 megabyte of disk storage cost $200. But this is no longer the case. Today, you can buy 8,000 megabytes (a.k.a. 8 gigabytes) for $1. Storage capacity is now so abundant and compact that you can record every voice conversation you’ll ever have in a device that can fit into the palm of your hand.
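
To put those two data points on the same scale, here is a back-of-the-envelope calculation (treating a gigabyte as 1,000 megabytes for simplicity):

```python
# Cost-per-gigabyte arithmetic implied by the figures above.
cost_per_gb_1980 = 200.0 * 1_000   # $200 per MB ~= $200,000 per GB
cost_per_gb_today = 1.0 / 8        # $1 for 8 GB = $0.125 per GB

decline = cost_per_gb_1980 / cost_per_gb_today
print(f"1980: ${cost_per_gb_1980:,.0f}/GB; today: ${cost_per_gb_today:.3f}/GB")
print(f"price decline: roughly {decline:,.0f}x")   # ~1,600,000x
```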

What this means is that storage is no longer a physical (capacity) challenge, but a logical (organization) one. We're maximizing the former, storing everything we can digitize. Unfortunately, we're not making much progress on the latter, as "intelligence" eludes us in an ever-expanding swamp of "data."

Let’s think about the characteristics of data, just on a personal level.

  • We have data everywhere. E-mails contain data. So do documents and spreadsheets. So do various applications, such as a local contact manager. So do subscription services, such as Salesforce.com. So do financial management tools (be it QuickBooks or Oracle Financials). So does Twitter. So do digital photos. So do news feed subscriptions. So do voicemails. So do podcasts and webinars, for that matter.
  • We have a lot of redundant data. How many different booking systems have your frequent flier numbers, know that you prefer an aisle to a window, and know that you prefer a vegetarian meal on long-haul flights? And how much of that has changed since you last edited your profile in each of those systems? Or, think about contact information. How many places do you have your co-worker's (multiple) contact details spread out: in your mobile phone? Corporate directory? Google contacts? Personal e-mail box?
  • There is data in the inter-relationships among data. This document references this spreadsheet, and both were discussed in this meeting on this date with these people. Copies of drafts under discussion at the time may be attached or referenced to the meeting invitation.
  • Our data is inconsistent. We have full contact information for some people who attended a meeting because they’re in the company directory, but perhaps we have only personal data for some because we’re connected to them via LinkedIn, and still for others all we have is an e-mail address.
  • Data has different meaning depending on the context. A contract from 2005 between one firm and another is a binding legal document in the context of that relationship. But that document is also a source of language that might be useful when we are drawing up a contract with the same people in that firm, with different people in that firm, or with a different firm altogether. Or a specific presentation from 5 years ago may have referenceable content, but at the moment we're only interested in the fact that it encapsulates a template with elements we want to re-use.
  • We lug this data around with us. Some of it we carry around with us in the file system paradigm, moving it from laptop to laptop. Some we have in our smart phones and media players. Some is stored in a managed service like LinkedIn. Some is managed for us in a service like iTunes. There have been attempts to corral and manage slices of this data: for example, consolidating contact details, e-mail history, proposals in a single CRM system. None have been runaway successes. They’re either incomplete, inadequate, or simply too much work to sustain.

And that's just a recon of our personal data. The scope of this is amplified several orders of magnitude at the corporate and societal level. To wit: marketing departments seem perpetually engaged in contact list consolidation and clean-up. Then there are all those automatic feeds set up to get everything from bond prices to today's weather to city council meeting notes.

The fact is, we already live in digital squalor. In a relatively short period of time, we’ve gone from having very little digitally stored, to having a lot digitally stored. Only, along the way we didn’t give much thought to maintaining good hygiene of it all. We have data everywhere. Some structured, some not. Some readily accessible, some long forgotten, and some we’re not entirely certain have integrity any longer. And the bad news is, we’re accumulating data at an exponentially increasing rate.

We tame the data monster through our mental memories and our synaptic processes. A memory or an idea triggers a recollection, so we know to go look for something and roughly where we might find it. Sometimes we're able to pull together distinct pieces of data - possibly squirreled away over a period of several years - to derive some useful information. But not all data is created equal, so when we go mining through data, we have to judge whether it has sufficient integrity for our purpose. Is it current enough? Is it from a credible source? Is it a final version or a draft? The bottom line is, it's human intervention that allows us to bring order out of ever-increasing data chaos.

We're going to be living in digital squalor for quite some time. There are some interesting conclusions we can draw from that.

Our principal tool for managing the data bloat is search. Search is a blunt instrument. Search is really a simple attribute-based pattern matching tool that abdicates results processing to the individual. Meta-tagging is limited and narrow, so we don’t really have much in the way of digital synaptic processes. As the data behemoth grows, search will be decreasingly effective.

But as our digital squalor expands, it presents opportunity for those who can produce a clear, distinct signal from so much noise, e.g., by bringing data and analyses to bear on problems in ways never previously done. One example is FlightCaster, which applies complex analytics to publicly available data (such as weather, current flight status, and historical flight data) to advise whether you should switch flights. It's a decision support tool providing an up-to-date analysis at the moment of decision where none existed previously.

This marks a significant change in IT. We've spent most of the past 60 years in technology creating tools to automate and digitize tasks and transactions. We now have lots of tools. Because of the tools, we also have lots of data. For the first time in history, we can get powerful infrastructure with global reach for ridiculously little capital outlay:

  • the internet allows us to access vast amounts of specialized data;
  • cloud computing gives us virtually unlimited, pay-as-you-go computing power to analyze it;
  • smartphones on the mobile internet give us a ubiquitous means to deliver our analyses.

Historically, Information Technology has focused on the "technology". Now, it's focused on the "information".

Digital squalor gives us the first broad-based tech-entrepreneurial opportunity of the 21st century. We're now able to pursue information businesses that wouldn't have been viable just a few years ago. We’re limited only by our imagination: what would I really like to know at a specific decision-making moment?

Answer that, and you've found your next start-up.

Monday, April 26, 2010

Mitigating Corporate Financial Risks of Lean IT

It's pretty well established that Agile and Lean IT are more operationally efficient than traditional IT. Agile teams tend to commit fewer unforced errors, and don't defer work. This results in fewer surprises - and with it, fewer surprise costs - in the final stages of delivery. Agile practices unwind the “requirements arms race” between business and IT, while Lean practices reduce waste throughout the delivery cycle. And Agile teams are organized around results as opposed to effort, which enables more prudent business-IT decisions throughout delivery.

This operational efficiency generally translates into significant bottom line benefits. From a financial perspective, Agile IT:
  • Can capitalize a greater proportion of development costs
  • Consumes less cash and manages cash expenditure more effectively
  • Has higher yields and offers better yield protection on IT investments
  • Is less likely to experience a catastrophic correction that takes everybody by surprise (i.e., one that appears to be a Black Swan event)

While all this sounds good, there’s no such thing as a free lunch. No surprise, then, that fully agile IT brings a new set of risks. A leaned-out IT organization capitalizing a significant proportion of its discretionary spend is highly susceptible to a perfect storm of (a) SG&A contraction, (b) IT project write-off, and (c) suspended IT investments.

The "Perfect Storm" of a Lean-IT Financial Crisis

The meteorology of this perfect storm happens more often than you might think. Consider the following scenario. Suppose at the end of the first half of a fiscal year (H1), we face the following:

  • SG&A spend on data center operations runs higher than forecast because more maintenance work is done than planned, and contractor costs rise unexpectedly
  • Effort has to be written off a capital project because the project team didn't achieve anything meaningful (and therefore there's no asset to capitalize)
  • An early stage investment is suspended pending a re-examination of business benefits

Then the CEO pops by to say that H1 results are disappointing, and asks us to cut the IT SG&A budget significantly for H2.

We now have a lot of things competing for our reduced SG&A budget in H2. Meanwhile, our capital investment portfolio is underperforming.

Our Lean IT organization faces a two-phase exposure similar to the credit crisis that struck Wall Street in 2007. Initially, we face a liquidity crisis. This quickly gives rise to a solvency crisis.

Phase 1: Liquidity Crisis

A liquidity crisis is triggered by the contraction of SG&A (also known as Operating Expense, or OpEx). Whether our business is governed by GAAP or IAS, the rules that govern capitalization of intangible assets such as software require us to define our investment intention. That is, we need to be able to explain that we're making a capital investment in software because we expect to achieve this return or business benefit. In IT, investment intention is defined by a project inception phase of some kind. Accounting rules dictate that inception has to be funded out of SG&A. What this means is that before we can spend out of a capital budget, we must spend some SG&A money first. It's also important to bear in mind that the same is true at the other end of the delivery cycle: last mile tasks such as data migration can't be capitalized; they, too, must be funded out of SG&A.

In effect, our OpEx budget is leveraged with capital expense (CapEx). A contraction of OpEx proportionally reduces the CapEx accessible to us. This puts IT capital investments at risk. If we have less OpEx to spend, we may not be able to start new projects because ramp-up activities like project inception must be funded out of OpEx. We also may not be able to get capital investments into production because the things we need to do to get them across the finish line must be funded out of OpEx. Depending on how highly we're leveraged, even a small loss of OpEx may create a liquidity seizure of our IT portfolio. This will force us to make difficult investment decisions to defend our portfolio returns.
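
The leverage effect is easy to see with a toy calculation. In this sketch, both the fraction of each project's cost that must be expensed and the OpEx budget itself are assumptions chosen only to illustrate the proportionality:

```python
# Sketch of the OpEx/CapEx leverage described above. The share of a project
# that must be expensed (inception plus "last mile" work) is an assumption.

EXPENSED_FRACTION = 0.20   # assumed OpEx-funded share of each project's cost

def accessible_capex(opex_available: float) -> float:
    """CapEx we can actually deploy, given the OpEx needed to open and close
    projects: each expensed dollar 'carries' (1 - f) / f capitalizable dollars."""
    total_project_spend = opex_available / EXPENSED_FRACTION
    return total_project_spend - opex_available

baseline_opex = 2_000_000  # assumed OpEx earmarked for inception/close-out work
for cut in (0.0, 0.25):
    opex = baseline_opex * (1 - cut)
    print(f"{cut:.0%} OpEx cut -> ${accessible_capex(opex):,.0f} of CapEx accessible")

# 0% cut -> $8,000,000; 25% cut -> $6,000,000: the CapEx we can reach
# contracts in direct proportion to the OpEx we lose.
```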

Phase 2: Solvency Crisis

Just as happened on Wall Street, a liquidity crisis soon becomes a solvency crisis. In IT, “solvency” is capability. IT departments invest in people to learn how to get stuff done both in the business context (what are the critical business drivers?) and the IT context (how do all these systems actually work together?) IT people master the situational complexity necessary to keep the systems that run the business humming. They know not only which bit to twiddle, but why.

With this in mind, think back to our two funding sources: OpEx and CapEx. Capitalizing development of IT assets is an exercise in funding salaries and contractor costs out of CapEx budgets. As described above, an IT department that experiences a liquidity seizure loses access to its capital budgets. With capital funds inaccessible for payroll, an IT department faces very uncomfortable staff retention decisions. The people who know how to get stuff done may have to be released. If that happens, the very solvency – that is, the ability of the IT department to meet business demands – is in jeopardy.

While transferring money from one budget to another may appear to be a simple way to protect IT solvency, it's an option of last resort. Capitalization distributes the cost of an investment over many years, because the asset is depreciated over its useful life. Capitalizing therefore flatters profitability in the year an IT investment is made, because most of the cost is deferred to later years. Conversely, expensing recognizes the cost of an investment as it occurs, which decreases profitability for the year in which the investment is made. Moving money from CapEx to OpEx, then, will have a negative impact on current FY profitability. "IT Impairs Earnings" is not the sort of headline most CIOs aspire to see in the annual report. In fact, going hat in hand to the CFO is a career-limiting move for the CIO.
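
A toy example of why the transfer hurts current-year earnings; the investment size and the straight-line, five-year depreciation schedule below are assumptions, and real depreciation policies vary:

```python
# Worked illustration of the capitalize-vs-expense point above. The investment
# size and the straight-line, five-year depreciation schedule are assumptions.

investment = 10_000_000
useful_life_years = 5

hits_pnl_if_capitalized = investment / useful_life_years  # year-one depreciation charge
hits_pnl_if_expensed = investment                          # the whole cost, this year

print(f"capitalized: ${hits_pnl_if_capitalized:,.0f} charged to this year's P&L")
print(f"expensed:    ${hits_pnl_if_expensed:,.0f} charged to this year's P&L")
# Shifting this spend from CapEx to OpEx pulls an extra $8m of cost into the
# current year - which is why it depresses current-FY profitability.
```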

Mitigation

Exposure to this perfect storm can be mitigated through a variety of mechanisms.

One is to hedge the project portfolio by bringing several investments into the early stages of delivery and then putting them into operational suspense. This means making a deliberate OpEx expenditure at the beginning of a fiscal cycle (before the risks of OpEx impairment are realized over the course of the year) to fund multiple project inceptions, and then rendering some of those investments dormant. This diversifies the IT project portfolio, allowing IT capability to shift among projects should one or more of them be cancelled.

Another is to align the project financing window with the Agile delivery window. A lot of this risk results from the incongruity of a long budgeting cycle paired with short Agile delivery windows. Following the lead of the sustainable business community, businesses should move to adopt micro-finance for IT projects. This is very difficult to achieve. Among other things, it requires an active investment authority (e.g., an investment committee actively involved in portfolio reviews) as well as a hyper-efficient inception process.

Yet another is to encourage people to think like "active traders" as opposed to "passive investors". Each person must look for "trades" that will improve the team's overall position. This can be anything from the priority of specific requirements to the composition of the team itself (e.g., aggressively rooting out net-negative contributors).

Finally, and most importantly, we’ve learned that Agile IT requires Agile governance. We may have all the data in the world, but optimal investment decisions don’t happen of their own volition. Just as Agile delivery teams require tremendous discipline, so, too, does the Agile executive. Liquidity and solvency crises are averted not through mechanical processes, but through meta-awareness of the state and performance trajectory of each investment in the portfolio.

Sunday, March 21, 2010

Supplying IT Mercenaries

Last month we took a look at the different types of staffing in IT, using Machiavelli's book The Prince as a guide.

Buyers of forces, be they military or IT, have long been advised against employing mercenaries. Strangely enough, nobody has paid this counsel much mind. The buy side still buys mercenaries, more than ever. Just have a look at your own sales lead list. Lots of demand for short-term specialists.

So what’s an IT supplier to do?

Let's look at this from the perspective of a "supplier of forces" that wishes to be a sustainable business, one that aspires to do business for many years. In the parlance of Machiavelli, we want to look at this from the perspective of a firm that wishes to be an independent "state."

Supplying Mercenary Forces

For IT firms, there are always mercenary opportunities, because there are always buyers looking to fill highly specialized roles for some period of time: an Oracle financials expert, an iPhone app developer, an interim project manager, a Sharepoint specialist.

To the supplier of forces, a mercenary opportunity might look attractive because it appears to offer short-term placement, outsized income, and few strings attached. This is almost always illusory. In fact, mercenary work can cause more harm to the supplier than it’s worth.

Mercenary work is income, not wealth.

When a supplier firm is in need of income, mercenary work can appear to be especially attractive. But income is not wealth.

Income pays the bills, but most income streams are not sustainable. Wealth sustains a business. The sell-side firm accumulates wealth in many ways, including intellectual property, a well-honed social system for delivery, people who are highly capable or have deep industry knowledge, and referenceable, long-term clients. Mercenary jobs do not contribute to the development of any of these. This stands to reason, as the mercenary buyer is looking to exploit expertise, not contribute to its development. As a result, mercenary opportunities don’t contribute to wealth. They are income, and little else.

Income can be useful depending on the needs that the supplier has. However, it must be recognized for what it is, and not confused with wealth.

The cost of mercenary income is very high.

Buyers of mercenaries rent knowledge and expertise in pursuit of their own agenda. In so doing, they exploit, but do not build, the wealth of suppliers.

Let’s think back to Machiavelli for a moment. Machiavelli wrote in terms of princes and the states they govern. In Machiavellian “sovereign state” terms, if a state dispatches too many of its best forces on mercenary missions, it will be unable to defend its home and advance its agenda. The same applies to the would-be sustainable IT supplier.

Suppose a sell-side firm chooses to retain its best and supply untrained or unskilled forces into a mercenary mission. The buyer of mercenaries will recognize the skill level of the forces supplied and conclude they are not getting value for money. The buyer will complain and even threaten the supplier (seeking damages, seeking not to pay, etc.). This means mercenary work draws down senior staff almost exclusively.

Being forced to dispatch senior staff is costly as it stunts the development of the sell-side business. Mercenary engagements rarely offer opportunities for the supplier to incubate new capability as can be done when deploying on “one’s own” missions. By extension, because they’re on mercenary missions, senior staff are unavailable to the sell side firm for the “one’s own” missions – developing deep customer relationships, industry knowledge or intellectual property – that build a business. This makes the sell-side business vulnerable and weak.

Sacrificing business development to accept jobs on offer is to trade wealth for income. That, by definition, makes it expensive income. It is dangerous for the supplier to get caught up in the attractiveness of income, especially if they lose sight of wealth-producing activities.

The worst mercenary engagements are destroyers of wealth.

The people you send on a mercenary mission may not return from it. For example, they may elect to quit and work for somebody else. Re-acquiring that capability is expensive: you can't just hire senior staff off the street; it takes time to recruit, train, build experience, and mature somebody into one of your most senior people.

But losing somebody on a mercenary mission is not just a loss of capability, it's an erosion of wealth. Losing senior staff in whom you've invested undermines the fabric of the supplying firm, especially if that person was a cultural icon, a strong leader, had many years of accumulated experience, or was woven into the synaptic social processes of the organization itself. This form of wealth destruction in mercenary missions is particularly damaging because it results not from the pursuit of the supplier's agenda, but from the pursuit of somebody else's. That is, it's not lost in pursuit of wealth, it's lost in pursuit of income. Income doesn't compensate for a destruction of wealth.

If you are going to put capability at risk (e.g., put people in a situation where they may be pushed beyond their limits), it is far better to have wealth to show for it.

Mercenary engagements encourage defection among the ranks.

Mercenary engagements favour the independent contractor more than they favour a firm that supplies mercenaries.

Suppose a member of your forces recognizes a mercenary situation for what it is, and furthermore that it’s likely to be a long-term mission. Since the situation is not likely to change, he or she might as well make the best of it for themselves. By becoming an independent contractor, they can negotiate more favourable terms with the buyer. This is usually nothing more complex than availing themselves at a lower cost to the buyer than their current host firm - that is, your firm - bills them out for. As an independent contractor, they’ll be well positioned to strike this bargain, especially once the mission is well under way and they’re already part of it.

The individual has the upper hand over the supplier firm because realpolitik trumps principles. Even with contractual covenants designed to prevent a person from going independent and taking a customer with them, the supplying firm will typically not enforce them in mercenary circumstances. The sell-side firm has already sacrificed pursuit of its agenda in pursuit of income. Also, the sell-side firm won't value mercenary work as highly as it will "one's own" or "auxiliary" work. Consequently, the sell-side firm won't risk future income from the buyer by playing the "principles of conduct" card with the would-be independent contractor. And both the independent contractor and the buyer know this.

Mercenary engagements can quickly become high maintenance.

If you send inferior or inexperienced forces, the buyer is likely to reject them, demand replacements, and possibly seek damages. These can range from a refusal to pay for those initially fielded to expenses related to any transition work undertaken.

You may staff experienced forces, but they may run into factors that curtail their effectiveness: the work environment, the tools they must use, and the aptitude of those with whom they must work. The buyer of mercenaries is usually not inclined to respond to the demands of the mercenary, so they are ignored. The mercenary, unhappy and restless, may cause all kinds of disruption directly to you, or to the buyer (which will find its way back to you). The mercenary may also resign themselves to the situation and mentally check out. This will undermine the sense of value the buyer believes they are acquiring ("I expected more leadership from your people..."). In either case, you will be spending a lot of time with the people deployed in a mercenary situation. On top of it, the buyer will be reluctant to pay, which will delay cash flow.

Alternatively, you may staff an experienced person, but one with the wrong personality. Personality conflicts are often not recognized for what they are, and personality problems completely obfuscate the landscape. So whether suitable to task or not, the buyer will claim he or she is not receiving value for money. In this case, the buyer may ask the supplier to intervene in integrating the mercenary, and will often communicate barbs or jabs stated by his or her "own" forces to undermine the credibility of the mercenary. The buyer may seek to renegotiate terms.

All of these increase the supplier’s cost of doing business, which erodes the margin on the mercenary income.

Mercenary engagements are very rarely closed-ended as promised.

Rarely does a patient tell a doctor the prognosis, what procedures to perform, what staff to have, what medications and treatments to prescribe, and so forth. In fact, in medicine, we do the opposite: doctors provide independent assessments, and proceed accordingly with the patient.

Yet mercenary engagements are defined by the buyer, not by the supplier. Mercenaries are rarely granted an opportunity to assess the battlefield, because they’re simply signing up to fight in somebody else’s battle.

Successful, clean extraction from a mercenary mission requires that the mercenary perform relative to the buyer's expectation of the mission, and that the mercenary can explain how he or she has delivered consistent with that expectation. This requires the buyer to accurately and completely articulate the problem space and how the mercenary will contribute to its resolution. Given the intrinsic optimism and selective amnesia that buyers of mercenaries typically have, it's a stretch to assume that the buyer's definition of the mission will have any bearing on the reality of what can, let alone what should, be done. The buyer has self-prescribed treatment, and most often, the prescription is well wide of the mark. This will obfuscate the definition of "success" for the mercenary mission, and make it difficult for the sell-side firm to conclude and collect.

The buy side may also wish to retain a mercenary for an indefinite period. People often buy mercenaries because their own forces are not performing well. The buyer can become unusually attached to a mercenary because this is the one person who speaks with clarity and authority and gets things done. The buyer may be unwilling to let the mercenary go, offering both extension and increased income. They may drag their feet on extraction procedures such as hand-off of mercenary work to their own people, even if this comes at a cost to their own business. This positive vibe with a buyer may make the "income" more palatable, but it remains income just the same. And that vibe can just as quickly turn toxic due to some change in the buyer's situation.

Regardless the circumstance, a clean extraction from a mercenary mission is the exception rather than the rule.

Very few IT firms succeed as purely suppliers of mercenary forces.

Gaining a reputation as a "good mercenary" is not necessarily wealth building. It may create more mercenary opportunities, and it may allow you to increase your fees, but it will not create wealth-generating opportunities.

Mercenary skills tend to be highly specialized. The mercenary must be very fluent with a specific technology, a specific technique, or the specific nuances of how a particular company works. But as technology goes through cycles of obsolescence, as management fads come and go, and as companies are bought and sold, mercenary skills are high value for relatively short periods of time. The mercenary must therefore keep skills current and up to date and be in a position to influence technology cycles and management fads.

Conversely, there are few opportunities for generalist mercenaries. There may be people who can figure out a technology or collection of systems given time, but the buyer of mercenaries isn’t looking for generalist skills. They’re generally looking to have a very specific problem (e.g., involving a very specific collection of technologies) solved.

As mentioned above, mercenary work lacks characteristics of sustainability. Mercenaries are brought in to perform closed-ended jobs. While a job may command a high income, it is temporary by definition. The mercenary is removed from the situation at the first opportunity. Using the parlance of Machiavelli, once the battle is over, the mercenary will not be invited to be a “colonist.” The mercenary must therefore have a secure home to which to return and the opportunity to ply his or her trade elsewhere.

This means that a mercenary must always be on the lookout for the next opportunity, which usually means a different buyer. Because mercenary work tends to be very challenging and consuming, the mercenary can usually only go looking for new opportunities once the job at hand is completed. A mercenary is truly fortunate if he or she has a strong enough network that opportunities seek them out. On top of it, a mercenary must constantly evolve his or her skills to remain current, something difficult to do when deployed as a mercenary as opposed to an “auxiliary” or “one’s own” force.

Didn't the Swiss make it work?

What of Switzerland? Famous throughout the centuries as suppliers of mercenary forces (it wasn't uncommon for Swiss forces to be engaged on opposite sides of the same battlefield), the Swiss were notoriously good mercenaries and converted mercenary income to wealth. Do they not offer a model for the would-be supplier of mercenaries? The Confoederatio Helvetica benefits from a naturally defensive geography. It is difficult for an invading force to mount an offensive, as it's tough to win an uphill battle and then have sufficient forces remaining to sustain the victory. Switzerland has abundant natural resources, notably water, meaning an opposing force isn't going to win a war of attrition. A victor would have the unenviable challenge of administering a government over the fiercely independent Swiss. All in all, the Swiss could commit a greater percentage of their population to mercenary pursuits without putting their own agenda (rather, the agenda set by each Canton) at great risk. Arguably, this was how the Swiss advanced their agenda.

While this model worked, it was also highly situational. There weren't a lot of other regions of Europe that had so many factors creating a natural invulnerability, let alone a population that sought principally to form a sustainable and symbiotic relationship with it. While it's not out of the realm of the possible, there aren't too many businesses today benefiting from market forces that make them similarly sustainable.

The sell-side has to know what it’s buying in a mercenary opportunity.

Clearly, the sell side buys into a mercenary situation, just as the buyer is buying mercenary forces.

The buyer of forces will often try to represent a mercenary opportunity as something other than "mercenary" to the supplier of forces. This can be innocuous: perhaps they don't understand the distinction themselves, or this is how procurement has taught them that services are contracted. It can also be malicious: a buyer can willfully deceive a supplier, perhaps because they're engaged in a political battle with other people in their business.

Mercenary work is still the most prevalent form of demand on the landscape. There is internal pressure to enter into these engagements as many people in the sell-side business will champion them for reasons ranging from hitting quarterly numbers to the brand association of doing business with those firms. Just ask yourself going in: where's the line between getting a “foot in the door” to build a business relationship and simply opening a short-term income spigot?

Throughout time, the sage advice to those on the buy side wishing to press forward an agenda as an independent "nation state" has been to avoid mercenaries. This advice is just as applicable to those on the sell side who aspire to be sustainable businesses pursuing an agenda of their own.