I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Thursday, December 24, 2009

Restructuring IT: Anticipatory Responsiveness

Michael Milken, the former junk bond king, has a charitable foundation. A few years ago his foundation published research concluding that seven chronic illnesses - things like diabetes, heart disease and lung cancer - are 70% preventable through lifestyle change. If people made better lifestyle decisions, doctors could focus their energies on enhancing and improving the quality and duration of life. But they can’t, because people create so much “remediation work” for them to do through poor lifestyle choices.

This is an apt metaphor for IT. We do a lot of things, such as introduce situational complexity and technical debt, that pile on the work we have to do.

This is at the heart of what we have to pay attention to in our restructure from industry to profession: if we know that we are not doing things that create more work, and if we also know that we are doing things that consistently produce quality outcomes, we’re much less likely to self-inflict more work.

Restructuring IT requires us to have a different set of expectations. When we follow a traditional restructuring led by org charts and budgets, we take behaviours for granted. We simply can’t. Like US policy on nuclear disarmament in the 1980s, we must trust, but verify.

The good news is, behaviours are actionable. There are things we can do to bring about the right behaviours in an organization. The bad news is, this is vastly different from everything we’ve been taught about reorganization.

The purpose of this restructure isn't to assign people into roles, but to build an IT organization that is durably, anticipatorily responsive. That’s a mouthful. “Durable anticipatory responsiveness” is the extent to which we’re consistently able to do things that allow us to get things done – complete things, useful things, not just bit twiddling. But it isn’t enough to just “get things done.” We have to avoid launching any self-targeted missiles, like creating high-maintenance code and therefore high-cost assets. So being responsive means doing useful things, and it means not making work for ourselves.

The “anticipatory” bit means that we’re doing things that allow us to look ahead and quickly figure out where we’re at risk, or where we have an opportunity. Anticipation is important. We’re not simply waiting around for things to happen. That was the spirit in which IT applied Rapid Application Development: we tried it, and it let us be better, faster, cheaper… pick any two. We need to anticipate. We need to get in front of things. So dealing with whatever comes through the door isn’t enough; we must also influence the business agenda in a meaningful fashion. That means positioning IT as a peer of the business, not in a power struggle with it or subservient to it.

To restructure this way, we have a model. We look at 10 practices, each decomposed into 6 stages, that allow us to assess a team, department or organization. Each practice is explained in sufficient detail to allow us to identify how a team gets things done today, how people behave. We recognize there are behaviours that inhibit responsiveness (“regressive” behaviours), and there are increasingly aligned behaviours. Especially in the early stages of restructure, we want to establish basic collaborative behaviours, because we’re creating new work patterns among people in a team. We're building “muscle memory” of how people in a cross-functional team must work together.

Fundamental collaborative behaviours cannot be taken for granted. I was working with a program team last year that had hundreds of people working in different technology silos, delivering a solution that impacted 4 distinct lines of business. Originally, they were spread out across the building. The only physical organization they had was by area of technology (client-side devs sat together, server-side devs sat together, QA sat on a completely different floor, etc.) So we organized people into line-of-business teams and co-located them in the same physical space. But changing physical layout wasn’t enough to change behaviours, because somewhere along the way people ended up working on missions divergent from that of "develop a software asset."

One example of this was that teams were in the "defect tracking" business more than they were in the "defect fixing" business. Following the relocation, QA analysts were sitting next to developers in the same team, but rather than talk to each other, developers and QA continued to work independently and communicate exclusively via the defect tracking tool. Defect reports, questions, "non-replicatable" and "ready for retest" statuses and so forth were all communicated via the tracking tool. Not surprisingly, this led to re-opens, failed retests, questions, requests for additional documentation, and all kinds of other hand-offs. That not only made the remediation process inefficient, it created very long histories of defect tracking activity that were more noise than signal, as much of the data captured was irrelevant to the actual problem.

Because it was more important to show effort than results, they weren't behaving as a team delivering a software asset; they were behaving as a team that needed to track details about defects. By asking people to collaborate on solving defects at time of discovery, we were able to make defect repairs available sooner. In fact, it wasn't unusual for a repair to be available in a build in less time than it took to submit an initial defect report. Getting into a state where this was the de facto behavior did not happen simply because people were physically co-located and had some collaboration tools. It required active intervention by management working directly with developers and QA solving specific problems in a collaborative manner.

Behaviour change is not a one-time action. Team circumstances change: people come and go, technology is upgraded, business needs change, etc. That means we must constantly pay attention to behaviours if they are to be durable. To do that, we can apply the model as often as we need - annually, quarterly or monthly - to ascertain whether people are making good "lifestyle" decisions. This allows us to assess the extent to which we are restructuring the fundamental organizational behaviours to focus on results as opposed to effort.

However, we must be careful. Competent assessment is a capability, not a task. Assessment is not done by questionnaire or checklist, and "objective self-assessment" - asking teams to critically analyze their own behavioural restructure - is most often wishful thinking. It requires an experienced touch to apply a combination of archeology and sociology to separate the opinion of how people aspire to work from the fact of how they are actually working. Creating a competent assessment capability requires investment, but it is critical to success as the assessments provide the telemetry of restructure.

We now know that to be a results-based organization, we must reorganize around behaviours. We have a good idea of what those behaviours are from an established model. We can use this model as both guide to and instrumentation of our restructure. But before we go off in pursuit of restructure, we must first internalize some guiding principles. We'll cover those in the next installment of this series.

Monday, November 09, 2009

Restructuring IT: Organizing for Results

Up to this point in this series on Restructuring IT, we’ve looked at how IT has adopted an industrial model. IT has achieved scale, but at the cost of results, as is clear from the low rate of success of IT investments. We'll now take a look at how we can organize IT for results.

Professional versus Industrial

IT needs to reorganize for results, not scale. Organizing for results requires professionalism as opposed to industrialism.

People in industrial and professional situations each use drills, but you wouldn’t staff an assembly-line worker in an operating room to get "capacity" on a surgical team.


To get professionalism, we need to organize less to try to be predictable (which will always escape us) and more to be responsive and accountable.

Let’s think about the things that would professionalize IT.

  • Instead of scrambling just before a deployment, what if every solution build acted as an internal gatekeeper of quality, continuously executing to validate technical, functional and non-functional completeness of everything we do?

  • Instead of trying to “inspect quality” into IT, what if testing were a team responsibility, automated and integrated into what we do daily?

  • Instead of watching big-up-front designs morph from whiteboard masterpiece to over-engineered Frankentecture, what if teams had the flexibility to take design decisions in near-JIT fashion, with a minimum of duplication and the fewest moving parts possible?

  • Instead of a “requirements arms race” between business and IT, what if we were able to accommodate continuous business involvement, facilitated to enable continuous change management and adaptive project management?

  • Instead of standing up volumes of impenetrable and unactionable requirements, what if we had short, business oriented requirements that could be quickly steered through development and QA and into production?

  • Instead of sending round reams of paperwork for PMs to fill out in little more than CYA exercises, what if we were able to govern IT non-invasively, ascertaining from our delivery teams whether they’re delivering value for money, and working in accordance with expectations?
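The first two practices above boil down to one mechanism: a build that runs every validation gate in sequence and stops at the first failure, so nothing advances that hasn't been verified. Here is a minimal sketch of that gatekeeper logic in Python; the gate names and check functions are hypothetical stand-ins for real steps such as compilation, automated tests and static analysis.

```python
# Sketch of a build acting as a quality gatekeeper: run every
# validation gate in order and stop at the first failure.
# The gates below are hypothetical stand-ins for real checks
# (compilation, unit tests, functional tests, static analysis).

def run_gates(gates):
    """Run (name, check) pairs in order.

    Returns (True, None) if every gate passes, otherwise
    (False, name_of_first_failed_gate)."""
    for name, check in gates:
        if not check():
            return (False, name)
    return (True, None)

if __name__ == "__main__":
    gates = [
        ("compile",       lambda: True),   # e.g. invoke the compiler
        ("unit tests",    lambda: True),   # e.g. run the test suite
        ("static checks", lambda: False),  # e.g. lint / complexity checks
    ]
    ok, failed = run_gates(gates)
    print("build passed" if ok else "build failed at: " + failed)
```

A real pipeline would shell out to the compiler, test runner and analysis tools rather than call lambdas; the point is that every commit passes through every gate, so "done" is verified continuously instead of inspected just before deployment.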

We have the means by which to do all of these things. Today. Right now. But we can’t simply will them into place. To get these things, we need to restructure IT.

What does it really mean to “restructure”?

Before we discuss what restructuring IT is, let’s first understand what it isn’t.

Restructuring IT isn’t a question of org charts. All too often, new reporting lines just rearrange the deck chairs on the Titanic. It isn’t about budgets, either: we can’t do more with less, because there isn’t that much less with which we can get any more done.

How about the management cheerleading of recent years, things like “be close to your customer” and “have fewer hand-offs in your work processes”? When folks like Tom Peters started putting those messages forward in the 1980s, they were a wake-up call that business had gone adrift. And yes, we need to restructure with this in mind. But we can’t have reorganization by platitude. We need something actionable.

Restructuring IT is about behaviours. Changing behaviours away from industrial thinking to professional thinking is a pretty big shift. Among other things, it means each person is responsible not just for doing the tasks they’ve been assigned, but doing what is necessary for the project to succeed.

Think back to the surgical team metaphor. As my colleague Greg Reiser points out, the surgical team collaborates toward a shared objective, improving the quality of life of the patient. Their primary goal may be to graft blood vessels that bypass a coronary artery blockage, but their objective is to prolong the life of the patient. What does the team do when they discover, as they almost always do, that the condition of the heart and surrounding tissue is not exactly as they expected? They apply their professional expertise and work as a team to adapt in order to achieve the primary objective. This is how professionals behave.

It doesn’t help, of course, that there are so many headwinds to getting things done in IT, ranging from a constant cycle of upgrades that change points of integration to an abundant supply of people without the “complete” gene in their DNA. Results are hard. Finishing stuff is hard.

This raises a critical behavioural question: why take the risk of pursuing “results”? Why put yourself on the line, underwriting success with your personal guarantee, especially if you have a built-in scapegoat? I met with a team last year where the PM, BA and dev lead knew a project was going to fail, because the development team simply didn’t have the capability. The project leadership didn’t make any effort to change staff, because the decision of whom to staff was somebody else’s: people were supplied to them by procurement. The project was already conspicuously late in starting, so their priority was to get started. Never mind that they knew it wouldn’t finish. In the end, they could say they simply played the hand they were dealt by the organizational structures – in this case, an industrial-inspired procurement function – that existed to make IT effort “cost effective.”

So rather than work to success, rather than pressing the button and stopping the line and saying “this project is going to fail and we need to do something about it before we make a commitment of capital and time” they simply got on with the work knowing they were going to fail.

This is all too common. There is structural disincentive to achieve results in a lot of IT organizations. This disincentive is supplemented with soft scrutiny. Most measures of “complete” amount to little more than people working to a state where nobody can tell them they’re not done, instead of working to a state where somebody can conclusively tell them they are done.

So what makes us think people will make the effort to get things done?

It also makes you wonder: how much of this is going on in your IT organization today?

In our next installment, we'll take a look at how, specifically, we restructure for results.

Friday, October 16, 2009

Join Your Peers at an Agile Governance or Budgeting Event In October

October is normally a heads-down month. In addition to being in the home stretch of meeting our yearly objectives, we must dedicate cycles to shaping, communicating and justifying our plans for next year.

October is also a busy month for information sharing. The thoughts and ideas that have percolated through the year reach their maturity about this time, and we want to share them before everybody goes on their winter holidays. ThoughtWorks has no shortage of events coming up, and I am privileged to be a part of many of them.

If you’re in Chicago on Tuesday October 20, ThoughtWorks is hosting a panel discussion on the Budgeting and Financial Implications of Agile. On the panel are experienced IT leaders who head strategy, portfolio management and application development for their respective businesses. They've come to terms with balancing short-term operational flexibility with long-range budget forecasting. Stop by the Aon Center at noon if you can, but please register before you do.

Our first webinar in the Agile PMO series, Real Time Metrics, was very well received. We’re pleased to present Real Time Governance, a follow-on webinar that extends the concepts to their next stage of evolution. I’ve had the opportunity to present the Governance material with my ThoughtWorks colleagues Jane Robarts and David Leigh-Fellows in Calgary, Toronto (which was recorded and is available on InfoQ) and San Francisco. We’re broadcasting this next as a webinar with a full Q&A session on Thursday, October 22nd. The response has again been very enthusiastic, but with virtual seating at a webinar there are unlimited spaces, so please feel free to register and attend.

Members of the Project Portfolio Management Professionals association attended our Real Time Governance presentation in San Francisco last month and invited us to present it to their membership. I’ll be presenting to the PPMP via webinar on Friday, 23 October. If you are an IT leader with project portfolio management responsibility today, I strongly encourage you to look into this association. Take a look at their website or reach out directly to them for an invitation.

If you’re in Dallas on the 27th, I’m hosting an executive roundtable on Financing and Budgeting Agile projects. ThoughtWorks hosted a similar event in Chicago back in August. The people who attended that event have self-organized a micro-community. We’ve had a follow-on event and lots of peer-to-peer conversations as we mutually educate and collaborate on the unique funding and forecasting challenges and opportunities presented by Agile. Reach out to me directly if you're interested in attending the Dallas roundtable.

Finally, ThoughtWorks is sponsoring Agile East at the end of October in Philadelphia on the 29th and New York on the 30th. We have a powerhouse lineup of experienced, thoughtful practitioners, including Martin Fowler, Graham Brooks, Premanand Chandrasekharan, Carl Ververs, Joe Zenevitch, Shyam Mohan, Greg Reiser, Andy Slocum, Tiffany Lentz, Manu Tandon and Alla Zollers. This is an outstanding group of people and I’m humbled to be a presenter among them. I’ll be giving the noon keynote on Budgeting and Financial implications of Agile. Specifically, I'll be looking at how Lean and Agile allow us to maximize capex but require us to be extraordinarily diligent to prevent the perfect storm that can create an IT liquidity - and potentially IT solvency - crisis. I can’t stress enough what a stacked lineup of people this is. Make plans to be in Philadelphia or New York at the end of the month, just be sure to register.

Friday, September 25, 2009

Restructuring IT: The Detroitification of IT

In previous installments of this series on restructuring IT, we looked at how IT has adopted industrial practices as it has gone in pursuit of scale. As a result, IT bears striking resemblance to Detroit automakers. Let's look at some common characteristics.

Sub-Optimal Quality

Detroit suffers its periodic crises of quality. There was a joke that made the rounds during the 1970s that you know you have an American car when you get the factory recall notice in the mail. In much the same way, industrial IT is notorious for low technical quality of what it produces.

Long Time-to-Market

Detroit has excessively long product development times. It takes years to go from idea to availability. Former Chrysler CEO Lee Iacocca famously went on a tirade when he wanted to see what the LeBaron would look like as a convertible. His design manager told him something to the effect that they first needed to do a scale model, then a wind tunnel, then the prototype, so they’d have it for him in about 6 months. Iacocca’s reply? “Just take a chain saw to the roof!” How many CEOs look at their IT department and ask, “why is it so hard to get anything done in IT?”

Too Much Leverage

Detroit got by for so long because it could “pull demand forward.” Very few people pay cash for their cars. Instead, they finance their purchases: they’re buying a car based on future earnings, not on accumulated savings. Think about what happens in industrial IT, in terms of capability. There’s far less supply of “bankable capability” than there is demand. Instead, the industrial IT model looks for capacity. It puts people new to the IT industry in a position to put code on the line, learning as they go. In effect, the industrial IT model borrows against future capability development. There are even a few firms that have bet their entire business on this model.

As my colleague Greg Reiser pointed out recently in a webinar we produced for cio.com:


The extreme case of this is the large IT outsourcers that have rapidly grown over the past 20 or so years. Many firms have actually bragged about their ability to add over 1,000 new professionals per month! Now while I can comprehend the ability of the US Marine Corps to add new soldiers at that rate, I personally cannot comprehend how a professional services organization can effectively add, develop and assimilate knowledge workers at such a pace. So I am not surprised when clients of these firms complain about the shallow depth of talent assigned to their projects. Remember, although the Marines put all recruits through a rigorous 12-week training program, every single one of those recruits is still just an entry-level soldier at its conclusion. It can take years to train and groom people for more challenging roles. Why should you expect different for software engineers?

Producing Solutions People Don't Want, but Have No Choice But To Use

Finally – and this one is arguably a bit of a stretch – Detroit has suffered eroding market share for many years. In many cases, they’re turning out cars people don’t want to buy, year after year. Increasingly, the people who do buy the cars are the people who have to buy the cars: that is, people in the automotive ecosystem. Sound familiar? IT stands up solutions that business users don’t want but have no choice but to use, because they're in the corporate ecosystem.

A Troika of Trouble

There are three factors accelerating the rate of Detroitification of the IT industry.

  1. We have a capability shortage. For one thing, we’re not developing “heroes” in sufficient numbers to prop up this system, the folks who can jump among silos to do the unperformed tasks lost in the handoffs that happen in specialization. For another, IT isn’t the destination employer for people that it once was. (And by the way, neither are automakers.)
  2. We’re overloading on “situational complexity.” To get something done in most IT shops a developer has to be fluent in an esoteric landscape of existing systems, policies and procedures. Somebody might be the smartest developer in the world, but they'll make no impact unless they invest time in mastering the esoterics. That’s not attractive knowledge for somebody to acquire, because it’s not portable. There's not much value in having “I learned all the weird stuff necessary to get things done in this specific functional area of company x” on a resume. It’s also a frustrating experience, as very often the people alleged to have the right information aren’t really authorities themselves.
  3. Quality is assumed. Solutions delivered today are laden with technical debt, but very rarely is anybody looking for it. Apply metrics over just about any codebase in just about any company, and the odds are that you'll find at least one of the following: excessive complexity, excessive duplication, poor encapsulation or excessively long methods.
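As a sketch of what "looking for it" could mean in practice, here is a minimal checker, using Python's standard ast module, that flags two of those smells: excessively long functions and excessive branching (a rough proxy for cyclomatic complexity). The thresholds are illustrative, not industry standards, and a real audit would use an established metrics tool.

```python
import ast

# Illustrative thresholds, not industry standards.
MAX_LINES = 30
MAX_BRANCHES = 10

def debt_smells(source):
    """Flag functions that are too long or have too many branch
    points (a rough proxy for cyclomatic complexity).

    Returns a list of (function_name, description) pairs."""
    smells = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Source lines the function spans (Python 3.8+ records end_lineno).
            length = node.end_lineno - node.lineno + 1
            # Count branching constructs inside the function body.
            branches = sum(
                isinstance(n, (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp))
                for n in ast.walk(node)
            )
            if length > MAX_LINES:
                smells.append((node.name, "%d lines" % length))
            if branches > MAX_BRANCHES:
                smells.append((node.name, "%d branch points" % branches))
    return smells
```

Run over a codebase file by file, even a crude checker like this makes the invisible visible: it turns "quality is assumed" into a list of specific functions someone can be asked about.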

An increase in situational complexity, plus an increase in technical debt, combined with a decrease in total capability gives us exponential growth of people expending effort but showing little in the way of business results.

We should obsess about results, not scale. Scale is useless if we can’t get stuff done.

We’ve had a look at the problems with industrialization. Next, we'll look at how we can reorganize IT for results.

Friday, August 21, 2009

Restructuring IT: "Too Big to Fail" Doesn't Equal Success

By going in pursuit of scale, we’ve created an IT function that is “too big to fail” in many businesses. Companies depend on IT – many can’t operate without it – yet they don’t really understand it.

The result is moral hazard. Moral hazard is the proverbial “heads I win, tails you lose” scenario: a person takes boneheaded risks knowing that if they turn sour, somebody will come to their financial rescue. There is tremendous concern that we're witnessing this in financial markets today: banks bought high-risk collateralized debt obligations with borrowed capital to juice returns, only to need the US taxpayer to intervene to rescue them when the value of those assets collapsed and the market for them dried up. The moral hazard of the situation is the disincentive for the banks to be prudent risk-takers in the future: if a person knows somebody will bail them out, there's no reason not to take boneheaded risks again.

This describes IT. We stand up massive projects that fail more often than not. When they fail – 6 out of 10 times, mind you – the business sponsoring the initiative bails it out. And there’s no downside risk: the people who were bailed out continue to hold down their jobs because “they’re the only ones who know the system.” They might even get promoted. This creates moral hazard in IT, as it builds in the expectation that success of the project is optional.

This is the result of industrial thinking. IT defines highly specialized roles that assume people perform repetitive tasks. This allows IT to scale by hiring armies of people, each into a very narrow position, making people expert at one very specific piece of the solution chain. Unfortunately, it also makes the success of a project an abstract goal for each person, and success of each person a relative goal. Restated, in the industrial model each person is less responsible for success of the project, and solely responsible to show that they did exactly as they were told.

Think about an IT project as a long-jump team, staffed not with one person who can jump nine feet, but with nine people who can each jump one foot. Clearly, the results aren't going to be the same.

This is why we’ve hit the limit with industrialization. Industrial IT assumes people are automatons performing specialist tasks in a repetitive fashion. That assumption is diametrically opposed to our business partners' perceptions of IT as a source of innovation. An industrial approach that values specialization and repetitiveness implicitly stifles innovation, invention, initiative and leadership. It bleeds out creativity. I had a colleague tell me the other day that two people on a team he was working with would sit and stare at the wall until told exactly what to do. Once done, they'd go back to staring at the wall. That’s industrial IT in action, and it's devoid of innovation.

We've seen this same industrial phenomenon in other professions. For example, when professional sports leagues expand, the quality of play goes down. Consider baseball. When Major League Baseball expanded a little over a decade ago, players posted outrageous offensive statistics. The reason? There were a lot of pitchers in the major leagues who weren't really "major league." They were just people hired to fill the roster, to hold a spot in the rotation. On the days those people pitched, team managers mostly hoped they wouldn't hurt the team, rather than expecting they would actually help.

We have this in IT. They’re called “net negative” people: people who create more work than they contribute. Analysts who write requirements that need to be substantially re-analyzed before they can be turned into code. Developers who write code of such poor quality that it needs to be rewritten before it can be released to QA. Testers who execute test scripts without being able to pass or fail them, because they don't really understand what it is they're testing in the first place. All of them create more work for others to do than the problems they solve. Overall, they're "net negative" to the project.

Businesses will be restructuring for quite some time. That amplifies the risk of a late-stage project failure to IT, because project bailouts are less likely in this business climate. To come to terms with this new environment, IT needs to be less concerned with industrial scale, and more concerned with bottom-line results.

Before we get into the details of how we need to restructure IT, we'll first take a look at the similarities between the state of mature industrials and the characteristics IT is showing today. We'll also look at the factors that are accelerating IT toward the same fate as the US automakers.

Wednesday, July 22, 2009

Restructuring IT: A Different Look at the Business-IT Relationship

The relationship between IT and its business partners is notoriously bad. Year after year, surveys by different research organizations report that improving that relationship is a top-10 priority for CIOs. But despite being a high priority for many years running, it hasn’t improved all that much.

Before we can make any headway improving that relationship, we must first understand how IT's pursuit of scale is responsible for a lot of the dysfunction.

Let’s take a look at what the business and what IT each want from the relationship.

First, what does the business want from IT? To put that question in perspective, we need to externalize a bit. Let’s think about what we want from the relationships we have with our key suppliers, such as the place where we go to get our coffee in the morning.

  • We want results from our suppliers. In our example, we want our coffee in whatever configuration we ordered (hot or iced, cow/soy/no milk, etc.) within a reasonable amount of time after we order it.
  • We want to work with professionals: we don’t want to see people scratching their heads wondering how to grind beans or froth milk. We also don’t want to hear somebody tell us that pouring drip coffee into an insulated cup “isn’t their job.”
  • We want a relationship. A couple of months ago, the WSJ ran a story that people are less likely to cut back spending at places where they have a relationship with the firm, even if their household income statement tells them they need to make severe cuts. This certainly rings true in our coffee shop example: we like it when people recognize us, and have our drink ready for us before we order.
  • We want innovation. If somebody offered us a cappo-moca-latte we’d probably give it a try. We want to know people are thinking about our needs and our experience as a customer.
  • We want to trust the people with whom we do business. Certainly, we want to know that we get a full cup of coffee, with ingredients that aren't spoiled.
  • We want value for money. We may very well pay $2.75 for a cup of coffee, but we have to feel that the coffee we're getting is worth that $2.75.

This doesn’t really describe how IT has approached its relationship with its business partners. Historically, it's had a different set of priorities.

  • IT has been in pursuit of “big” as opposed to results. To get “big,” we’ve created specialist roles so we can train and staff armies of people.
  • IT keeps business partners at arm’s-length: it’s been “you do the business, we’ll do the technology.”
  • IT prefers predictability over innovation. In technology, Java is the new Cobol. And IT project management most often amounts to an excessively detailed project plan that tries to define every action people will take far into the future.
  • IT doesn’t look to establish trust, as much as we want people to have faith that we'll overcome whatever problems we may encounter.
  • People in IT are generally more interested in solving interesting technology problems than they are providing value for money.

What we end up with is a pretty big relationship gap. IT is opaque. IT solutions are laden with technical debt. There aren’t enough cross-trained heroes to bridge the gaps that exist among all those specialists. Each partner in this relationship has an unhealthy dependency on the other: business can't function without IT, and IT reacts more than it leads critical business decisions. The bottom line? IT still has a success rate no better than 4 in 10.

So it really shouldn't be a surprise when we hear a CEO, CFO, COO or even a CIO ask, “why is it so difficult to get anything done in IT?”

Monday, June 29, 2009

The Case for Restructuring IT

Business is tough right now, and it’s going to be so for a while. In tough times, you want to be very good at what you do. The more “fighting fit” you are, the more likely you are to survive a challenge.

Unfortunately, IT isn’t all that good at what it does. In fact, on the whole, it’s pretty bad. That means that IT isn’t very well prepared for this downturn.

How bad is it? The research organizations have historically reported a pretty high failure rate of IT projects: about 30% of all IT projects fail outright, while another 30% disappoint their business sponsor (e.g., excessive cost, wrong functionality).1 On the whole, an IT investment has, at best, a 4 in 10 chance of success.

Companies are already reluctant to invest in this climate. IT doesn't offer scared capital a safe haven.

It also suggests that IT is on a trajectory of self-destruction. If we want to look ahead to where IT is headed, we need look no further than present day Detroit.

Photo credit: Ben Wojdyla, The Ruins of Detroit Industry


How did this happen? Consider some of the forces that have shaped the current IT landscape in the past 20 years. The steady growth of IT that was accelerating slightly with the advent of client/server gave rise to explosive growth driven by the combination of internet and Y2K. By the mid-1990's, demand for IT was dramatically outstripping supply. To satiate this demand, IT went in pursuit of scale. To get scale, IT took professional jobs and codified them into industrial tasks, because it’s easier to staff vast numbers of people in highly specialized roles than it is to develop professional capability to solve business problems using technology.

Today, businesses buy, recruit, staff, govern, gatekeep, develop, analyze and test following a model that puts a priority on “big.”

Unfortunately, all the time we’ve been in pursuit of scale, we’ve not been in pursuit of results. Results are assumed. We assume armies of specialists will follow an explicitly defined project plan to produce a solution that is technically sound, functionally complete, and financially satisfactory, all with minimal risk of impairment.

With a 4 in 10 batting average, results cannot be assumed.

By placing a priority on scale, IT mistakes effort for results. We often see success expressed as a function of hours to be invested. It isn’t that simple. Successfully delivering an IT solution is a function of many factors: clear communication, effective collaboration and capability; well-informed decision making about technology, functionality and commercial viability throughout the life of a project; flexibility and responsiveness; and ultimately, producing meaningful things for our business partners. These can’t be captured in task orders and forecasts of work effort. They’re lifestyle decisions of how IT goes about its business.

It is time to restructure IT: to move away from an effort-centric industrial model, toward a results-centric professional one.

1 As an example, the 2009 CHAOS report from The Standish Group shows that things haven't changed all that much, reporting that 44% of IT projects were challenged (late, over budget, and/or delivering less than the required features and functions) while another 24% failed outright.

Wednesday, May 13, 2009

Are You Ready to Restructure?

Global business faces unprecedented changes. Earlier this year, I wrote in an article for alphaITjournal.com:

Revenue forecasts aren’t materializing, capital structures are proving unsustainable, and operations are being scrutinized for inefficiencies. This, in turn, means that businesses are being completely restructured in how they are capitalized, organized, managed and governed. As businesses restructure, so will IT.

The need to restructure will be with us for a while. The financial economy isn't functioning normally, as capital markets remain on life support. The real economy isn't functioning normally either. We're still in the midst of business bailouts. Bailouts will give rise to bankruptcies, which will give way to restructuring. "Restructuring," says Gilles Moec, economist with Bank of America, "is very much the story of 2009." And business leaders seem to expect to face restructuring for some time: in a recent survey, IBM found that 91% of CEOs believe they need to restructure their businesses.

IT won't escape this phenomenon, and it must also restructure. Procurement, management and execution of IT have traditionally been focused on effort. IT has the opportunity to restructure for results: to be a transparent, efficient, responsive and collaborative participant in the business. There is tremendous upside to doing this, as it will make IT less a captive supplier of technical services and more an engaged business partner.

ThoughtWorks recently released a webinar I recorded on restructuring IT. I've also posted articles specific to restructuring IT on alphaITjournal.com. One covers the challenge of change management during a restructure, the other the governance of a restructure. In the coming months, I'll be posting a blog series on IT restructuring that covers the deficiencies in the "industrial IT" mindset, defines a better future state for IT, and describes ways to go about remaking an IT organization to achieve it.

IT faces unprecedented challenge today. We have our work cut out for us, but we already have many of the solutions at hand that will allow us to meet those challenges. The opportunity is there for IT to get in front of this, and be at the forefront of corporate leadership. In turbulent times, better to be in a leadership role than relegated to a supporting position.

Tuesday, April 14, 2009

The Agile PMO: Becoming a Real Time PMO

In the prior installments of this series, we presented a pattern for how PMOs can better bridge the gap between executive and executor:

By doing these things, we align team execution with the needs of the PMO. This makes us more effective at governing IT:

  • By reconciling actual results with our business cases, we know the in-flight NPV we’re tracking toward. This gives us a continuous assessment of whether we’re getting value for money of our IT investments.
  • By exposing the quality of the application itself, we know whether we’re producing IT assets in accordance with expectation.

From the PMO perspective, the instrumentation we have of every project team looks something like this:




A few years ago, the blue boxes in this diagram were disjointed, independent activities. Today, we can automate and integrate collection to the point where the PMO can know what’s going on in any given team. If we're using a product like Mingle, our performance reports are updated every time somebody advances the status of a story card. If we're using a product such as Cruise, our quality data is updated every time a build is triggered.


This gives us real-time information on what matters the most to us in the PMO. We have visibility into our project situations: scope changes, performance changes, lags, as well as the quality of the asset being created (and by extension, a limited degree of knowing how they’re going about doing it). We have this in near real time. It’s not burdensome: it’s an extension of what people are doing, and it lives within the tools. It’s not conjecture: it’s a reflection of functionality that works and driven right off the asset.

Becoming a Real Time PMO

So, that sounds like a great future state to target, but it won't happen of its own volition. What can we do today that will help us become a “real time PMO?”

If you have no Agile projects in flight now, stand up a couple. There are different adoption patterns, but one very common approach is to start with requirements shaping and "level 1" Agile practice adoption. As you stand up Agile projects, define each of your gatekeepers to be a functional, tested, useful and deployable asset. Avoid having any gatekeeper (aside from the project chartering phase) that consists of deliverables that merely describe an asset.

If you have Agile projects, but adoption is inconsistent or you're not seeing the impact from Agile practices that you anticipated, scrutinize the state of Agile adoption in your project teams. Before building out a PMO, it's important for team fundamentals to be in place. Not all change is created equal, and Agile adoption takes time. Perform an Agile Maturity assessment to identify practice gaps and call out actionable steps teams can take to bridge those gaps.

If you do have well-running Agile projects, home in on metrics collection, automation and consolidation. Take a look at Mingle and Cruise from ThoughtWorks Studios. The build pipeline management features in Cruise are pretty impressive. Be aware that there are ramifications to increasing team visibility. Don't assume that this is simply an exercise in implementing tools, and be prepared to iterate to achieve greater degrees of transparency.

If you have well-running Agile projects today with efficient data collection, take your project tracking to the next level. Don’t just report burn-ups, report projected NPV. As things happen in a team - if scope is changing, dates are changing, quality is slipping and needs to be corrected, etc. - report out the business impact of the change in business terms to your business partners.
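To make "report projected NPV, not just burn-ups" concrete, here is a minimal sketch of the arithmetic. Everything in it is hypothetical – the function name, the assumption that remaining cost scales linearly with remaining Stories, and the cash-flow figures – but it shows how burn-up data can be translated into a business-terms number for a sponsor.

```python
# Sketch: turn burn-up status into a projected NPV for the sponsor.
# All figures, names and the linear cost model are hypothetical.

def projected_npv(stories_done, stories_total, cost_to_date,
                  annual_benefit, discount_rate, years=3):
    """Estimate NPV assuming remaining cost scales linearly with
    remaining Stories and benefits flow for `years` after delivery."""
    if stories_done == 0:
        raise ValueError("need at least one completed Story to project cost")
    cost_per_story = cost_to_date / stories_done
    projected_cost = cost_per_story * stories_total
    # Discount each year's benefit back to present value.
    benefits = sum(annual_benefit / (1 + discount_rate) ** t
                   for t in range(1, years + 1))
    return benefits - projected_cost

# 40 of 100 Stories done at a cost of $200k, $300k/yr expected benefit.
npv = projected_npv(stories_done=40, stories_total=100,
                    cost_to_date=200_000, annual_benefit=300_000,
                    discount_rate=0.10)
```

If scope grows or velocity slips, rerunning this with the new numbers shows the sponsor the business impact of the change immediately, which is exactly the conversation the text is arguing for.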

We do have to be careful not to put too much stock in numbers. We still have to look at code, look at the quality of requirements, run the actual software, and above all else, talk to IT people and businesspeople in the teams. We have to make sure that tasks aren’t masquerading as stories, that unit test coverage isn’t simply being gamed with meaningless tests, that progress isn't being overstated. The numbers are indicators, but they're no substitute for really understanding what’s happening in a project team. Nothing alleviates the need for fundamental project management.

That said, clear line-of-sight into trouble spots and risk areas early in a project allows us to take better decisions about projects, meaning we can avert the spectacular, late-stage blindside. With responsibility for capital investments in a difficult economic environment, we need to win and retain the trust and confidence of our business partners. We need to be on top of our game. We can do that by being an Agile PMO.

Monday, March 16, 2009

The Agile PMO: Automating Metrics Capture

The last piece of the Agile PMO puzzle is to make the data needs of the PMO non-burdensome to delivery teams. It’s all well and good to be able to get quality and performance data, but it has to be easily accessible. If it isn't, we're just taxing the teams that much more, which means we won't get this data in a timely or efficient way, if at all.

Automating Metrics Capture

Because they're derived directly from the asset under development, our metrics give us an objective way to index and monitor quality. What’s even better is that these metrics can be automated and run frequently. If we have continuous integration established in our teams (e.g., where the project binary is built as often as every time code is committed) we have the ability to subject the binary just built to a battery of automated quality metrics and tests. Some tests may take a long time to run, while others may run in just a few seconds. This is fine. We can construct a build pipeline to run our metrics and tests in the most efficient manner.
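The build pipeline idea can be sketched in a few lines. This is not how Cruise itself is configured – the stage names, costs, and runner function here are all made up – but it illustrates the principle: run the cheapest checks first and stop at the first failure, so a broken build never ties up the expensive later stages.

```python
# Sketch of a staged build pipeline: cheapest checks first.
# Stage names and rough costs (in minutes) are hypothetical.

PIPELINE = [
    ("compile", 1),
    ("unit tests", 4),
    ("code metrics", 6),
    ("functional tests", 25),
]

def run_pipeline(stages, run_stage):
    """Run stages in order; stop at the first failure so the team
    gets fast feedback and expensive stages aren't wasted."""
    results = {}
    for name, _cost in stages:
        ok = run_stage(name)
        results[name] = ok
        if not ok:
            break
    return results

# Example: the functional tests fail, but the cheap signals still ran.
results = run_pipeline(PIPELINE, lambda name: name != "functional tests")
```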

We can collect up-to-the-minute quality data for any project, such as:

  • What extent of unit test coverage do we have, and are all tests passing?
  • What extent of functional test coverage do we have, and are all tests passing?
  • Are we creating code that appears to be high maintenance due to complexity scores, duplication, or other poor coding practices?

And so forth.

This gives us a collection of technical and functional risk indicators that are both comprehensive and current.


By efficiently automating the capture of different quality metrics, we don't need to ask people to generate this data for us: we can lift it right off the binary. This makes data collection non-invasive to the teams, and less prone to collection error.

Tools such as Cruise and Mingle (both from ThoughtWorks Studios) have dashboards that allow people to see first hand current quality and project status across a number of different projects. This allows people in the PMO to look into what's actually happening without burdening the teams, and make far more specific and accurate status reports to project stakeholders.

All told, we're spending less effort and getting greater accuracy of what's happening within each project in the portfolio.

We now have all of the essential components of the agile PMO. Early on in this series, we talked about aligning executive and executor in how we organize, gatekeep and articulate the work to be done. Once we've done that, the data we glean on performance and quality has integrity by virtue of being solidly founded on results achieved, not hope that everything will work out in a future we've mortgaged to late stages of a project. By automating collection of this data, we can see how a project is evolving day-in and day-out with far less effort - and conjecture - than we've traditionally had in IT.

In the next and final installment of this series, we'll recap how all of these concepts and practices fit together, consider some caveats, and present some actionable items you can get started with today.

Monday, February 23, 2009

The Agile PMO: Measuring Quality

In the last installment we took a look at the project management information we get from results-based organization and execution, and how that provides an unambiguous status assessment. But project status data doesn't give us the complete picture: we also need to know that a team is delivering solutions in accordance with all of our expectations, such as quality and maintainability. For the PMO, that means looking at technical and functional quality along side project status.



Quality Measures

We have no shortage of quality metrics. For example, in the Java world we can measure all kinds of attributes of code: overcomplicated expressions, wasteful use of resources, ghost code, duplicate code, complexity, and so forth. That’s both good and bad. Good in the sense that code quality doesn’t need to be taken for granted, because we can measure attributes of the asset we’re creating. Bad in the sense that with all this data, it’s hard to separate signal from noise. Quality measurements are great to have, but a meaningful assessment of quality is a different matter entirely.

There are many authorities on code quality, so we’ll not dive into them here. But there are a few metrics worth pointing out that are strong indicators that code will be difficult and expensive to maintain: the extent of code duplication, the presence of highly complex code (i.e., cyclomatic complexity), the presence of many large methods (in terms of number of lines), and code that is poorly encapsulated. If any of these are present to a large degree, we have indication that we’re taking on excessive amounts of technical debt. This is material to maintaining viability of a business case: the higher the technical debt, the more expensive the cost of the solution, the lower the yield on the IT investment.
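As a sketch of how those indicators might be turned into a simple debt check, consider the following. The threshold values are illustrative only – reasonable limits vary by language and team – but the shape is the point: a handful of measurements, each compared against a limit, yielding a list of red flags.

```python
# Sketch: flag the maintainability risks named above.
# The thresholds are illustrative, not industry standards.

THRESHOLDS = {
    "duplication_pct": 10,        # % of duplicated lines
    "cyclomatic_complexity": 15,  # worst method in the codebase
    "method_length": 60,          # lines in the longest method
    "public_fields": 0,           # fields exposed without accessors
}

def debt_flags(measurements):
    """Return the names of the metrics that breach their threshold,
    i.e. the indicators of excessive technical debt."""
    return [name for name, limit in THRESHOLDS.items()
            if measurements.get(name, 0) > limit]

flags = debt_flags({"duplication_pct": 22,
                    "cyclomatic_complexity": 9,
                    "method_length": 140,
                    "public_fields": 0})
```

A non-empty result is the signal the text describes: the higher the debt, the more expensive the solution, and the lower the yield on the investment.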

Quality Tests

In addition to measuring quality, we must also test for quality. Quality comes in many different forms. There is functional quality (does the application perform the tasks that it needs to perform?) and non-functional quality (is it fast? does it scale? is it secure?) Our tests can include unit tests, integration tests, and functional tests. A unit test exercises a specific piece of code, while integration tests validate round trips to other systems, and functional tests validate complete scenarios. We can perform these tests manually, or, better still, we can code them. If we code our tests, we can execute them automatically. We can build a library of automated tests and subject an asset to them at any time, meaning we can get up-to-the-minute information on the functional and non-functional quality of a solution at any time.
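To illustrate the distinction between test levels, here is a toy example. The pricing rule and function names are invented for illustration: the unit test exercises one piece of code in isolation, while the functional test validates a complete scenario built on top of it.

```python
# Sketch: the same code exercised at two levels.
# The discount rule and names are hypothetical.

def discounted_price(price, loyalty_years):
    """10% off for customers of three years or more."""
    return round(price * 0.9, 2) if loyalty_years >= 3 else price

# Unit test: one piece of code, one rule.
assert discounted_price(100.0, 5) == 90.0
assert discounted_price(100.0, 1) == 100.0

# Functional test: a complete checkout scenario, end to end.
def checkout(cart, loyalty_years):
    return sum(discounted_price(p, loyalty_years) for p in cart)

assert checkout([100.0, 50.0], 5) == 135.0
```

Because both levels are coded rather than manual, the whole library can be run against the asset at any time, which is what makes up-to-the-minute quality information possible.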

We must be cautious. Our quality testing is only as robust as our underlying tests, so we also have to validate that our scenarios are thoughtful and meaningfully complete. Some tests are destructive, and require disciplined environment management. But testing has come a long way. Functional testing tools have improved in recent years, making functional tests less fragile and easier to maintain when software changes.

The greater and more comprehensive the automated test coverage, the less assumption there is about technical and functional quality. This provides reassurance to the PMO that both functional and technical quality is being maintained over the life of the project, and that incremental functionality previously gained is sustained. This will have a material impact on achieving our business case.

Assessing Quality Information

Metrics and tests give us a collection of data points about our code, each in a different unit of measure. To turn that data into information, we need to structure it in a meaningful way. We have several options for doing this.

One alternative is a scorecard, which I wrote about in the Agile Journal some time ago. To create a scorecard, we must first normalize metrics into consistent scores using simple rules or heuristics that give us degrees of "great" to "poor." We can compare the code in question against a representative but consistent baseline of quality, so we have an absolute point of reference. We can then collect our metrics into broad categories of “indicators of good” (hygienic measures) and “indicators of bad” (toxic measures). By doing this, we can ascertain how good or bad our current state is, and whether or not it becomes better or worse over time.
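A minimal sketch of the normalization step might look like this. The banding thresholds are invented for illustration; what matters is the shape: raw metrics in different units become comparable 1-to-5 scores, which can then be grouped into hygienic and toxic categories.

```python
# Sketch of scorecard normalization: raw metrics -> 1 (poor) to 5 (great).
# The bands below are illustrative, not from the article.

def to_score(value, bands):
    """bands: list of (upper_bound, score) tuples, sorted by bound."""
    for bound, points in bands:
        if value <= bound:
            return points
    return bands[-1][1]  # values beyond the last band keep its score

# Higher is better for coverage; lower is better for duplication.
COVERAGE_BANDS = [(40, 1), (60, 2), (75, 3), (90, 4), (100, 5)]
DUPLICATION_BANDS = [(3, 5), (7, 4), (12, 3), (20, 2), (100, 1)]

hygienic = to_score(82, COVERAGE_BANDS)    # indicator of good
toxic = to_score(15, DUPLICATION_BANDS)    # indicator of bad
```

Tracking these scores over time against a consistent baseline is what tells us whether the current state is getting better or worse.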

Another alternative is a technical balance sheet, which Brad Cross wrote about in the alphaITjournal. In this approach, the business value and technical debt of every package is scrutinized to determine whether we’re right-side-up or under water on the solutions we've created. By drawing the line between “value” and “debt” it also tells us where our real risks lie, and what our priority and urgency should be.
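The balance sheet idea reduces to simple arithmetic per package. The figures and package names below are hypothetical, but they show the test being applied: value carried versus estimated cost to retire the debt, with "under water" packages falling out as the priorities.

```python
# Sketch of a technical balance sheet: per-package value vs. debt.
# Package names and dollar figures are hypothetical.

PACKAGES = {
    "billing":   {"value": 120_000, "debt": 30_000},
    "reporting": {"value": 20_000,  "debt": 45_000},
}

def equity(pkg):
    """Right-side-up if positive, under water if negative."""
    return pkg["value"] - pkg["debt"]

under_water = [name for name, pkg in PACKAGES.items() if equity(pkg) < 0]
```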

Performance and Quality: The Complete Picture

The structured quality data described here, paired with the project performance data from the last installment, gives the PMO what it needs to assess the current and ongoing performance of a project. The PMO can answer the two governance questions for each project: Am I getting value for money? And am I receiving solutions in accordance with expectations? (n.b. This is a minimal answer, because customer satisfaction and staff capability must also be considered; but that's for another blog post.) This is a lot of data, though, and if we don't have an efficient means by which to collect it, we'll be crushed under the weight of reporting and collection. In the next installment, we’ll take a look at what we can do to automate data capture so that collection is non-invasive to our organization and a by-product of day-to-day operations, enabling us to be an Agile PMO.

Friday, January 23, 2009

Come the Hour, Come the Leaders

Earlier this week, I published an article called Come the Hour, Come the Leaders in alphaITjournal.com. In it, I point out the acute need for business leadership in today's economic environment and identify six things we can do today to act on a leadership agenda. Today's Financial Times brought some reinforcement to those messages in the first of a four-part series called Managing in a Downturn.

Stefan Stern's Time for Managers to Stand and Deliver contains some good and practical guidance for management leaders. He also cites a Booz & Co. survey showing that as many as 40% of senior managers doubt their leadership has a credible plan to deal with the current crisis, while 46% doubt the leadership team is capable of carrying out its plans, credible or not. This data strongly reinforces my point that "how we act and prepare, and how we explain our actions and preparations, inspires confidence more than all the rah-rah optimism in the world will ever achieve."

In Seizing the Upside of a Downturn, Donald Sull makes the point that it is imperative to be organized for consistency, instead of lurching in good times and seizing up in bad ones, and he references Lean practices as a model. He makes excellent points consistent with one of the actions I recommend: "Cutting costs to withstand a downturn in the hope that recovery is around the corner (and with it a return to business as usual) is not leadership. Being sustainably responsive to whatever the economy, the market, governments and the competition deals us is leadership. We can act very boldly to eliminate situational complexity, unaligned gatekeepers, and any other obstacles that make it difficult to get things done. We can also look very closely at Lean principles to not only eliminate waste but to make sure effort is directed toward results."

It is quite satisfying to see reinforcing articles appear within a matter of days of each other. It suggests it is a timely and important topic. I highly recommend that you read all three.

Wednesday, January 14, 2009

The Agile PMO: Results-Based Execution

In the last installment we took a look at why it’s important that project gatekeepers be consistent in terms of effort and results. To understand how we can execute on this, we need to take a look at how teams are organized and execute.



How we organize determines how effectively we can define results-oriented "gates." Traditionally, IT teams have organized in silos, working on abstract slices of the business goal (as opposed to the end-to-end). When we organize in silos, we assume that what teams create independently will work together with little effort. When it doesn’t – and that’s the case more often than not – our economies of scale are completely obliterated. Consider the Airbus A380: teams working in complete independence of each other discovered after their sections were "complete" that they couldn’t connect up the electronics.1 Were the independent sections of the plane "complete" according to the project plan? Yes. Was the plane nearer to completion by virtue of those sections being done? No. It’s entirely possible that Airbus could have built those sections entirely from scratch (to consistent wiring specs) in the time it took to integrate the finished units they had. The lesson learned is that no matter how well we think we define our APIs, no matter how disciplined we think we can work in a silo, integration is never free.

In addition to how we organize, we also need to have some means by which to execute and therefore measure our progress that is directly rooted in the context of the asset. Functionality that is coded and tested is the best measure of "results" that we have.

To act on this, we need requirements expressed as small, actionable statements of business functionality. Agile Stories lend themselves very well to this. Stories are small, granular statements of business need. For the PMO, Stories provide us with the best measure of progress: a Story is either done, or it’s not done. It’s not partially done or kind-of done. If Stories are written in terms meaningful to the business, and if we track progress to Stories, we can assess progress in terms of results. Compare this to how we measure progress in traditional IT: there are long delays between functional deliveries, so the PMO has to rely on surrogates such as timesheet data and collections of technical tasks. Hours spent and completion of technical tasks are measures of effort. Effort doesn't necessarily translate into results. Stories are a measure of results.

The fundamental true/false nature of a Story's completeness lets us measure progress in some very simple but information rich ways. The most common way is the burn-up chart.



These example charts show that scope (in terms of Stories) may expand or contract as new discoveries are made and work is reprioritized. The projected rate of progress through the Stories - that is, how quickly the team is expected to render business needs code complete - is rooted in a simple formula that includes capacity, distraction management, and estimate weight (complexity, etc.) relative to the team.
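The projection formula described above can be sketched simply. The "focus factor" name and the numbers are hypothetical stand-ins for the capacity and distraction-management inputs the text mentions, but the structure is the same: remaining Stories divided by an effective velocity.

```python
# Sketch: project iterations to complete from a burn-up.
# The focus factor and figures are hypothetical.
import math

def projected_iterations(total_scope, done, velocity, focus_factor=0.8):
    """Iterations remaining, discounting raw velocity by a
    distraction-management focus factor."""
    remaining = total_scope - done
    effective = velocity * focus_factor
    return math.ceil(remaining / effective)

# 48 of 120 Stories done, nominal velocity of 10 Stories/iteration.
iters = projected_iterations(total_scope=120, done=48, velocity=10)
```

Because a Story is either done or not done, this projection rests on results rather than effort, which is what gives the burn-up its integrity.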

By tracking progress through Stories, we're measuring accomplishment of meaningful results. With each iteration, the PMO knows the specific functionality that an asset possesses and how much it cost to get it to that point. We also know what functionality the asset does not have and we have a pretty good means by which to forecast how much it's going to cost to complete that functionality. Best of all, in near real-time we can see if a project is showing signs of trouble, we can see almost immediately the impact of changes that we make, and we can communicate all of this in business (as opposed to IT) terms. Traditional IT cannot provide this level of transparency.

The burn-up is meaningful only if we know that it has integrity. Nothing prevents us from defining and tracking technical tasks. If we do that, we're really no better off than we are in current IT practices: it’s just a burn-up of effort, but not of results achieved. We have to make sure teams are relentlessly focused on satisfying business need, from the time a requirement is captured to the time software is certified as production ready. By writing well formed Stories, we're writing the "completion gene" into the DNA of a project team. This aligns the interests of the business (the buyer), the PMO (the buyer's agent and de-facto underwriter), and the development team.

This gives us direct line-of-sight from the unit of measurable work all the way through to the project reporting level, so that when we report upward we know that there is integrity in what we’re reporting. This is deceptively simple, but is the critical component to the Agile PMO: we know what is done, and what isn’t done, not in IT terms but in the terms of what it is we're buying. By extension, we know our investment yield to date and what returns further investment will yield.

The proliferation of Agile Project Management tools, such as Mingle from ThoughtWorks Studios, make it far easier for the PMO to get project performance data from in-flight projects. We don't need to transcribe data from what a PM has told us into what the project sponsors need to know. We're not e-mailing spreadsheets around. We're drilling into a project dashboard to see what’s going on, and we know what’s going on in very granular terms that relate back to the state of the asset.

The combination of team organization and Story-based requirements allow us to work and measure in terms of results. But the results we're looking for go beyond “completed Stories.” We must also have some appreciation for the quality of the solution delivered. Progress – even in “functionality” terms – may be completely hollow if the expected level of quality (e.g., maintainability, technical quality, etc.) isn’t baked in at the same time. This means we are at risk of over-stating our results. In our next installment we’ll have a look at how we can align quality metrics to get a complete picture of solutions delivered, and we’ll look at how to automate capture to make this simple. Once we’ve done that, we’ve defined the fundamental tools necessary for an Agile PMO.


1 Hollinger, Peggy. "In a tangle: How having to wire the A380 by hand is hampering Airbus." Financial Times, 16 July 2008.