I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Monday, March 31, 2014

Knowledge Versus Wage Work in Software Development

"Increasing numbers of people who had formerly been self-employed in workshops and cottage industry, often on a subcontracting basis, assumed new roles as part of an emerging wage-earning class. Labor increasingly became viewed as a commodity to be bought and sold. And since these changes eliminated earlier systems of production, for the new wage earners the process was irreversible, making them dependent on the wage system."

-- Gareth Morgan, Images of Organization

The separation of design from making bifurcates the labor force into people who design things (products, supply chains, marketing campaigns) and people who build them (assemble the product, deliver the merchandise, place the advertisements). This is the separation of primary and secondary labor forces.

A firm's primary labor force consists of highly-skilled people with detailed, company-specific knowledge. They have financial and career ties that bind them to their employers to reduce the attractiveness of leaving. A firm's secondary labor force is generally lower skilled and lower paid. They serve as a buffer that allows a company to expand and contract with prevailing economic conditions without jeopardizing core operations. This gives a company greater control over its business, because the labor associated with "making" is a variable as opposed to a fixed cost of the business.

This separation defines both role expectations and career paths. A person in the primary labor force is a knowledge worker: they are expected to be highly skilled, abstract thinkers who are concerned with systemic issues. A person in the secondary labor force is a line employee: they are expected to be lower skilled, concrete thinkers who are primarily concerned with execution. Of course, in software development, we often see highly skilled, abstract thinkers in the secondary labor market and lower skilled, concrete thinkers in the primary. Even so, generally speaking a business is less effective if its primary labor force consists of concrete thinkers, and less efficient if its secondary labor force consists of abstract thinkers.

The economics of this arrangement favor the few who design over the many who build. Members of the primary labor force will command higher incomes and their occupations are more likely to be wealth generative (that is, offered equity in their employer). Their positions are less vulnerable to economic downturns and consolidative mergers. Members of the secondary labor force can command outsized incomes - particularly those with scarce skills that are in high demand - but will not generate wealth through their occupation. During periods of economic expansion, they will enjoy stable employment, rising incomes and access to credit facilities. However, they are vulnerable to economic downturns (as mentioned, they buffer the shock of a reduction in demand), productivity investments (automation tends to eliminate jobs in the secondary labor market), consolidation (a significant portion of the "synergistic benefit" of mergers is achieved by reducing secondary labor forces), and labor arbitrage (states and countries create tax incentives for firms to build facilities to house large volumes of lower skilled workers). The primary labor market is less vulnerable to these forces.

An employer's relationship with its labor can involve multiple parties and take many forms. For example, an insurance company contracts with a technology consulting firm with deep insurance domain expertise to develop, maintain, and operate major software applications that run its core business. Although the insurance company is renting a large part of its tech work force, the extent of the dependencies the insurance company has on its provider means that many members of the consulting firm's staff are, in effect, members of the insurance company's primary labor force. The consulting firm's intermediation doesn't change this fact; it simply changes the economic relationships. Alternatively, one firm's primary labor force may be another's secondary. For example, a retailer contracts with a consulting firm to develop custom software, but the retail firm assumes responsibility for the maintenance and evolution of the asset once delivered. The consulting firm may have architects, project managers and other lead staff it considers part of its primary labor force (the people building its business) that it supplements with a large secondary labor force (people who work on projects). However, the transient nature of the contract means that all members of the consultant organization are part of the retailer's secondary labor market. And, of course, a highly skilled, poly-skilled developer may be able to sustain a career as an independent contractor through good times and bad, while a low-skilled, narrowly specialized tester may only find work during boom times. Both are, by definition, in the secondary labor market, even if for different reasons.

There have been many attempts to create large, lesser-skilled, secondary labor forces that supplement a core primary one in software development. CASE tools rose to prominence in the 1980s. They promised, among other things, that by structuring, integrating and concentrating system design and analysis into a single repository, code could be produced within strict boundaries set by designers. In the 2000s, industrial IT practices took root: analysts and architects produced detailed specification documents that were to be coded by remote development teams, while test leads designed scripts to be executed to a pass or fail outcome by armies of test executors. And, of course, there is the pursuit of maximizing labor utilization: managers will schedule technology specialists in recurring slivers of time across multiple projects, while procurement departments contract for software developers as interchangeable "resources".

There is an argument to be made that wage work has historically lifted large swaths of humanity out of poverty. Not without costs, such as pollution and abhorrent working conditions. Still, is there a case to be made that justifies the industrialization of software development because it improves quality of life? After all, even if it is wage work, it enables people to work in jobs that are not physically risky (although they can be highly stressful), rewards people for education and ongoing skill development, and tends not to cause environmental damage. Is this not socio-economic advancement?

It is a Faustian exchange. By its very nature, wage work subjugates large populations of laborers, inherently creating a class division. Because of labor arbitrage and automation, compensation is not highly inflationary and the availability of work is subject to volatility. In an era of financial engineering, wage workers are encouraged to make personal bets (in the form of debt against future income streams) based on the appearance of stability in their employment; this makes wage work exploitative.

In addition, the argument that wage work makes software development more economical, therefore leading to more demand and the benefit of more laborers, isn't compelling. For one thing, as more and more software gets injected into existing things and allows us to make entirely new things, it does not stand to reason that demand will support more industrial workers than craftworkers. For another, given the high degree of project failure, it also does not stand to reason that an industrial approach is a more reliable way to make timely delivery of complex projects.

Software development offers greater benefits to society if it is a profession rather than an industry. A profession requires its members to understand not just the what, but the why and how. This demands more intellectual and creative development from each person. This does not mean more education and training (skill possession), but pursuit of professional self-actualization (continuous knowledge acquisition). It creates social structures that are flatter, more equitable and offer greater mobility because they are based on collaboration, knowledge and capability (peer & mentor relationships) rather than hierarchy (superior / subordinate). It also offers greater individual freedom because it is governed more by principles than by rules. This is far more liberating for the individual than the industrial alternative, and no less economically beneficial.

There are at least two counter-arguments to this. The first is that it is elitist: not everybody is cut out to be a professional. That is, many software "line workers" aren't naturally inquisitive or motivated enough to be professionals; as industrial workers they have a standard of living that they wouldn't otherwise have. However, this argument holds individuals to blame for factors that are out of their control, such as societal class divisions that discourage mobility, industrially-minded education systems that discourage creative thought, and economic conditions that crush motivation. Suggesting that entire strata of people have no hope of becoming craftworkers is a blanket indictment that denies each person their most basic human intellectual characteristics. Ironically, such thinking holds back people's development while professing to enable it.

The second is that this is idealistic: commercial reality is that buyers are too impatient to allow tradecraft to develop, sellers have incentive to create large-scale businesses, and corporate management and procurement are founded on industrial patterns of behaviour. All true. But it falls to those of us in the business of software development to decide our own fate. If we are motivated purely by the lure of lucre - and with demand still outpacing the supply of software development labor there's plenty of money going round - then we inherently choose economics over humanity. However, if we are motivated by intellectual rather than income potential, we can choose to create different types of commercial ecosystems.

This isn't wild-eyed idealism; it's sound business. As economist John Kay points out in Obliquity, "the most profitable companies are not the most profit oriented." Economic rewards accrue to the firms that set out to be great at what they do, as was the case for ICI chemicals ("to be the world's leading chemical company ... through the innovative and responsible application of chemistry and related science") in the 1980s and Boeing in commercial aviation ("eat, breathe and sleep the world of aeronautics") in the early 1990s. Both enjoyed substantial financial success even though that was not their primary motivation. Once those firms shifted their focus to be principally economic in nature - "The ICI Group's vision is to be the industry leader in creating value for customers and shareholders through market leadership, technological edge, and a world competitive cost base" - their fortunes darkened considerably: ICI ceased to exist as a company ten years after changing its mission. Boeing's once unassailable dominance of commercial aviation eroded within a decade of shifting to "... a value based environment where unit cost, return on investment, shareholder return are the measures..."

It's a choice, and it's one that all of us in the business of software make again and again in the nature of the companies we create, the contracts we enter into, and how we interact with each other. The sum of our choices will determine whether we form a new generation of knowledge workers or train the next generation of wage slaves.

Choose wisely.

Friday, February 28, 2014

Making is Part of Design

I had intended this month's blog to be about how industrialization would expand the secondary labor market and, in doing so, be a lost opportunity to create a new generation of knowledge workers rather than the next generation of wage laborers. While writing that, it occurred to me that industrialization appears to separate design from making, when in fact it does not, and that this separation - real or otherwise - is essential to understanding the division of labor into primary and secondary strata.

* * *

[P]roducers sought to overcome the uncertainties of output and quality associated with domestic production; to serve the new markets created by expanding world trade and a growing population (certain privileged sectors of which had a rising standard of living); and most important of all, to take advantage of mechanical systems of production.
-- Gareth Morgan, Images of Organization

Prior to industrialization, demand for manufactured goods overwhelmed the capacity of guilds to produce them. This was due in no small part to the fact that in guilds, the process of engineering (designing things) and manufacture (making things) were intertwined. A watch wasn't just a piece of precision engineering, it was also a piece of precision manufacture that required a highly skilled person to make it. Production volume couldn't increase any faster than the rate at which tradespeople acquired the skills necessary to produce things.

Industrialization changed manufacturing by separating engineering from production. Product engineers concentrated on the design of a product - sketching, prototyping, tweaking, and refining - until they got the right combination of features, materials, and configuration that was useful, provided sufficient (but not excessive) durability and performance, and could be built economically. Once the designers had done this, they could turn their creations over to manufacturing operations to produce them at scale.

The separation of design from manufacture subtly disguises the fact that making is an essential part of the design process. Before we mass produce anything - whether a consumer product or an ad campaign - we build a prototype that has very much the same componentry as the finished product we expect to produce. We subject the prototype to a number of stresses & analyses to make sure it will work reliably and consistently in benign and challenging conditions. We also make sure that the design isn't so complex that manufacturing it will be prohibitively expensive. We use the feedback from these analyses to adjust our prototype, and we repeat the cycle until either we know what we are going to build and how we are going to build it or we scuttle the project entirely. We do this because no matter how smart we are at things like materials science, chemistry and physics, we are not omniscient. We are investigating the what, why and how, and using what we learn to develop a useful product that provides economic value to the customer and profit to the producer. When we are developing complex products, we do not go from engineering drawings directly to mass production. We think, we prototype, and we tinker before we enter a phase of mass production. Design and architecture truly are emergent in most things we make.

People have tried to separate design from making in many different fields. Perhaps the most ambitious was the corporation itself. In the 1960s, firms like Singer, Litton Industries and TRW were presented by management theorists as the triumph of complex corporate strategy derived from analysis and modeling based on "a single comprehensive evaluation of all options in light of defined objectives". In his book Obliquity, economist John Kay contrasts this big up front corporate design to companies that "muddle through": those firms that follow a disciplined process of "...experiment and discovery. Successes and failures and the expansion of knowledge lead to reassessment of our objectives and goals and the actions that result." He points out that the strategies of Singer, Litton and TRW all fell apart rather quickly, whereas firms that muddle through by making "successive limited comparison" tend to be more robust and resilient. Design (in this case, strategy) largely emerges from the experience of lots of incremental changes. Success favors a bias for action over analysis.

There have long been attempts to industrialize software development. The thinking goes that we can separate the design of software from the construction of software. The design consists of business requirements and architecture documents that define the product, a simple proof-of-concept (incorrectly passed off as a prototype), and a detailed project plan that provides the instructions for production. But engineering and making are tightly coupled in software development. As we create code, we learn what users like, what scales, what is secure, what isn't reliable, what is too complicated, and so forth. All of this learning needs to be factored into our design. An industrial approach to software development denies the need for this kind of learning. Given the consistent history of disappointment and failure of large scale software development projects, that denial is very costly indeed.

Industrialization seems an unnatural fit for software, but that's probably what many in the watchmaker's guild thought at the dawn of the industrial age. Standardized tolerances made components interchangeable, production tasks were similarly standardized and made repeatable, and design was successfully separated from production. Yet industrialization remains elusive in software: integration of software components is still a context-heavy and therefore very labor intensive activity, and development remains a creative process of problem solving and not a repetitive act.

Still, as I wrote last month, as long as demand for software developers outstrips their supply, there will continue to be a great deal of pressure to find ways to industrialize software development to achieve scalability. Next month we'll look at the ramifications of industrialization to people and society.

Friday, January 31, 2014

The Persistent Imbalance Between Supply and Demand for Software Development Labor

The growth in demand for software has consistently outpaced the growth in the supply of software developers. This has been the case for well over half a century. It's worth looking at why.

Each major expansion in software development - automation (60s), productivity (80s), internet (90s), mobile (00s) - has been additive to the total stock of software in the world. The old stuff doesn't go away: software is still an enabler of labor productivity (office & communications), and a weapon for market share (online customer interaction). Yet we continue to find new applications for software: it is increasingly a product differentiator (embedded systems) or a product category of its own (social networking). While some segments have retrenched (companies license rather than write their payroll systems), the proliferation of new forms of software has more than compensated for consolidation in others. And the more it proliferates, the greater the demand for integration, security, and other ancillary software.

Each new wave represents a structural increase in the demand for labor: the old stuff has to be maintained or replaced, while new applications not only bring new demand, they bring new tools and languages which require new skills and capabilities.

From a labor market perspective, the software economy has been expanding for decades. As a result, it marches to the beat of its own drum. Software development is generally counter-cyclical to the broader economy: it does well when the economy is down because software locks in productivity gains desired after layoffs & cutbacks. Software also makes its own opportunities, because it is inherently a business of invention and innovation. There are peaks and valleys: a structural change in demand for labor can sow the seeds of a bubble and its inevitable collapse. But bubbles in tech are bubbles of tech's own making: the video game / home computer bubble (1983) and the Y2K / dot-com bubble (2000) each resulted from irrational expectations and dubious business models created by people within the tech sector itself. The Y2K / dot-com bubble bursting coincided with increased accessibility to a global labor supply of software developers (offshoring was all the rage in the early 00s). Although the US experienced an acute contraction in demand for software development labor, the global labor pool grew, and the regional contraction in the US proved to be short lived. Today, although the software labor market remains inefficient (context still doesn't travel), there are no easy cost savings to be gained (no large pools of skilled labor remain untapped). Global supply has been substantially eclipsed by global demand.

We're currently in the midst of another structural increase in the demand for software development labor, this time being driven by analytics and smart devices (the alleged "internet of things", from cars to coffee pots), with the odd halo application (e.g., wearable tech) thrown in for good measure. Every indication is that for the foreseeable future, demand for software developers will continue to increase at a rate faster than the supply of software developers available to develop it.

What does this mean to the business of software?

1. Ambition will outpace capability. Any business plan that comes down to "hire a bunch of smart engineers" - be it developing a new product or rescuing a distressed IT organization - is doomed. There is too much money chasing too few people. A company's labor timeline has to expand: it will take longer to hire experienced engineers, and firms will increasingly need to invest in incubating new developers. Labor scarcity poses a vulnerability to employers: a company known to have capable engineers is wearing a target for recruiters. When jobs outnumber candidates, jobs become commodities to employees. To differentiate itself from other employers, a firm must be highly attractive to the specific strata of the labor market that it wishes to employ. It does this by developing unique culture and values, and professional and societal aspirations, that make it a destination employer for those people. Without these things, it can only compete for labor on comp and career. It's difficult for a firm to maintain competitive advantage for labor solely on the price it is willing to pay for it.

2. Employers will pursue labor industrialization over tradecraft. Software development is labor intensive: the productivity enhancers that do exist, such as automated testing and automated builds, are still poorly implemented, when used at all. Plus, the diversity of programming languages and the complexity of environments encourage labor specialization and task management. Still, people investing in software assets will not take "can't find competent people" for an answer. As the old saying goes, if you can't raise the bridge, lower the water. On a person-by-person basis, it is faster, easier, and cheaper to hire, train, and staff industrial workers to work on a software development "factory floor" where they perform coding tasks in an assembly-line fashion than it is to recruit, develop and mentor polyskilled software developers. New labor formation will be largely industrial in nature, not tradecraft.

3. The risk of spectacular software failure will increase. The horrific explosion of the oil train that devastated Lac-Mégantic in 2013 was in no small part the result of demand exceeding supply. North American oil production has risen dramatically in the past half-decade. All that oil coming out of shale fields will find its way to refineries. Since there aren't pipelines to carry it, it's going by rail. The rail industry was in decline in North America for many years, and a sudden uptick in demand can't quickly be met with skilled labor. The net result is that railroads are hauling increasing volumes of a volatile commodity, but their capability to handle it isn't maturing at the same rate. In software, the demand/supply imbalance increases the risk of significant operating or project failure - that is, massive delivery overruns or post-delivery operating problems - as skills fail to mature in step with demand.

4. As the skills brought to bear on any given software investment decline, software asset quality - particularly technical quality - will deteriorate. Industrial labor produces volume, not quality. The glut of software assets being produced will be toxic by technical quality standards. As it happens, this will go largely unnoticed, because neither the concept of technical debt nor its commercial ramifications are well understood by the (average) business buyer of software, and because IT governance remains weak in practice. However, poor asset quality will become visible in maintenance and operating costs, and the occasional write-off. A firm forced to make too many write-offs due to poor technical quality will come to see software as disposable rather than durable. That would create deflationary price pressure for labor and increase the demand for industrialization.

As long as the applications for software continue to expand, insufficient numbers of software engineers come into the work force, and software development remains labor intensive, there will be a fundamental supply / demand imbalance. But demand tends to be impatient. Economic and perhaps even political pressure will intensify to industrialize software development. This implies expansion of the secondary labor market, which is less skilled, educated, compensated and mobile than the primary labor market. That would be a lost opportunity: rather than fostering a global wave of knowledge workers, software development will simply bring the next wave of wage workers. We'll look at the reasons for that in the next post.

Tuesday, December 31, 2013

The Corrosive Effects of Complexity

"Much complexity has been deliberately created, to encourage consumers to pay more than they need, or expected." John Kay, The Wrong Sort of Competition in Energy

Modern software assets are complex in both their technical composition and their means of creation.  They are built with multiple programming languages, are expected to conform to OO standards and SOA principles, make use of automated tests and a progressive build pipeline, require a diverse set of skills (UX, developers, QA analysts, etc.) to produce, are used on a multitude of clients (different browsers or native client apps on PC, tablet and smartphone form factors), and are deployed using automated configuration management languages to a combination of physical and virtual environments (cloud and captive data centers).  Software is more complex today than it was less than a generation ago.

Complexity compromises both buyers and sellers of technology services.

Buyers suffer an information gap. Few managers and fewer buyers have first-hand experience in one, let alone all, of the technologies and techniques a team is - or should be - using. This creates information asymmetry between buyer and seller, manager and executor. The more diverse the technologies, the more pronounced the asymmetry.

Sellers suffer a skill gap. Because the demand for software is outstripping the supply of people who can produce it, experienced people are in short supply. There are more people writing their first Android app than their second, more people making their first cloud-based deployment than their second. There are more blog posts on Continuous Delivery than there are people who have practiced it. There are more people filling the role of experience designer than there are people who have designed software that people actually use. And while long-standing concepts like OO, SOA and CI might be better understood than they were just a few years ago, a survey of software assets in any company will quickly reveal that they remain weakly implemented. In a lot of teams, the people are learning what to do as much if not more than doing what they already know.

"Such information asymmetry is inevitable. Phone companies have large departments working on pricing strategies. You have only a minute or two to review your bill."

Information asymmetry favours the seller. Sellers can hide behind complexity more easily than the buyer can wade through it: the seller can weave a narrative of technical and technology factors which the buyer will not understand. The buyer must first disentangle what they've been told before they can begin to interpret what it actually means. This takes time and persistence that most software buyers are unwilling to invest. Even if the buyer suspects a problem with the seller, the buyer hasn't any objective means of assessing the seller's competency to perform. Where complex offerings are concerned, the buyer is at an inherent disadvantage to the seller.

"When you shop, you mostly rely on the reputation of the supplier to give you confidence that you are not being ripped off."

Technology buyers compensate by relying on proxies for competency, such as brand recognition and professional references of a selling firm. But these are controlled narratives: brands are aspirational visions created through advertising (although "Go ahead, be a Tiger" didn't end well...), while references are compromised by selection bias.  A buyer may also defer judgment to a third party, hiring or contracting an expert to provide expertise on their behalf. In each case, the buyer is entering into a one-way trust relationship with somebody else to fulfill their duty of competency.

A buyer inexpert in technology can best compensate by staying focused on outcomes rather than means.

Match cash flows to the seller with the functionality of the asset they deliver. You're buying an asset. Don't pay for promises, frameworks and infrastructure for months on end; pay for what you can see, use and verify.

Look under the hood. There are plenty of tools that will assess the technical quality of just about any codebase and provide easy-to-interpret analyses. A high degree of copy-and-paste is bad. So is a high complexity score. So are methods with a lot of lines in them.
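To make that concrete, here is a minimal sketch of the kind of "look under the hood" check a buyer could commission, assuming (purely for illustration) a Python codebase under a src/ directory; the thresholds are arbitrary and purpose-built static analysis tools go much further, but even a simple script like this turns a vague suspicion into something measurable.

```python
# Sketch of a buyer-commissioned code health check: flag overly long functions
# and count duplicated non-trivial lines. Paths and thresholds are illustrative.
import ast
import hashlib
from collections import Counter
from pathlib import Path

MAX_FUNCTION_LINES = 40  # assumed threshold, not an industry standard


def long_functions(source, filename):
    """Yield (name, line_count) for functions longer than the threshold."""
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                yield node.name, length


def duplicated_line_count(sources):
    """Count non-trivial lines that appear more than once across the codebase."""
    counts = Counter()
    for source in sources:
        for line in source.splitlines():
            stripped = line.strip()
            if len(stripped) > 20:  # skip trivial lines (returns, closers, etc.)
                counts[hashlib.sha1(stripped.encode("utf-8")).hexdigest()] += 1
    return sum(c for c in counts.values() if c > 1)


if __name__ == "__main__":
    files = sorted(Path("src").rglob("*.py"))  # assumed source layout
    sources = [f.read_text(encoding="utf-8") for f in files]
    for path, source in zip(files, sources):
        for name, length in long_functions(source, str(path)):
            print(f"{path}: function '{name}' is {length} lines long")
    print(f"Non-trivial lines appearing more than once: {duplicated_line_count(sources)}")
```

A high count from either check isn't proof of a problem, but it gives the buyer something concrete to ask about.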

Spend time with the people developing your software.  Even if you don't understand the terminology, what people say and how they say it will give you a good indication as to whether you've got a competency problem or not.  Competency is not subject-matter-expertise: technology is a business of open-ended learning, not closed-ended knowledge.  But the learning environment has to be progressive, not blind guesswork.

Accept only black-and-white answers to questions.  Most things in software really are black and white.  Software works or it does not.  The build is triggered with every commit or it is not.  Non-definitive answers suggest obfuscation. "I don't know" is a perfectly valid answer, and preferable to a vague or confusing response.

An inexpert buyer is quickly overwhelmed by the seller's complexity. Staying focused on business outcomes won't dispel that complexity, but it will tilt the engagement back in the buyer's favor.

Saturday, November 30, 2013

Governing IT Investments

In the previous blog, we looked at common misconceptions of IT governance. We also looked at how corporate governance works to better understand what governance is and is not. In this blog, we'll look at how we can implement more comprehensive governance in tech projects.

For corporate tech investments, the topmost governing body is the firm's investment committee. In small companies, the investment committee and the board of directors are one and the same. In large companies, the investment committee acts on behalf of the board of directors to review and approve the allocation of capital to specific projects, usually on the basis of a business case. It also regularly reviews the investment portfolio and makes adjustments to it as the firm's needs change.

The investment committee is composed of senior executives of the firm. Although executives are managers hired by investors to run the business, in this capacity they are making a specific allocation of capital that is generally of too low a level for board consideration. This is not a confusion of responsibilities. The board will have previously approved capital expenditure targets for the year as well as the strategy that makes the investment relevant, and any investment made by the investment committee has to stand up to board scrutiny (e.g., the yield should exceed the firm's cost of capital, or it should substantially remove some business operating risk). The investment decision is left to this committee of the firm's executives - who always have a fiduciary responsibility to shareholders - for the sake of expediency.

The individual shareholders of a company have multiple investments and have limited amounts of time, so they rely on a board of directors to act on their behalf. In the same way, the investment committee members are the shareholders of an IT investment. They invest the firm's capital in a large and diverse portfolio above and beyond just IT investments. They will not have time to hover over each investment they make. So, just as investors form a board to govern a corporation, the investment committee forms a board to govern an individual investment.

In technology projects, we usually associate a "steering committee" with the body that has governance responsibilities for a project. As mentioned in the prior blog, steering committees are too often staffed by senior delivery representatives. This is a mistake. People who govern delivery do so on behalf of investors, not delivery. They must be able to function independently of delivery.

We'll call our governing body a "project board" so as not to confuse it with a traditional "steering committee". A project board that represents investors is composed of:

  • a representative of the corporate investment committee (e.g., somebody from the office of the CFO)
  • a representative from the business organization that will be the principal consumer of the investment (e.g., somebody from the COO's organization)
  • a senior representative of the IT organization (e.g., somebody from the office of the Chief Information Officer or Chief Digital Officer)
  • at least one independent director with experience delivering and implementing similar technology investments.

The program manager responsible for delivery and implementation of the investment is the executive, and interacts with the project board in the same way that a CEO interacts with a board of directors.

Again, notably absent from this board are the delivery representatives we normally associate with a steering committee: technical architects, vendors, infrastructure, and so forth. They may be invited to attend, but because they represent the sell side of the investment and not the buy side, they have no authority within the board itself. Investing them with board authority invites regulatory capture, which undermines independent governance.

The project board has an obligation to make sure that an investment remains viable. It does this primarily by scrutinizing project performance data, the assets under development and the people responsible for delivery. In addition, the board is given some leeway by the investment committee to change the definition of the investment itself.

Let's first look at how the board scrutinizes performance. The board meets regularly and frequently, concentrating on two fundamental questions: will the investment provide value for money? and is it being produced in accordance with all of our expectations? The program executive provides data about the performance of the project and the state of the assets being acquired and developed. The board uses this data, and information about the project its members acquire themselves, to answer these two governance questions. It also reconciles the state of the investment with the justification that was made for it - that is, the underlying business case - to assess whether it is still viable or not. The project board does this every time it meets.

The project board is also granted limited authority to make changes to the definition of the investment itself. It does not need to seek investment committee approval for small changes in the asset or minor increases in the cash required to acquire it if they do not alter the economics of the investment. This enables the project board to negotiate with the delivery executive to exclude development of a relatively minor portion of the business case if the costs are too high, or approve hiring specialists to help with specific technical challenges. The threshold of the project board's authority is that the sum of changes it approves must not invalidate the business case that justified the investment.

Scrutinizing performance and tweaking the parameters of the investment are how the board fulfills the three governance obligations presented in the previous blog. It fulfills its duty of verification by challenging the data the executive provides it and asking for additional data when necessary. It also has the obligation and the means to seek its own data, by e.g., spending time with the delivery team or commissioning an independent team to audit the state of the assets. It fulfills its duty of setting expectations by changing the parameters of the investment within boundaries set by the investment committee (e.g., allowing changes in scope that don't obliterate the investment case). It fulfills its duty of hiring and empowering people by securing specialists or experts should the investment run into trouble, and changing delivery leadership if necessary.

If the board concludes that an investment is on a trajectory where it cannot satisfy its business case, the board goes to the investment committee with a recommended course of action. For example, it may recommend increasing the size of the investment, substantially redefining the investment, or suspending investment outright. The board must then wait for the investment committee decision. The presence of a member of the investment committee on the project board reduces the surprise factor when this needs to happen.

This model of governance is applicable no matter how the investment is being delivered. Teams that practice Agile project management, continuous integration and static code analyses lend themselves particularly well to this because of the frequency and precision of the data they provide about the project and the assets being developed. But any team spending corporate capital should be held to a high standard of transparency. Delivery teams that are more opaque require more intense scrutiny by their board. And, while this clearly fits well with traditional corporate capital investment, it applies to Minimum Viable Product investing as well. MVP investments are a feedback-fueled voyage of discovery to determine whether there is a market for an idea and how to best satisfy it. Without independent governance, the investor is at risk of wantonly vaporizing cash on a quixotic pursuit to find anything that somebody might find appealing.

This is the structure and composition of good governance of an IT investment. Good structure means we have the means to perform good governance. But structure alone does not guarantee good governance. We need to have people who are familiar with making large IT investments, how those investments will be consumed by the business, what the characteristics of good IT assets are, and above all know how to fulfill their duty of curiosity as members of a project board. Good structure will make governance less ineffective, but it's only truly effective with the right people in governance roles.

Thursday, October 31, 2013

Can we stop misusing the word "Governance"?

The word "governance" is misused in IT, particularly in software development.

There are two popular misconceptions. One is that it consists of a steering committee of senior executives with oversight responsibility for delivery; its responsibilities are largely super-management tasks. The other is that it is primarily concerned with compliance with protocols, procedures or regulations, such as ITIL or Sarbanes-Oxley or even coding and architectural standards.

Governance is neither of these things.

The first interpretation leads us to create steering committees staffed with senior managers and vendor reps. This is an in-bred political body of the people who are at the highest levels of those expected to make good on delivery, not an independent body adjudicating (and correcting) the performance of people in delivery. By extension, this makes it a form of self-regulation, and defines governance as nothing more than a fancy word for management. This body doesn't govern. At best, it expedites damage control negotiations among all participants when things go wrong.

The second interpretation relegates governance to an overhead role that polices the organization, searching for violations of rules and policies. This does little to advance solution development, but it does a lot to make an organization afraid of its own shadow, hesitant to take action lest it violate unfamiliar rules or guidelines. Governance is meant to keep us honest, but it isn't meant to keep us in check.

Well, what does it mean to govern?

Let's look at corporate governance. Corporations offer the opportunity for people to take an ownership stake in a business that they think will be a success and offer them financial reward. Such investors are called equity holders or stockholders. In most large corporations, stockholders do not run the business day-to-day. Of course, there are exceptions to this, such as founder-managers who hold the majority of the voting equity (Facebook). But in most corporations, certainly in most large corporations, owners hire managers to run the business.

The interests of ownership and the interests of management are not necessarily aligned. Owners need to know that the management they hired are acting as responsible stewards of their money, are competent at making decisions, and are running the business in accordance with their expectations. While few individual stockholders will have time to do these things, all stockholders collectively have this need. So, owners form a board of directors, who act on their collective behalf. The board is a form of representative government of the owners of the business.

Being a member of a corporate board doesn't require anything more than the ability to garner enough votes from the people who own the business. An activist investor can buy a large bloc of shares and agitate to get both himself and a slate of his choosing nominated to the board (Bill Ackman at JC Penney). People are often nominated to board membership for reasons of vanity (John Paulson has Alan Greenspan on his advisory board) or political connections (Robert Rubin at Citibank).

Competent board participation requires more than just being nominated and showing up. Board members should know something about the industry and the business, bring ideas from outside that industry, and have experience at running a business themselves. (As the financial crisis hit in 2008, it became glaringly obvious that few bank directors had any detailed understanding of either banking or risk.) Good boards also have independent or non-executive directors, people who have no direct involvement with the company as an employee or stockholder. Non-executive directors are brought on principally to advise and challenge on questions of risk, people, strategy and performance.

A board of directors has three obligations to its shareholders: to set expectations, to hire managers to fulfill those expectations, and to verify what management says is going on in the business.

The first - setting expectations - is to charter the business and approve the overall strategy for it. In practice, this means identifying what businesses the company is in or is not in; whether it wants to grow organically or through acquisition (or both), or put itself up for sale. The CEO may come up with what she thinks is a brilliant acquisition, but it is up to the board to approve it. By the same token, a board that wants to grow through acquisition will part ways with a CEO who brings it no deals to consider. The board may choose to diversify or divest, invest or squeeze costs, aggressively grow or minimize revenue erosion, or any number of other strategies. The CEO, CFO, COO and other executives may propose a strategy and figure out how to execute on it, but it is the board who must approve it.

The second - hiring and empowering managers - is the responsibility to recruit the right people to execute the strategy of the business. The board is responsible for hiring key executives - CEO, CFO, President - and possibly other executive roles like Chief Investment Officer, Chief Technology Officer, or Chief Operating Officer, depending on the nature of the firm. The board entrusts those people to build and shape the organization needed to satisfy the expectations set by the board. They serve at the board's discretion: they must perform and demonstrate competency. The board also approves the compensation of those executives, providing incentives for executives to stay and rewarding them for the performance of the firm under their leadership. These divergent interests and obligations are why it is considered poor governance to have the same person be both Chairman of the Board and Chief Executive Officer.

The third - verification - is the duty of the board to challenge what they are being told by the people they have hired. Are management's reports accurate and faithful representations of what's going on in the business? We tend to think of business results as hard numbers. But numbers are easily manipulated. Informal metrics such as weighted sales pipelines are easily fluffed: 100 opportunities of $100,000 each at a 10% close probability yields a sales pipeline of $1,000,000 - but any opportunity without a signature on paper is, from a revenue and cash flow perspective, 0% closed. Formal (regulated) metrics such as profitability are accounting phenomena; it's easy to flatter the P&L with creative accounting. There is an abundance of examples of management misrepresenting results - and boards that rubber-stamp what their hired management feeds them (e.g., Conrad Black's board at Hollinger).

Compliance questions are relevant to fulfilling the duty of verification. Management that plays fast and loose with regulatory obligations creates risks that the board needs to be aware of, correct, and prevent from happening again (whether a rogue trader at UBS or violation of the Formula 1 sporting rules by employees of McLaren). But compliance is a small part of what Nell Minow calls a "duty of curiosity" that each board member has. The board - acting as a representative of investors - cannot take reported results at face value. It must investigate those results. And the board must investigate alternative interpretations of results that management may not fully appreciate: an embedded growth business whose value is depressed by a slow-growth parent, a loss leader that brings in customers to the big revenue generator, a minor initiative that provides a halo to a stodgy business.

The confusion about governance in IT is a result of too narrow a focus. People in technology tend to be operationally as opposed to financially focused, so most cannot imagine a board consisting of people other than those with super-responsibilities for delivery, such as executives from vendor partners. Tech people also tend to be more interested in the technology and the act of creating it than in the business and its non-functional responsibilities. Regulations tend to take on a near mystical quality with technology people, and are consequently given an outsized importance in our understanding of governance.

Good corporate governance requires that we have an independent body that sets expectations, hires and empowers a management team, and verifies that they are delivering results in accordance with our expectations. Good IT governance requires the same. We'll look at how we implement this in IT in part 2.

Monday, September 30, 2013

The Management Revolution that Never Happened

In the 1980s, it seemed we were on the cusp of a revolution in management. American business exited the 1970s in terrible shape. Bureaucracy was discredited. Technocracy was, too: "best practice" was derived from people performing narrowly defined tasks in rigid processes that yielded poor quality products at a high cost. There was a call for more employee participation, engagement, and trust. Tom Peters was strutting the stage telling us about excellence and heroizing the empowered employee (you may remember the yarn about the FedEx employee who called in a helicopter to get packages out of a snowbound location). Behavioural stuff was in the ascendancy. We were about to enter a new era in management.

Until we weren't.

Behaviourally centric techniques - including Agile - are still fringe movements in management, not mainstream practice. The two Freds - Frederick the Great (his organization model for the Prussian military defines most modern organizations) & Frederick Taylor (scientific management) - still rule the management roost. Frederick the Great organized his military like a machine: a large line organization of specialists following standardized procedures using specialized tools, with a small staff organization of process & technical experts to make the line people more productive. Frederick Taylor defined and measured performance down to the task level. We see this today, even in tech firms: large, silo'd teams of specialists, sourced to lowest-common-denominator position specs, with their work process optimized by Six Sigma black belts. Large organizations are no different than they were 30, 50, 75 years ago.

What happened?

First, it's worth looking at what didn't happen.

1. The shift from manufacturing jobs to service jobs was supposed to give rise to networks of independent knowledge workers collaborating to achieve business outcomes. It's true that many modern service jobs require more intellectual activity than manufacturing assembly line jobs of the past. However, just like those manufacturing jobs, modern service jobs are still fragmented and specialized. Think about policy renewal operations at insurance companies, or specialized software developers working in silos: they are information workers, but they are on an information assembly line, doing piecework and passing it on to the next person.

"[Big companies] create all these systems and processes - and then end up with a very small percentage of people who are supposed to solve complex problems, while the other 98% of people just execute." Wall Street Journal, 24 December 2007.

The modern industrial service economy has a few knowledge workers, and lots and lots of drones. It's no different from the manufacturing economy of yore.

2. Microcomputing was expected to change the information processing patterns of businesses, enabling better analysis and decision support at lower levels of the organization. Ironically, it had the opposite effect. Microcomputers improved the efficiency of data collection and made it easy to consolidate operational data. This didn't erode centralized decision making; it brought it to a new level.

Second, there are things that have reinforced the command-and-control style of management.

1. "Business as a machine" - a set of moving parts working in coordination to consistently produce output - remains the dominant organizational metaphor. If we're going to have organizations of networked information workers, we have to embrace a different metaphor: the organization as a brain. Machines orchestrate a handful of moving parts that interact with each other in predefined, repetitive patterns. Brain cells connect via trillions of synapses in adaptable and complex ways. The "networked organization" functions because its members develop complex communication patterns. Unfortunately, it is much harder to explain how things get done in a network organization than it is in a machine organization: general comprehension of neuroscience hasn't improved much in the past 25 years, whereas it is easy for people to understand the interplay of specialized components in a simple machine.

2. Service businesses grew at scale, and the reaction to scale is hierarchy, process, and command & control. As I've written previously, the business of software development hasn't been immune to these pressures.

3. The appetite for operational data has increased significantly. A 2007 column in the WSJ pointed out that management by objective and total quality management have been replaced by a new trend: management by data. Previous management techniques are derided by the data proponents as "faith, fear, superstition [or] mindless imitation".

4. Service businesses (e.g., business process outsourcing) moved service jobs to emerging market countries where, owing to economic and perhaps even cultural factors, command and control was easily applied and willingly accepted.

5. In the last 12 years, debt has been cheap - cheaper than equity. In 1990, 2 year and 10 year Treasurys were paying 8%. In 2002, the 2 year paid 3.5% and the 10 year paid 5%. Today (September 2013), they're paying < 0.5% and 2.64%, respectively. When debt is cheap, CFOs swap equity for debt. When we issue debt, we are committing future cash flows to interest payments to our bondholders. And unlike household debts, most corporate debt is rolled-over. To make the debt affordable we need to keep the interest rates low, which we influence by having a high credit rating. Stable cash flows and high credit ratings come from predictable business operations. As more corporate funding comes from debt instead of equity, it puts the squeeze on operations to be predictable. With predictability comes pressure for control. Those new management practices that emerged to empower individuals and teams advertise themselves as providing better "flexibility", not "control". They are anathema to businesses with financing that demands precise control.

6. In the past decade, corporate ownership has been concentrated in fewer and fewer hands. This has happened through equity buybacks (in 2005-8 this was usually funded with debt, since 2009 it's just as likely to be funded with excess cash flow) and dual-class share structures (Groupon, Facebook, News Corp, etc.)

7. The external concentration of ownership coincided with internal concentration of decision making. Speaking from experience, around 2006 the ceiling on discretionary spending decisions dropped precipitously in many companies. In most large companies, a v-level manager used to be able to make capital decisions up to $1m. Empirically, I've seen that drop to $100k in most large firms.

8. The notion of "best practice" has been in vogue again for at least a decade.

9. Recessions in 2001 (when businesses reined in unrestrained tech spending) and 2008 (when businesses reined in all spending) tightened belts and increased operational scrutiny.

10. I also suspect that business education has shifted toward hard sciences like finance and away from soft sciences like management. The study of org behaviours was core b-school curriculum in the 1980s. It appears this has moved into human resources classes, which emphasize org structures. This treats organizational dynamics as a technical problem, not a behavioural one. I haven't done much formal research in this area, but if it's true, it means we've created a generation of business executors at the cost of losing a generation of business managers.

What does all this mean?

The Freds will continue to dominate management practice for the foreseeable future. Corporate profitability and cash flows have been strong, especially since the 2008 financial crisis. That, twinned with ownership and decision-making authority concentrated in fewer hands, means that there is no incentive to change and, more importantly, there is actually a disincentive to do so. Among middle managers, the machine metaphor offers the path of least effort and least resistance. It also means that when large companies adopt alternative approaches to management & organization at scale - for example, when large corporates decide to "go Agile" - the fundamental practices will be co-opted and subordinated to the prevailing command-and-control systems.

This isn't to say that alternative approaches to management are dead, or that they have no future. It is to say that in the absence of serious upheaval - the destabilization / disruption of established organizations, or the formation of countervailing power to the trends above - the alternatives to the Freds will thrive only on the margins (in pockets within organizations) and in emerging firms (e.g., equity-funded tech start-ups).

This leads to a more positive way of looking at it: it isn't that the day of post-Fred management & organization has come and gone, it's that it is yet to come. The increasing disruption caused by technology in everything from retail to education to NGOs will defy command and control management.

But that still raises the question: after the disruption, when the surviving disruptors mature and grow, will they eventually return to the Freds?