I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Wednesday, December 31, 2014

The CIO and M&A, Part II

Integrating businesses is no small task.  Established workflows, systems and tools are vigorously defended yet poorly understood.  Fearing for their jobs, people will equate systemic knowledge with job security.  Many in the acquired business will cling to their legacy identity. Organizational politics - and power plays - will alter tactical integration plans.  But it is the business goals that investors signed up for - not the internal special interests - that will determine the fate of the leadership responsible for the integration.  How do we stay focused on these?

Be a business leader, not a technology partner. Technology leadership must be fluent in the broader business context of the integration and be prepared to make decisions on behalf of the business, not just the technology applied to the business.  This means being - or bringing in - business process analysts to simplify the operations, and with them the technology, of the business itself.

I wrote last month that most material on the role of IT in M&A is platitudes, and this certainly smacks of one.  But the fact is, this is not something that IT departments have in recent years positioned themselves to do.  The change in moniker from "Information Systems" to "Information Technology" has been a detriment to CIOs: the word "systems" implied responsibility for business and technology alike, whereas the word "technology" suggests the department is responsible solely for the tech.  As a result, there is less expectation that tech will shape business decisions as much as it will carry them out.  It doesn't help that business analysis skills remain low in captive IT.

M&A presents captive IT with the opportunity to be the "resident adult" in sorting out intransigent participants in an integration. However, that opportunity exists only if it is prepared to act as a business leader and not merely a technology supplier.

Slowly strangle, don't wholesale replace. Existing systems are complex: they have highly specialized rules that were developed over a number of years, they were developed with very different architectural principles than would be applied today, and the older the underlying technology the scarcer the technological know-how there is to incrementally change them.  This makes it easy to make the case that dueling systems are incompatible with one another, are no less valuable owing to the criticality of the specific edge cases they accommodate, and can only be replaced through a large "enterprise"-scale rewrite.  Thus we have no choice but to maintain the status quo, and only costly and high-risk change can possibly sweep it away.

The headwinds to change blow fiercely; there are always plenty of reasons not to do something.

Unless both organizations have extraordinarily geriatric technology, proposing an enterprise refit will be met with skepticism in the boardroom that will cast doubt on our leadership capability.  Even a big-bang retrofit of one incumbent technology to take the place of another will receive only a grudging endorsement.  Both scenarios also create tactical confusion: should existing systems be modified to meet immediate business needs, or do we wait for the big-bang replacement?  And what do we do if that big-bang replacement gets delayed?

We avoid this trap by strangling existing software.  In effect, we allow our portfolio of assets to continue to evolve with the business while simultaneously deprecating and retiring them.  We do this gradually, identifying specific functionality that can be integrated and replaced.  We have the practices and technologies today - from continuous integration to feature toggles to branch by abstraction - to make this a matter of will.  It is also palatable to the board because it gives us a means to show how we are structurally reducing our cost of operations in a manner that will support the business in the short-term and sustain it in the long-term, not a slash-and-burn approach that makes it thinner at the cost of making it more sclerotic.
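
To make the mechanics concrete, here is a minimal sketch (in Python, with invented class and toggle names) of how branch by abstraction and a feature toggle let new code take over from legacy code one capability at a time; it illustrates the pattern, not any particular system discussed here.

```python
# Illustrative sketch only: names and the toggle mechanism are hypothetical.

class LegacyOrderService:
    """Wraps the incumbent system's order lookup (e.g., via its database or API)."""
    def find_order(self, order_id):
        return {"id": order_id, "source": "legacy"}

class NewOrderService:
    """The replacement implementation, built behind the same interface."""
    def find_order(self, order_id):
        return {"id": order_id, "source": "new"}

class OrderServiceFacade:
    """Branch by abstraction: callers depend on this facade, while a feature
    toggle decides which implementation actually serves the request."""
    def __init__(self, toggles):
        self._toggles = toggles
        self._legacy = LegacyOrderService()
        self._new = NewOrderService()

    def find_order(self, order_id):
        impl = self._new if self._toggles.get("orders_on_new_system") else self._legacy
        return impl.find_order(order_id)

# Flip the toggle per environment (or per customer segment) to shift traffic
# gradually, then delete LegacyOrderService once nothing routes to it.
service = OrderServiceFacade(toggles={"orders_on_new_system": False})
print(service.find_order(42))
```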

This will mean making some unpleasant decisions. We may have to create new code - a lot of new code - to integrate old code on our way to fully retiring it.  We may have to integrate in unpalatable ways (e.g., at the database level) where legacy systems do not support modern architectural principles.  And there will be times when the extent of integration will make our collection of assets very complicated.  This means that our measure of success isn't just getting things deployed, but getting things removed.  To the CIO, the critical measure is a composite "simplicity index" of all IT systems, not "integration progress" in simply making systems work together.
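
The "simplicity index" is not defined here; purely as an illustration, one might compose it from counts of systems, point-to-point interfaces, and duplicated capabilities, weighted and tracked against the day-one baseline of the integration.

```python
# Hypothetical composition of a "simplicity index"; the counts and weights are
# invented, the point is to trend the estate's moving parts over time.

WEIGHTS = {"systems": 1.0, "interfaces": 1.0, "duplicated_capabilities": 2.0}

def complexity_score(estate):
    """estate: dict of counts, e.g. {"systems": 240, "interfaces": 900, ...}"""
    return sum(WEIGHTS[k] * estate.get(k, 0) for k in WEIGHTS)

def simplicity_index(current, baseline):
    """Positive means the estate is simpler than at day one; negative means the
    integration has added net complexity even if 'things work together'."""
    base = complexity_score(baseline)
    return round(100.0 * (base - complexity_score(current)) / base, 1)

day_one = {"systems": 240, "interfaces": 900, "duplicated_capabilities": 60}
one_year = {"systems": 205, "interfaces": 940, "duplicated_capabilities": 35}
print(simplicity_index(one_year, day_one))  # positive only if retirements outpace new glue code
```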

Insist on excellence in engineering.  When the clock is ticking, there will be temptation and pressure to cut corners.  We can create the appearance of integration with quick and dirty solutions, and all that matters in the end is that it works, not how it works.

The phrase "we'll fix it later" probably has the lowest conversion rate of any statement made in business. An implcit expectation in M&A is that we are investing in simplicity and robustness, not complexity and brittleness.  The reality is, we're not going to get money later to pay down technical or operational debt we take on. If the combined landscape has more moving parts and fragmented institutional knowledge than the sum of the parts of the combining companies, we'll have a higher cost of operations and, therefore, have failed.

Investigate, measure, and draw attention to quality of engineering.  Instrument all code, looking specifically for complexity, duplication, testability, and test coverage.  Incentivise good engineering practices and reward teams that make structural and procedural improvements.  Take deliberate action against poor engineering decisions: delay an implementation rather than accept a poor one.  We have to live with the consequences of our decisions; make clear that we have inviolable standards of performance.
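
As a toy illustration of what "instrument all code" can mean, the sketch below (Python, standard library only) computes a crude branching-count proxy for complexity; a real programme would use established static-analysis and coverage tooling, but the point is to trend numbers rather than debate opinions.

```python
# Toy complexity proxy: count branching constructs as a rough stand-in for
# cyclomatic complexity, so engineering quality can be trended, not argued.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def rough_complexity(source: str) -> int:
    """Return 1 plus the number of branching constructs in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def settle(invoice):
    if invoice.overdue and invoice.amount > 0:
        for line in invoice.lines:
            if line.disputed:
                return None
    return invoice.amount
"""
print(rough_complexity(sample))  # trend this per module across releases
```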

Nobody is irreplaceable.  Inheriting somebody else's code is never much fun.  We have to deconstruct what other people were thinking at the time they created it, while simultaneously trying to understand the business context that existed at that time versus the context that exists today.  It's much easier to fight for funds to perpetuate a legacy team than it is to take responsibility for cleaning it up.

Two things to remember: it's just code, and the people behind both the code and the business usually don't have as much systemic or contextual knowledge as we project onto them.

To the first point, most code is not as algorithmically complex as we are told it is.  The implementation might be complex, but implementation decisions are generally easy to discern (somebody really liked Java interfaces, so everything is implemented as an interface).  Once we figure that out, it's fairly straightforward to restructure the code to make it more testable and to increase test coverage.  This is true for current and legacy languages alike.

To the second point, don't assume that business leaders have as solid a grip as you'd hope on why they do the things they do.  Some years ago, I was working with a firm to redesign fleet maintenance operations.  The existing suite of software tools was a combination of RPG, Visual Basic, Java and Excel, tied together with a number of manual integration steps.  The business operations leaders could only understand their operations in terms of what the technology allowed them to do.  We had to understand their business operations better than they did to get them to understand the actual value stream.

Do not be held hostage by tribal knowledge or the perception of that tribal knowledge.  Reward people for knowledge sharing and provide career paths for people to move beyond system caretaker roles to leadership roles that build on their experience in mission-critical systems and their knowledge of how the business itself operates.  Do not be afraid to cut people loose who are obstacles to change, no matter how entrenched they are perceived to be. Best of all, replacing legacy systems will reduce pockets of that knowledge: we start the clock ticking on it the minute we start to retire it.

Put your personal credibility on the line for these things.  A CIO has only as much time as the M&A horizon to create a common culture within the technology organization. Whatever the cultural norms are of the two firms at the start, insisting on engineering excellence, business leadership, and gradual improvement while being willing to accept responsibility for cutting loose tribal knowledge sets a decisive tone of change within an organization.  This creates both a new mission and a new identity for everybody.

Most importantly, we have to make it clear to all and sundry that we are every bit as much on the line for these things as they are.  We will take responsibility for a delay in implementation where quality is sub-standard.  We will develop new leaders in our organization rather than being forced to retain people in existing roles.  Our actions will speak louder than our words.

Nobody is irreplaceable.  If we fail to deliver, we'll find that out to be true for us, too.

Sunday, November 30, 2014

The CIO and M&A, Part I

"It is hard not to be cynical about this. M&A is a great process for creating fees for bankers, and for destroying the value held by shareholders."

-- John Authers, writing in the Financial Times

Industries tend to go through waves of deal-making. Sometimes it is divestiture or separation: sprawling firms that serve different buyers or markets don't achieve much in the way of operating efficiency, and a "conglomerate discount" priced into their equity means there is value that can be released by dividing a firm into multiple businesses. This is something H-P did in the late 1990s, and is about to do again. But usually, deals are acquisitions: competitors merge to gain more power over costs and prices (United Airlines merging with Continental); large firms acquire smaller ones to enhance their core (Yahoo has been on an acquisition tear in recent years), diversify their markets, or simply to prevent a firm from falling into the hands of competitors (Microsoft's acquisition of Skype).

The justification for a merger or acquisition usually involves some quantification of synergistic benefit: the two businesses have so much in common they can achieve greater profitability together far sooner than they would be able to on their own. This can be achieved through sales: Company A and Company B sell complementary products to the same buyers; a merger of the two would allow for cross-selling, resulting in larger and more lucrative sales. It can also be achieved through operating efficiencies: Company A and Company B can operate just as effectively with, say, 70% or less of their combined procurement, finance, accounting, HR and IT organizations.

The expected synergistic benefits to revenue and costs are calculated, then taxed and capitalized, to come up with a hard economic value to doing a deal. This makes them important to the CEOs involved because they help them sell their respective boards - and shareholders - on doing a deal. Their importance increases in direct proportion to the premium an acquirer is willing to pay to buy another firm. Synergies can be substantial: the proposed synergies of the merger between Office Max and Office Depot exceeded the combined market capitalization of the two firms.
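
As a rough, invented-numbers illustration of "taxed and capitalized": annual synergies less tax, treated as a perpetuity at the acquirer's cost of capital, yield the headline figure used to justify the premium.

```python
# Back-of-the-envelope illustration; all figures are assumptions for the example.
annual_pretax_synergies = 300e6   # cost and revenue synergies per year (assumed)
tax_rate = 0.25
cost_of_capital = 0.09

after_tax = annual_pretax_synergies * (1 - tax_rate)
capitalized_value = after_tax / cost_of_capital   # perpetuity approximation
print(f"${capitalized_value / 1e9:.2f}bn of deal value")  # ~$2.50bn
```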

* * *

"Most deals fail to create value because the buyer paid too much, or because the acquirer failed in the difficult task of sticking two companies together. Glossy proclamations of new strategic visions often boil down to a prosaic cost-cutting exercise, or into a failure of implementation."

IT is at the center of deal synergy. Obviously, we don't want to pay to maintain multiple e-Commerce sites or pay licensing fees for multiple ERP systems. But redundant IT systems also increase the cost of doing business in less obvious ways: if we need people in finance to write custom reports to combine financial reporting across the two businesses, the merger has increased our total cost of operations. We need to combine systems, and do so quickly.

There are plenty of cookie-cutter frameworks for combining businesses, even their technology systems and operations. This also means there are plenty of platitudes to go round: "Involve the CIO as part of the executive team from the start" and "IT doesn't work in isolation". True, but not very helpful. Rubber hits the road in M&A in the actual combination - and reduction - of systems. Platitudes will not change an ugly operating reality.

IT in M&A can be a very messy business. For example, suppose Company A acquires Company B and intends to move Company B - running a highly customized & partially proprietary ERP - over to Company A's similarly customized, but commercial-off-the-shelf, ERP system. Company B has very different business processes and communication channels from Company A. The new divisional leader for that part of the combined company is from Company B and decides he wants those processes applied to the combined business. IT must now make changes to Company A's ERP system and dependent code to accommodate this change, in addition to migrating data. Costs just went up and the consolidation timeline just got longer - and depending on your point of view, it looks like an IT problem.

This also applies to the mundane stuff. For example, IT learns that the data and data structures in Company B don't exactly line up with Company A, so data migration is going to take more effort than originally expected. IT responds by creating data warehouses to house consolidated data so that Finance can run its consolidated reports. Costs just went up, as did operational complexity: those warehouses - and the ETL that refreshes them - have to be maintained and updated.

When companies pay a premium to the fair value of net assets of a business they acquire, the excess is recorded on the balance sheet as goodwill. In theory, the value of the combined business should increase as synergies are realized, justifying the goodwill. The reality - and core to Mr. Authers' comments above - is that companies have a tendency to pay too much in acquisitions and end up taking a writedown. One study found that between 2003 and 2009, some 4,600 firms wrote down goodwill due to impairment, amounting to 20% of total recorded goodwill. The study went on to report that there are some serious ramifications to this. For one thing, "the news of goodwill write-off [...] precede[s] CEO resignation and can trigger shareholder lawsuit." For another, "Firms with goodwill write-offs significantly under-perform in future." (Feng Gu, Goodwill and Goodwill Write-off: Economic and Accounting Implications)
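
A simplified, invented-numbers sketch of those mechanics: the premium over the fair value of net assets is booked as goodwill, and a later reassessment that falls short of the purchase price forces a writedown.

```python
# Simplified mechanics with assumed figures; real impairment testing is more involved.
purchase_price = 5.0e9
fair_value_net_assets = 3.2e9

goodwill = purchase_price - fair_value_net_assets          # booked at the deal's close
recoverable_value_later = 4.0e9                            # post-deal reassessment (assumed)
impairment = max(0.0, purchase_price - recoverable_value_later)
print(f"{goodwill / 1e9:.1f}bn goodwill booked, {impairment / 1e9:.1f}bn written off")
```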

So, in an M&A situation, there's a lot on the line for the CIO: you don't want to be the reason the boss loses his job, and you don't want to be a reason why the stock price underperforms. But your operating reality is messy: you're beholden to tribal knowledge of systems you've inherited through the acquisition, you're at the mercy of business decisions that are made for local optimization or simply local convenience, and you're under the gun to enable finance and accounting to create the patina of a combined business for the benefit of the people who approved the deal. As CIO, you'll be under pressure to extend and even bump your payroll to prevent loss of knowledge, create teams to chase business decisions with new software, and take on technical and operational debt to make good on immediate needs.

There is no playbook for this.

Next month, we'll look at how a CIO can square this circle.

Friday, October 31, 2014

Can a Business Rent a Core Capability?

Tech utilities - things that automate administration, enable communication or improve employee productivity - started as a labor expense, became a capital expense, and have now become a rent payment. This final state is an efficient economic relationship for buyer and seller. The buyer has more flattering financial statements and can negotiate for non-core services at a gross level (e.g., a single cost per employee). The seller's income is the rent they can extract from buyers. Utility sellers tend to enjoy monopolistic or oligopolistic market conditions, but there is still room for optimization and even disruption that drives prices down.

But disruptive tech is not a utility. It needs to be developed, and developing it requires a capability in technology (design, coding, testing, etc.). In recent years the trend has been toward renting that capability rather than owning it. This raises a question: can we rent the capability needed to deliver disruptive tech? If today's disruptive tech becomes tomorrow's status quo, doesn't that mean it needs to be part of a firm's core competency?

Two significant factors stand out when considering this question: the state of evolution of the (would-be) disruptive tech, and the extent to which it is genuinely disruptive.

Let's look at the latter part - the extent of disruption - first. New technologies disrupt by creating new behaviours and expectations among their users. In the process, they siphon market share by shifting market participants from one activity to another. Obtaining a book changed from making a trip to a bookstore to an online purchase that triggered a package shipment to an electronic distribution. Social media is a form of entertainment that shifts people's allocation of their leisure time.

Creating new behaviours is more disruptive than being the first to apply technologies established in one market segment into another: streaming video to personal technology on airplanes is interesting, and doubtless it will allow airlines to eliminate in-seat entertainment systems that add weight and burn jet fuel, but it brings established behaviours into a different context. Still, this is more disruptive than developing technologies that mimic existing functionality in the same segment: being late to the game with a "me, too" strategy does not generate much in the way of behaviour change.

With this in mind, let's consider the other dimension, the state of evolution of that technology: is it in research, is it an arms race, or is it a mature solution?

A company investigating a disruptive technology for its potential doesn't have to own the means by which it does that investigation. An exploratory investment is generally developed rapidly and deployed frequently to accelerate the rate of exploration. The differentiating values of effective exploration are speed, adaptability, and the ability to interpret the feedback from the experiment. It may succeed, or be a mild success, or be a complete bust. It's safe to rent as this is a non-operating capability. That a firm rents this capability suggests it is slow-growth, run for efficiency, and lacking an R&D capability, but this describes a lot of firms: oil majors have separated into refiners and E&P, and pharma firms have similarly split into generics and growth / R&D firms.

However, a disruptive technology that rapidly gains adoption must become a core competency, and quickly at that. This is a phase when a firm is learning new rules for competition. Firms must learn what works and what doesn't (what we do and don't do), and what matters and what doesn't (what we measure and pay attention to, and what is just a distraction). Successful firms have to rapidly master new business operations under the pressure of scale and growth. Success is equal parts business and tech: the business is changing and the tech is brand new. Renting the tech capability puts a company at a disadvantage because it will not develop core competencies, fundamental skills, communication patterns, and organizational leaders critical to its "new normal". In a tech arms race, it's not safe to rent.

The extent of disruption determines the feasibility of renting. It is safer to rent capability where the tech follows established patterns. When a firm consumes established technologies to create products and solutions for a specific vertical, there is greater value in the business knowledge because the tech contributes less value to the solution. This makes it safer to rent the tech capability. The less disruptive, the less the risk: there's little point in owning a capability with a mission to mimic somebody else's tech.

Of course, the economics of renting or owning are muddled by other market forces. A start-up compensating employees with equity is not paying market value for its labor and is therefore renting, similar to how a lender owns a house that a borrower lives in. And tech buyers have no choice but to rent tech labor from services firms because of labor scarcity.

A company might have to rent because of prevailing labor market conditions, or because renting gives it a shot-in-the-arm that allows it to catch up when it is caught unprepared by a technology shift. But as Machiavelli counseled, one holds conquered territory with one's own forces, not mercenaries. A company has to own its core.

Tuesday, September 30, 2014

Tech: From Owning to Renting - to Owning Again?

In the 1970s, the predominant business strategy was vertical integration: own the value chain from raw materials to retail outlets. The research of the time supported this. The Profit Impact of Market Strategy database produced by the Strategic Planning Institute concluded that diverse & vertically integrated businesses were significantly more profitable than narrow and focused businesses (PIMS 1977, slide 67). Michael Porter argued in Competitive Strategy that vertical integration enabled cost leadership, which was more likely to win market share than a strategy of differentiation. Vertical integration also created a barrier to entry to competitors, and provided defense against powerful buyers & suppliers and the threat of substitutes (Porter 1980, pages 9-15, 35-37). Corporate strategies assumed long-term existence and growth; this made employer-employee relationships more durable, so a firm could be a destination employer for people across a diverse range of roles. Companies like American Telephone & Telegraph and General Electric could pursue diversification and vertical integration in no small part because they could be all things to all people.

Business strategy changed in the 1990s, insisting that companies were better off focusing on "core competencies" while renting anything deemed non-core. A retailer, for example, should concentrate on sourcing merchandise to sell and developing the outlets through which to sell it, but rent the accountants, IT, back-office staff and real estate to operate and administrate the business. The thinking was that a firm could not be expert at doing everything, it would have cost bloat in non-core areas and lack the expertise to contain it, and that firms needed to pay ruthless attention to their core as competition would only intensify. It also became accepted that a firm could not be a destination employer for non-core employees and therefore could not expect to be attractive to top flight people across the board. Outsourcing for business and technology services reduced the number of employees and associated costs, allowed significant operating costs to be negotiated on a gross basis rather than an individual one, and made labor arbitrage accessible to firms for which it would have been too risky and difficult to pursue by themselves.

This change happened quickly. Perceptions of corporate durability imploded in less than a decade through consolidation (increase in M&A) and bankruptcy (over-leveraged with junk-grade debt and unable to make debt service payments). This eroded the employer-employee relationship and made any single firm less broadly appealing. Kodak outsourced IT to IBM in 1989, ushering in large-scale IT outsourcing that fueled the rapid growth of firms like Accenture and TCS over the ensuing two decades. GE created a business process outsourcer - Genpact - in the late 1990s, spinning it off as an independent company within 10 years. Firms separated asset ownership from asset usage, creating holding companies that own the real estate and rent it to subsidiaries that run the businesses that occupy it. In a relatively short span of time, companies went from owning everything and renting nothing, to owning little and renting everything, with lots of financial intermediaries springing up to minimize tax burdens and squeeze rents.

In technology, cloud computing extends this story arc. Prior to the advent of computers, business was labor intensive. When companies first invested in computer technology by buying mainframes and hiring programmers, they did so to create efficiencies in their administrative operations. Companies reduced their labor expense, and the hardware and software they acquired appeared on the balance sheet as capital investments in the business. In the process, they also made business application development a "core competency" of their business. Within 40 years, most of those administrative processes were standardized by commercial-off-the-shelf ERP products. But that ERP solution still appeared as an asset on the balance sheet because software licenses, customization, and server infrastructure were capitalized assets of the firm, even if the people who led the customization and implementation were rented and not employees.

This is beginning to change. Cloud, SaaS, and BYOD allow firms to rent technology rather than own it. As businesses have consumed increasing amounts of computer technology over the years - communication tools, productivity tools, business administrative software, servers, routers and end-user devices - company balance sheets have become increasingly "tech asset heavy". Renting makes their balance sheets "tech asset light". Rent payments put a dent in cash flow from operations, and the cost of renting can be higher than the cost of owning. However, renting improves performance ratios such as "return on assets" and "capital intensity". This flatters the CEO and the CFO.
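
A toy calculation (invented figures) shows why this flatters the ratios: shifting owned tech assets off the balance sheet shrinks the denominator of return on assets faster than rent dents the numerator.

```python
# All numbers are assumptions for illustration only.
operating_income = 800e6
total_assets_owned = 10_000e6        # includes 1.5bn of owned tech assets
owned_tech = 1_500e6
extra_rent_over_depreciation = 40e6  # assumed premium for renting vs owning

roa_owning = operating_income / total_assets_owned
roa_renting = (operating_income - extra_rent_over_depreciation) / (total_assets_owned - owned_tech)
print(f"{roa_owning:.1%} owning vs {roa_renting:.1%} renting")  # 8.0% vs 8.9%
```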

Early computer technology transferred labor intensity of business activity (lots of clerks on the payroll, performing manual chores) to capital intensity (computers & software automating these chores, booked as capital assets), but it was still accounted for as something the business owned rather than rented. Cloud, SaaS and BYOD will drive out the lingering capital intensity by shifting the technology assets from "own" to "rent". A business still has costs associated with these things (it has to generate invoices and collect from customers), but as the underlying functions become more and more commoditized they offer no strategic advantage, and are instead treated as a tax on doing business. There is still room for innovation - new firms will emerge to offer new ways of providing these services to minimize this "tax" - but these are commodity offerings competing in a race to the bottom on price.

Businesses originally owned their tech capability, because that was the prevailing way that businesses operated, computers and computer skills were scarce, and firms derived significant competitive advantage from being early adopters. That changed because strategic thinking changed, technology became commonplace, and a lot of business technology became utilitarian. But what's true for utility tech is not true for disruptive tech, that is, tech that disrupts business models. Businesses are no longer consumers of tech, they are becoming tech. If every firm is a software firm, does technology need to return to the core? Will the business practices that developed and evolved with renting be adaptable, irrelevant, or an outright encumbrance? We'll look at those questions in the next post.

Sunday, August 31, 2014

Why Commercial Contracts Matter In Agile Software Development

"We value customer collaboration over contract negotiation." -- The Agile Manifesto

Contracts for software development have historically included language that specifically defines the software being developed. This protects the buyer from paying for an asset that does not serve its business needs, the seller from requirements drift or expansion, and allows both parties to agree to duration and cost.  Traditional development contracts also stipulate the process by which the software is to be developed and tested, and how changes will be accommodated.  Since the way something is produced has direct bearing on the quality of what gets produced, specifying the process protects the buyer from slipshod work practices, and gives the seller a formal framework to control the development lifecycle.  The parties are trading a long-dated asset for a series of short-dated cash flows; being specific about the work being produced and the means of production is a means of protecting each party's economic interests throughout.

But with Agile becoming more and more widespread, contracts that stipulate requirements, team composition and change control processes lock both buyer and seller into a commercial arrangement that is an encumbrance (requiring constant amendment to accommodate changes) and may actually interfere with delivery.  Agile development requires more flexible contracting.  So it comes as little surprise that a core tenet of the Agile manifesto is that people involved in developing software should constructively collaborate with one another to deliver a valuable business asset, not conform to a strict protocol that defines allowable behaviours.

The rigidity of traditional contracts has led firms in the Agile development business to experiment with looser contractual language.  In principle, this makes sense.  Agile teams deliver more frequently, so the lag between the buyer's cash and the seller's delivery isn't as great as it is with traditional software development.  Frequent showcases and deliveries improve trust and confidence between buyer and seller in ways that can't be codified in the language of a contract.  The benefit of looser language is that as teams learn more about the actual business needs and technical complexity of what they are developing, they have more freedom to act (and react) as the situation warrants.  There should also be limited downside. Even if the contract is vague, a buyer won't pay if she doesn't like what the seller has produced, and a seller will suspend work if he has an unwilling or incompetent buyer.  The people involved have maximum leeway to get stuff done and they'll let each other know in the most direct means possible when they're not happy.

In place of precisely defined language, it isn't uncommon to see development contracts that capture none of the intent whatsoever, and define only a supply of an unspecified number of people for an indefinite period of time.  If the understanding ex-contract is that the seller is there to develop software for a particular purpose in a particular manner, the contract is, in theory, at best a formality and at worst a waste of time.  If the development work is treated as R&D rather than a capital investment, doing this doesn't flout accounting practices (which require strict definition of capital work).  And if one party is disappointed with another during development, they'll make that abundantly clear by suspending performance until they are happy.  This is the triumph of the desire to get stuff done over the formality of contract law.

"That is, while there is value in the items on the right, we value the items on the left more."

But contracts do matter.

A contract expresses the value each party places in the other and the respect they have for one another.  The preamble language defines who the parties are or think they are as an acknowledgement of the strengths that each bring to the relationship: they are disruptors ("re-imagining the classroom for the 21st century"), or market leaders ("the nation's largest provider of financial services to retirees"), or specialists ("the leading provider of software development services to municipal governments in the tri-state area").  This underscores goals of the buyer (the disruptor is buying an innovative solution; the market leader wants cost efficiencies) and the capability of the seller (technical, process or subject matter expertise) and why the two parties want to work together.  In the contract, if the buyer is just a business no different from any other, and the seller is just a provider of people, the relationship will eventually come to reflect this, too.

A contract communicates the outcome the parties are working toward.  If the buyer wants an asset, they are committing to developing an asset in conjunction with a partner, and both buyer and seller are drivers in achieving that goal.  If the buyer wants only to rent capacity from the seller, the seller will be a passenger in the buyer's goals.

A contract defines a bond between two organizations, a bond that is meant to be durable in good times and in bad.  The more closely twined buyer and seller, the more likely they are to resolve their differences and difficulties. The more disposable a relationship, the more likely one party will dispense with the other when greener pastures beckon.

We don't want contracts that are ignorant of the need for flexibility.  But convenience erodes commitment: flexibility achieved through ambiguity undermines a sense of partnership. It is better to achieve flexibility through provisions that define the parties, define the mission, and define a bond.  This creates a commitment to the principles of a relationship.

Contracts exist between companies; relationships exist among people.  A relationship will always trump what's written in a contract.  But people come and go, and relationships are constantly tested.  A contract easily exited undermines the commitment of a relationship.  Good contracts are not an encumbrance to delivery: they strengthen the commercial ecosystem through which delivery happens.

Thursday, July 31, 2014

The Fine Line Between "Stretch Role" and "Unqualified", Part II: The Growth Mask

When a business or a profession grows faster than the labor market it draws from, it suffers a capability deficiency: there simply aren't enough experienced people to go round. It also suffers a leadership deficiency: there aren't enough people with cross-discipline experience to make competent business decisions. When there are more leadership jobs than there are qualified leaders to fill them, people will be given responsibilities they would not otherwise have. Even though hiring decisions are made independently, macro forces can be responsible for people landing in stretch roles.

Volume Cures All Ills

Growth - be it a function of runaway demand or insatiable investor appetite - increases a business' tolerance for leaders who are coming to terms with their responsibilities. In no small part, this is because the performance of a rapidly growing business can be difficult to measure, while its business decisions - most importantly, where they concern cash - are blatantly obvious.

For example, early stage tech businesses tend to lack revenue and profitability but attract increasing numbers of users. They are measured on indicators such as total number of user accounts and number of active users. These are non-financial measures that are calculated differently across firms (e.g., a single person may have multiple accounts, while "active" is a relative term), making comparisons difficult. Because there is little history of tracking these types of metrics in business, it isn't clear how they truly relate to the long-term valuation of a business. Although the performance measures of a growth firm in an emergent industry are a bit foggy, the business decisions are crystal clear. The most important decision - what to do with cash - is cut and dried: plow it back into the business to fuel growth.

By comparison, well established businesses in industries like air travel or retail banking are expected to be predictable. They are meticulously measured on established accounting metrics such as earnings and cash flow, measures that are easy to understand and comparable within and across industries. But their business decisions - again, particularly those to do with cash distribution - are more complex: do we invest in the core for efficiency, diversify for growth, or distribute cash to shareholders? Stakeholders - employees, investors, customers - in a growing business will be tolerant of novice leadership; stakeholders in a mature one will not.

By way of example, social media firms had the benefit of time to adjust their products to be mobile centric rather than desktop centric. Although the chattering classes raised concerns, the total growth of social media prevented a sense of crisis from cratering equity values or inciting mass employee exodus. In contrast, retail firms haven't been so fortunate: ecommerce cannibalizes existing retail sales more than it increases them (to wit: Amazon's growth in retail has come substantially to the detriment of traditional retailers). Retail firms are not seeing their businesses grow rapidly because of technology, they're seeing their businesses change underneath them because of it. These firms don't have an abundance of time because their core businesses are vulnerable to rapid erosion. They are far less tolerant of leaders learning their trade.

Growth Makes Everybody Look Good

Although growth makes it safer for people in stretch roles, it also makes everybody look good, deservedly or not. The greater the success achieved by multiple businesses in the same sector, the less clear the contribution of the leadership to any one firm's success. A rising tide simply lifts all boats. As Jeff Immelt famously quipped about commercial conditions during the 1990s: "A dog could have run a business".

Too often, we never really know the difference between a savvy business leader and a person who simply got lucky. Many years ago, I sat on a panel with a renowned dot-com investor who had retained his fortune post-bubble by getting out just in time. Prima facie, he appeared to be the sage of Silicon Valley. On interrogation, it turned out that he'd cashed out several investments to free capital for a new round of leveraged bets on internet businesses, just as the bottom fell out of dot-com equities. He happened to be out of the market at precisely the right moment because he hadn't finished negotiating his new placements. It wasn't deep market insight that enabled him to call the market peak: he intended to be long the entire time, and was out only because of the dumb luck that had him cash out and head to the sidelines at just the right moment. The only sage advice he was qualified to give was to "be in the right place at the right time".

What If Everybody is Stretching?

In overheated sectors, we can easily end up with leadership teams who are reaching beyond their capability. The frothier the business, the more concern there is with fast action and the less concern there is with meaningful qualification of the people running it. We end up with an explosion of title inflation (a rise in the number of people with double-barreled titles beginning with words like "chief", "strategic" or "senior") without the concomitant increase in the number of experienced board members and executives to mentor these freshly minted leaders. It isn't uncommon for a high growth firm to build an entire leadership chain of stretchies - people in the wrong weight class, from the most senior executives right down to management on the line. This renders mentoring relationships irrelevant, and potentially damaging.

Explosive business growth can yield a new class of leaders. But in the absence of a strong foundation, it is just as likely to foment destructive organizational pathologies of paranoia and denial. The pinnacle of organizational absurdity is when employees, clients and investors are told quarter after quarter that every leader is "awesome", yet mysteriously, the overall business performance is disappointing. This isn't a business on the rise, it's a well funded frat party.

Where's your business?

Monday, June 30, 2014

The Fine Line Between "Stretch Role" and "Unqualified", part I

Everybody wins when somebody is put in a stretch role. Whoever does the hiring - a manager naming a first time tech lead or a board hiring a first time CEO - has propelled somebody's career. The investment in that person's success implies a commitment to a very active mentoring relationship. The person being asked to stretch is being given the opportunity to learn and mature, with a tacit expectation that they have freedom to try and fail while they are honing new skills.

We like seeing people in stretch roles, we like what it says about us as leaders that we put people in them, and we like the possibility that one day we, too, will be given an opportunity to stretch.

A stretch role creates a halo effect for the person doing the hiring and the person being hired. For the management doing the hiring, it's a sign that the company is investing in the next generation of leaders, and that the firm demonstrably offers opportunity for advancement. For the individual being hired, it signals satisfaction with past performance and expectations of great things. And, because everybody is taking a chance, it communicates an element of "risk", even in otherwise risk-averse corporate cultures.

But the halo reflects little brilliance. Stretch appointments are too often made because there aren't any other viable choices. We rarely get to hire our ideal candidate, so we're going to have to settle in one way or another. Scarcity is a factor: in tight labor markets, availability becomes a skill. Plus, the longer a position goes without being filled, the worse the manager responsible for filling it looks - and the more likely that somebody higher up will conclude the position isn't all that necessary and will remove it from the budget. The decision to put somebody in a stretch role is very often simply that we can't think of any reason not to put them in the job. This is easily rationalized: emotionally, the benevolence of offering somebody a stretch role more than compensates for the risk that the person will not work out.

In the right circumstances, stretch roles grow people and businesses. They give a person license to test his or her boundaries, the freedom to experiment, and the opportunity to develop a unique style at something. But success depends on the circumstances: a short honeymoon period, being kept on a short leash by management, pressure to underwrite risks they don't completely understand, a "we never fail" corporate culture, no critical assessment of the person's areas of weakness, an absentee mentor or, worse, an incapable mentor - each stacks the deck against the stretch candidate. A newbie in the job will not recognize the factors working against their success.

Throwing somebody into the deep end of the pool in their first swim lesson isn't enabling, it's overwhelming. Having somebody claw their way into a state where they can perform at a rudimentary level isn't professional development: it risks developing the wrong "muscle memories" for the job, and denies them the opportunity to achieve the meta-awareness they need to master their new role. It can also be fatal to a career in the short term: perpetually chasing responsibilities, constant drama and few successes alter the perception of the person from "aspirant stretchie" to "unqualified leader".

There is also the potential for long-term career damage. Over-promote somebody into a leadership role and they'll forever think they're leadership material. It may be that they simply aren't, but the stretch candidate is not likely to recognize this before or after being asked to take a stretch role; once invited, they've made the grade. The person who was on a stable career path ends up making frequent job-hops across firms just to maintain the same level of seniority.

Teams and departments suffer, too, not only from weak leadership at the helm but from the damage done to the confidence and trust among everybody else in the business. Plus, it sows seeds of doubt with the management who put the person in that role in the first place, often with damaging consequences (recall that Bill Ackman was ousted from the JC Penney board for having hired Ron Johnson as CEO).

Worse, the time spent with the wrong person in the role is time in which the business prolongs its people problems. Some years ago, I worked with a firm that had a strong tech culture but a weak sales one. There was high turnover of salespeople, and frequent vacancies in the sales team. A manager in the tech organization asked for a position in business development. Because of his credibility in tech delivery twined with an extrovert personality, management saw no reason not to give him the job. His lack of knowledge of business development, his lack of empathy for non-technical business buyers, and the absence of any strong sales leaders to mentor the new hire contributed to a disappointing year, culminating with his being asked to leave the sales organization and return to tech. Tainted by this failure, the would-be BDM left the company soon after. The company had not only engineered the loss of a respected member of the tech organization, it was no further along solving its sales problem one year on.

(As to the person in question: his resume ticked the technology, management and sales boxes, and he already had a general management position at another tech firm in hand at the time he left. It was a short-lived gig, as it became obvious very quickly that his capability did not live up to his resume.)

There's a simple litmus test we can apply to any organizational leader: would this person hold a comparable position in a comparable organization? An established leader capable of redefining and reshaping the role clearly passes this test. An emerging leader who quickly takes to their new role while also disrupting conventional understanding of it will also pass this test. An aspirant leader being chased by demands and relegated to rote execution under the constant direction of his or her superiors will not.

Every business has people in stretch roles. What are you doing with yours?

Friday, May 30, 2014

Deflation and Technical Debt

Technical debt is a useful metaphor for explaining why some code is faster to complete but more expensive to maintain. It is also helpful in explaining design decisions made for the sake of expediency, or because of an outright lack of knowledge. Tech debt can be a real burden on development, particularly as it takes away time that would otherwise be directed toward productive investment in new feature development. This makes it tempting to interpret tech debt as a quantifiable economic or financial phenomenon. It is not. If we respect the accounting treatment of it as indirect, and extend the metaphor toward team dynamics and away from financial statements, it also helps us understand our ongoing ability to service tech debt.

Debt and Deflation

Edward Hadas argues that debt is an antiquated form of finance, "unnecessarily distant from economic reality". Fixed interest rates, maturity mismatches (banks borrow short and lend long) and variability in borrowers' cash flows create unnecessary risks to borrower and lender alike. Financial institutions create all kinds of provisions and capital structures to underwrite uncertainty and absorb losses. If we set out to create finance today, we wouldn't use such a rigid structure as a primary investment vehicle.

Debt is a wager on future interest rates. The debtor is making an income backwardation play: that inflation will rise faster than the rate reflected in the interest rate, or that their real wages rise over the life of the loan. For example, in simple housing finance, a wage earner takes out a loan to buy a house. If inflation rises faster than expected (per the interest rate on the loan) and their wages keep pace with that higher inflation rate, the debt is easier to service because their income is higher. In this case, they've been inflated out of their debt. In addition, the debtor's wages can rise faster than inflation through salary increases (e.g., due to job promotions). This makes the debtor's real income higher, which also makes it easier to service the debt. The backwardation is that the projected future income at the time of the loan is less than what the spot income turns out to be in the future.
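
A small, invented-numbers illustration of the point: a fixed debt payment shrinks as a share of income when wages inflate, and grows when they deflate.

```python
# Illustrative only; the wage and payment figures are assumptions.
annual_payment = 24_000.0
wage_today = 80_000.0

for wage_growth in (0.04, 0.00, -0.02):       # inflationary, flat, deflationary wages
    wage_in_10y = wage_today * (1 + wage_growth) ** 10
    print(f"{wage_growth:+.0%} wages: payment is {annual_payment / wage_in_10y:.1%} of income in year 10")
```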

The lender is not entirely betting on the opposite. It is true that the lender comes out ahead if inflation rises more slowly than the rate reflected in the interest rate. But they also stand to gain from the debtor's ability to service a loan. An increase in real wages increases the debtor's ability to service the debt; this is reflected in the borrower's credit worthiness (rating or score). Lower credit risk increases the value of the debt instrument. A lender can sell the loan to somebody else for a higher price, which is their reward for underwriting the risk of the borrower at the time of origination.

For the borrower, deflation increases the real cost of debt. The less money a household earns, the harder it is for that household to service its debt. This is why central banks in highly indebted economies will pull out all the stops to fight deflation. Falling consumer prices reduce revenues. Falling asset prices reduce people's perception of their wealth, which reduces their willingness to spend. Deflation increases the burden of debt and intensifies contractionary forces on an economy. Mature economies - Europe, Japan, US - are debt-financed more than they are equity financed, and there are far more borrowers than lenders. It comes as no surprise that European Central Bank chief Mario Draghi committed to fight low inflation, Shinzo Abe's government in Japan expanded money supply to juice asset prices, and former US Fed Chairman Ben Bernanke committed to Quantitative Easing.

Technical Debt is an Indirect Economic Phenomenon, not a Direct One

In finance, the person using debt to finance an asset purchase is the person who is responsible for the debt. This makes sense in the business of software, because the person paying to acquire and operate software may be "borrowing" against future cash flows in the form of costs to service technical debt. The buyer of a software asset is the person footing the bill for people to service that debt.

But technical debt is an indirect economic phenomenon, not a direct one. That is, technical debt does not necessarily finance a software asset. Technical debt only has economic impact if costs required to service that debt are realized. For example, an asset with a lot of tech debt may not be subject to much maintenance activity, may not suffer production instability that requires a lot of attention, and may not suffer performance problems (directly resulting from that tech debt) that require additional investment to scale. In each of these cases, a tech-debt-heavy asset may have the potential to be high cost, but those costs may never be realized. If a cost is not realized, it is not a real economic cost. It only becomes an economic phenomenon if excess labor or infrastructure is needed to compensate for its presence.

This brings us to the limits of the tech debt metaphor. There is no bank offering technical debt loans: tech debt is conjured by people during moments of development. One can argue that tech debt borrows against the equity of the asset, but that does not hold up in accounting terms. With or without tech debt, we still carry the asset on the balance sheet at the same economic value: the total capital outlay less accumulated depreciation. The cost of servicing technical debt is a function of people's time, which is an operating cost. The expectation of future payroll costs that may be incurred to service technical debt is not a balance sheet liability that reduces an asset's equity, it's reported in the future period when it is incurred as an operating expense on our income statement and a drag on cash flow. Tech debt may or may not lead to reduced current and future profitability and cash flows; it does not intrinsically reduce the accounting value of an asset.

Only if the software itself is truly impaired - for example, owing to poor design decisions, our software doesn't scale beyond a single user and a significant portion of the asset is written off - is technical debt a direct economic phenomenon. However, in that case, "debt" is the wrong moniker: the asset is well and truly impaired, not merely leveraged by debt. If a business takes out a usurious loan to buy a truck, the truck isn't impaired by the loan or the high interest payments. If the truck is severely damaged in an accident, it is impaired and written down. Because of the optics, impaired software is more likely to receive additional investment than it is to be written off.

Since tech debt is an indirect economic phenomenon, we have to look at our principal actors differently. In technology, the people who are responsible for servicing the asset (e.g., the code) are the people who are responsible for tech debt. Remember, in finance, we don't borrow against an asset. We borrow against the future cash flows of a household or company, using an asset as collateral should the borrower not have the cash to service the debt. In tech, our code may be the asset against which we have borrowed, but we're really borrowing against the time of the team (analogous to future cash flows) responsible for servicing it. Tech debt is a call option on people's time at some point in the future, not on the asset itself.

Thinking about it this way allows us to extend the debt metaphor a bit further.

Capability Deflation Increases The Cost of Servicing Technical Debt

We saw earlier that deflation impairs people's ability to service debt. The same applies to tech debt. The inflationary or deflationary forces that impact our ability to service tech debt are related to our capability. We "inflate" our capability - and thus reduce our burden of servicing tech debt - through skill development and productivity enhancement. Our skills improve through study and experience. Our productivity improves with knowledge, tools and process. Holding our tech debt static - that is, assuming our tech debt merely rolls over - our ability to service that debt improves with the inflation of our skills and productivity. Capability "inflation" is the same as a household seeing real wages increase. The stronger our capability, the less impact that tech debt has on a team, the more "value" it can produce.

The converse is also true: capability deflation increases the real cost of servicing tech debt. An erosion of skills, loss of situational knowledge, and reduction in productivity all contribute to capability deflation which increases the burden of technical debt. Again, holding our technical debt static, our ability to service it declines with the deflation of our skills and productivity. Just as deflation intensifies contractionary forces on an economy, so, too, does deflation intensify contractionary forces on a team: the greater the deflation, the greater the burden of servicing tech debt, the less "value" produced by a team (the equivalent of economic contraction). In extreme cases, as happens in financial markets, debt servicing "crowds out" our ability to invest in our business through software creation.
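
One way to see the analogy in numbers (a hypothetical model, not a measurement): hold the hours spent servicing tech debt constant and let team capability inflate or deflate; the share of capacity left for new features moves accordingly.

```python
# Hypothetical model; the capacity and debt-service figures are assumptions.
debt_service_hours_per_iteration = 120.0

for capability_change in (0.10, 0.00, -0.15):             # inflation, flat, deflation
    effective_capacity = 400.0 * (1 + capability_change)   # productive hours per iteration
    investable = effective_capacity - debt_service_hours_per_iteration
    print(f"{capability_change:+.0%} capability: {investable / effective_capacity:.0%} of capacity left for new features")
```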

Whip Deflation Now?

In 1974, fighting what would become runaway inflation in the post-Bretton-Woods currency world, the United States launched a grassroots campaign encouraging all citizens to curtail consumption - and share their ideas for doing so - as a way to contain inflation. The campaign was entitled "Whip Inflation Now", or "WIN". In the immortal words of Alan Greenspan, "this [campaign] is unbelievably stupid".

Leaders of any tech organization are in a constant battle against capability deflation. People will quit and work somewhere else, taking their skills and situational knowledge with them. People's skills will erode as they become content maintaining a "software annuity" that pays them a high salary for low-effort maintenance work. Organizational memory erodes as people leave, and those who remain forget why specific design decisions were made. Demand for skilled software engineers outpaces supply, forcing firms to hire less skilled developers if they are to have developers at all. New technology obsoletes old technology, and old software becomes trapped in legacy infrastructure. As evolution gives way to pure maintenance, the work becomes less and less engaging and attractive. All of these things are deflationary forces on team capability.

To keep capability deflation in check, we have to work the problem from both ends: refresh skills, and refresh assets. Have "refactoring" hack nights to attack tech debt, run spikes to introduce new technologies into legacy code, and rotate people through different teams to disseminate situational knowledge. We can make investment cases to retire legacy software assets, cases we can strengthen by drawing attention to the "tech currency" of the software assets in our production portfolio. And we can use the strangler pattern to incrementally retire legacy systems, reducing the economic cost - and uncertainty - of replacement.
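
For the strangler pattern specifically, the mechanics are simple enough to sketch. The feature names and handlers below are hypothetical; the point is that callers talk to a facade, and the routing table - not the callers - changes as features migrate:

```python
# Minimal strangler-pattern facade (hypothetical feature names and handlers):
# callers always hit the facade; features are routed to the new system as each
# one is rebuilt, and the legacy system handles everything else.

MIGRATED = {"quotes", "customer-lookup"}   # features already rebuilt

def handle_in_legacy(feature, request):
    return f"legacy system handled {feature}"

def handle_in_new(feature, request):
    return f"new service handled {feature}"

def facade(feature, request):
    """The routing table is the only thing that changes as more features are
    strangled off the legacy system; callers never need to know."""
    handler = handle_in_new if feature in MIGRATED else handle_in_legacy
    return handler(feature, request)

if __name__ == "__main__":
    for feature in ("quotes", "billing", "customer-lookup"):
        print(facade(feature, request={}))
```

Each release moves another feature into the migrated set, so the cost and uncertainty of replacement are paid in small increments rather than in one big bang.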

There will always be tech debt, if for no other reason than one person's engineering masterpiece is another's code hairball. And there will always be the threat of capability deflation. We can fight capability deflation - we have no choice, we have to fight it - but we'll never defeat it. To keep it in check, have a corporate culture that values knowledge acquisition and collaboration, and an investment strategy that constantly reinvents the software that runs the business.

Wednesday, April 30, 2014

Are We Aligning the Portfolio with Labor, or Labor with the Portfolio?

Corporate IT has long been a paradox: as a source of competitiveness it is expected to be responsive and flexible; as a cost center it is kept on a tight budgetary leash.

To satisfy the former, we use some form of portfolio management to reconcile competing - and sometimes confusing - priorities and direct IT effort toward the highest-priority business needs. To satisfy the latter, we manage supply - people, infrastructure, solutions - to maximize productivity and throughput and minimize idle time. Generally, we manage these independently of each other, but success lies in the balance of the two.

In software development, it is alluring to think that achieving this balance is a technical management problem that can be solved through process. We think of software development as a value-generative activity: the features we add have value, so if we can determine which features offer the best value at the smallest risk in the shortest time-to-market, we can maximize our business impact. If this is the case, the problem of balancing supply with demand should primarily be one of granularity of requirement: something fine enough to be deliverable in a short period of time, but coarse enough to be of business value. No surprise, then, that we've seen a rise in the popularity of Agile, and particularly of Agile Stories, in corporate IT: being short and compact, they should make it possible to prioritize across diverse business demands while minimizing work in progress, because any given Story has a short delivery horizon. In theory, we're making supply more efficient and simultaneously maximizing yield of the IT portfolio.
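
In its idealized form, the prioritization arithmetic really is that simple. A toy sketch - the figures and the scoring formula are invented for illustration, not a standard method - might look like this:

```python
# Toy prioritization of candidate Stories by value, risk, and time-to-market.
# All figures are invented; real portfolios rarely have economics this crisp.

stories = [
    # (name, value estimate ($), delivery risk 0..1, expected weeks to deliver)
    ("self-service password reset",  80_000, 0.2,  3),
    ("new pricing engine",          400_000, 0.6, 12),
    ("report formatting tweaks",     15_000, 0.1,  1),
]

def score(value, risk, weeks):
    # Reward value, penalize risk and long delivery horizons: a crude
    # cost-of-delay-style ratio, not a standard or endorsed formula.
    return value * (1.0 - risk) / weeks

for name, value, risk, weeks in sorted(stories, key=lambda s: score(*s[1:]), reverse=True):
    print(f"{name:30s} score = {score(value, risk, weeks):>10,.0f}")
```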

In theory. In application, we quickly run into two problems.

One is on the demand side. It's appealing to think of a portfolio of development projects, each of which will yield assets that provide some return to the business investing in them. In practice, a corporate IT portfolio isn't so tidy. It's got a little bit of everything, including major projects, minor modifications, packages of bugfixes masquerading as enhancements, library version upgrades, and various & sundry other things. For many of the investments in the portfolio, the economics are neither crisp nor clear: "increasing customer satisfaction" and "reducing risk of failure" are worthy, but they cannot be rationally denominated in financial terms comparable to "increase revenue" or "decrease cost by eliminating jobs". Very often, things in the portfolio are things that just have to get done (e.g., upgrade or lose vendor support) or that people want to get done for appearance's sake (e.g., halo projects). On top of that, it's not always clear who the business "sponsor" or "owner" or even the "user" of these things is. Demand is messy.

The other problem is on the supply side. The theory is that if everybody is working to satisfy a Story with business value, then we've achieved a state where throughput is optimized and people are always working on The Most Important Thing. But The Most Important Thing expressed as a Story will, by definition, always be some business goal or need - and that isn't how work gets done in IT. Corporate IT is overwhelmingly populated by monoskilled specialists: an ETL developer, a UI developer, and a middleware developer might all cut code, but they do different things, and none of them can do the others' work. Their "unit of work" is a very specific and very narrow task that, at best, contributes toward meeting a business need.

For the process wonk, this is inconvenient. It's a void, a "last mile" to bridge. The automatic response is to decompose Stories into a collection of technical tasks and assign those tasks to different specialists. We still have line of sight from tasks to the Story they satisfy, from Stories to an Epic, and from Epics to a business case, so we're still aligning supply with demand. That works, doesn't it?

It does not. When we cast portfolio needs in terms of technical tasks, we're subordinating demand to inadequacies of supply.

  • Technical tasks are interesting to technical people, not business people. The Most Important Thing to somebody running a business will never be a technical task. The CEO is interested in fulfillment of a tech task only during times when IT is damaging the business.
  • Tasking reinforces the biases and behaviours of tech that we want to change. The point of a Story is to have people think from a business perspective. When we task, we have to swarm specialist labor to solve a single business problem. They speak fundamentally different technology languages and will see the problem they're out to solve through different technological lenses.
  • A "whole business solution" is more than the sum of technical tasks. Omnipotence for up front design twined with multiple hand-offs among myopic specialists has not historically been a formula for success in IT.
  • Tasking creates local optimization at the cost of systemic efficiency. Because it is easy for specialists to complete tasks, tasking creates the appearance of efficiency. Somebody fluent in a specific area of code or with a specific business area can complete a task faster than somebody who is not. This efficiency comes at the cost of systemic responsiveness and resiliency: the greater the degree of specialization, the more bottlenecks we create and the less resilient we are to the loss of those specialists (a toy illustration of this trade-off follows this list).
  • Tasking is the triumph of effort over results. Keeping an army of specialists busy requires that we have a large backlog of work that they can tap into, otherwise we fail to maximize utilization of supply. This is done either by pulling demand forward - initiating multiple projects from the portfolio to fill the backlogs - or generating backlogs of purely technical things that we may or may not ever get round to doing. Either way, we are increasing work in process as a means of maximizing labor utilization (or, in simpler terms, keeping people busy).
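
To illustrate the resiliency point, here is a deliberately crude, day-by-day toy model - the stages, effort figures and team sizes are invented - comparing a pipeline of single-skill specialists with a pool of generalists:

```python
# Crude toy model (invented numbers): a pipeline of single-skill specialists
# vs. a pool of generalists. Each work item needs one day each of ETL,
# middleware, and UI work.

def specialist_pipeline(items, days, absent_stage=None):
    """One specialist per stage; items must pass through the stages in order."""
    stages = ["etl", "middleware", "ui"]
    queues = {"etl": items, "middleware": 0, "ui": 0}
    done = 0
    for _ in range(days):
        # Work downstream stages first so an item can't skip ahead within a day.
        for i in reversed(range(len(stages))):
            stage = stages[i]
            if stage == absent_stage or queues[stage] == 0:
                continue
            queues[stage] -= 1
            if i + 1 < len(stages):
                queues[stages[i + 1]] += 1
            else:
                done += 1
    return done

def generalist_pool(items, days, people=3, absent=0):
    """Any generalist can take an item end to end (three days of effort each)."""
    return min(items, (people - absent) * days // 3)

if __name__ == "__main__":
    print(specialist_pipeline(12, 12))                    # everyone busy, items completing
    print(specialist_pipeline(12, 12, absent_stage="ui")) # one loss: nothing completes
    print(generalist_pool(12, 12))                        # comparable throughput
    print(generalist_pool(12, 12, absent=1))              # one loss: degrades gracefully
```

With everyone present the two look comparable; lose one specialist and completed work drops to zero, while losing one generalist merely slows the pool down. That is the systemic fragility that task-level efficiency hides.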

Tasking might make supply a little less inefficient, but it does nothing to improve the performance of our IT portfolio.

Worse, tasking is acquiescence to the systemic inefficiencies inherent in an effort-centric operating model, largely because it is the path of least resistance. It's easier to administrate an effort-centric business than a results-oriented one. It's easier to define position specs and salary grades for specialists. It's easier to hire & rent specialists. It's easier to define task orders for specialists. It's easier to learn one technology well enough to get a job at it. It's easier to box-tick work done by specialists. It's easier to do a defined task. It's just easier.

Effort is easy. Results are hard.

If the goal is to improve the performance of our IT portfolio, then we need to bring labor closer to the portfolio instead of bringing the portfolio closer to labor. We need each person to be able to deliver a meaningful business result. We need polyskilled generalists who can bring skills and capability to bear on solving business problems through technology. We need people who are knowledge acquisitive and disciplined in how they work.

This goes against the grain. Companies are optimized to minimize how much they spend against gross revenue goals, not how well they maximize return against discrete, incremental uses of capital. Procurement and HR will have no means of compartmentalizing, grading and costing this. The CFO will chafe at the unit cost of generalist labor.

Results don't always speak for themselves. In IT, we need to become less concerned with measuring effort and more adept at framing results.

Monday, March 31, 2014

Knowledge Versus Wage Work in Software Development

"Increasing numbers of people who had formerly been self-employed in workshops and cottage industry, often on a subcontracting basis, assumed new roles as part of an emerging wage-earning class. Labor increasingly became viewed as a commodity to be bought and sold. And since these changes eliminated earlier systems of production, for the new wage earners the process was irreversible, making them dependent on the wage system."

-- Gareth Morgan, Images of Organization

The separation of design from making bifurcates the labor force into people who design things (products, supply chains, marketing campaigns) and people who build them (assemble the product, deliver the merchandise, place the advertisements). This is the separation of primary and secondary labor forces.

A firm's primary labor force consists of highly-skilled people with detailed, company-specific knowledge. They have financial and career ties that bind them to their employers to reduce the attractiveness of leaving. A firm's secondary labor force is generally lower skilled and lower paid. They serve as a buffer that allows a company to expand and contract with prevailing economic conditions without jeopardizing core operations. This gives a company greater control over its business, because the labor associated with "making" is a variable as opposed to a fixed cost of the business.

This separation defines both role expectations and career paths. A person in the primary labor force is a knowledge worker: they are expected to be highly skilled, abstract thinkers who are concerned with systemic issues. A person in the secondary labor force is a line employee: they are expected to be lower skilled, concrete thinkers who are primarily concerned with execution. Of course, in software development we often see highly skilled, abstract thinkers in the secondary labor market and lower skilled, concrete thinkers in the primary. Be that as it may, generally speaking a business is less effective if its primary labor force consists of concrete thinkers, and less efficient if its secondary labor force consists of abstract thinkers.

The economics of this arrangement favor the few who design over the many who build. Members of the primary labor force command higher incomes, and their occupations are more likely to be wealth generative (that is, to come with equity in their employer). Their positions are less vulnerable to economic downturns and consolidative mergers. Members of the secondary labor force can command outsized incomes - particularly those with scarce skills that are in high demand - but will not generate wealth through their occupation. During periods of economic expansion, they will enjoy stable employment, rising incomes and access to credit facilities. However, they are vulnerable to economic downturns (as mentioned, they buffer the shock of a reduction in demand), productivity investments (automation tends to eliminate jobs in the secondary labor market), consolidation (a significant portion of the "synergistic benefit" of mergers is achieved by reducing secondary labor forces), and labor arbitrage (states and countries create tax incentives for firms to build facilities to house large volumes of lower skilled workers). The primary labor market is less vulnerable to these forces.

An employer's relationship with its labor can involve multiple parties and take many forms. For example, an insurance company contracts with a technology consulting firm with deep insurance domain expertise to develop, maintain, and operate major software applications that run its core business. Although the insurance company is renting a large part of its tech work force, the extent of its dependency on the provider means that many members of the consulting firm's staff are, in effect, members of the insurance company's primary labor force. The consulting firm's intermediation doesn't change this fact, it simply changes the economic relationships. Alternatively, one firm's primary labor force may be another's secondary. For example, a retailer contracts with a consulting firm to develop custom software, but the retailer assumes responsibility for the maintenance and evolution of the asset once delivered. The consulting firm may have architects, project managers and other lead staff it considers part of its primary labor force (the people building its business) that it supplements with a large secondary labor force (people who work on projects). However, the transient nature of the contract means that all members of the consulting organization are part of the retailer's secondary labor market. And, of course, a highly capable, polyskilled developer may be able to sustain a career as an independent contractor through good times and bad, while a narrowly skilled tester may only find work during boom times. Both are, by definition, in the secondary labor market, even if for different reasons.

There have been many attempts to create large, lesser-skilled secondary labor forces to supplement a core primary one in software development. CASE tools rose to prominence in the 1980s. They promised, among other things, that by structuring, integrating and concentrating system design and analysis into a single repository, code could be produced within strict boundaries set by designers. In the 2000s, industrial IT practices took root: analysts and architects produced detailed specification documents to be coded by remote development teams, while test leads designed scripts to be executed to a pass or fail outcome by armies of test executors. And, of course, there is the pursuit of maximizing labor utilization: managers schedule recurring slivers of technology specialists' time across multiple projects, while procurement departments contract for software developers as interchangeable "resources".

There is an argument to be made that wage work has historically lifted large swaths of humanity out of poverty. Not without costs, such as pollution and abhorrent working conditions. Still, is there a case to be made that justifies the industrialization of software development because it improves quality of life? After all, even if it is wage work, it enables people to work in jobs that are not physically risky (although they can be highly stressful), rewards people for education and ongoing skill development, and tends not to cause environmental damage. Is this not socio-economic advancement?

It is a Faustian exchange. By its very nature, wage work subordinates large populations of laborers, inherently creating a class division. Because of labor arbitrage and automation, compensation rises only slowly and the availability of work is subject to volatility. In an era of financial engineering, wage workers are encouraged to make personal bets (in the form of debt against future income streams) based on the appearance of stability in their employment; this makes wage work exploitative.

In addition, the argument that wage work makes software development more economical, therefore leading to more demand and the benefit of more laborers, isn't compelling. For one thing, as more and more software gets injected into existing things and allows us to make entirely new things, it does not stand to reason that demand will support more industrial workers than craftworkers. For another, given the high degree of project failure, it also does not stand to reason that an industrial approach is a more reliable way to make timely delivery of complex projects.

Software development offers greater benefits to society if it is a profession rather than an industry. A profession requires its members to understand not just the what, but the why and how. This demands more intellectual and creative development from each person. This does not mean more education and training (skill possession), but the pursuit of professional self-actualization (continuous knowledge acquisition). It creates social structures that are flatter, more equitable and offer greater mobility because they are based on collaboration, knowledge and capability (peer & mentor relationships) rather than hierarchy (superior / subordinate). It also offers greater individual freedom because it is governed more by principles than by rules. This is far more liberating for the individual than the industrial alternative, and no less economically beneficial.

There are at least two counter-arguments to this. The first is that it is elitist: not everybody is cut out to be a professional. That is, many software "line workers" aren't naturally inquisitive or motivated enough to be professionals; as industrial workers they have a standard of living that they wouldn't otherwise have. However, this argument holds individuals to blame for factors that are out of their control, such as societal class divisions that discourage mobility, industrially-minded education systems that discourage creative thought, and economic conditions that crush motivation. Suggesting that entire strata of people have no hope of becoming craftworkers is a blanket indictment that denies each person their most basic human intellectual characteristics. Ironically, such thinking holds back people's development while professing to enable it.

The second is that this is idealistic: commercial reality is that buyers are too impatient to allow tradecraft to develop, sellers have incentives to create large-scale businesses, and corporate management and procurement are founded on industrial patterns of behaviour. All true. But it falls to those of us in the business of software development to decide our own fate. If we are motivated purely by the lure of lucre - and with demand still outpacing the supply of software development labor, there's plenty of money going round - then we inherently choose economics over humanity. However, if we are motivated by intellectual rather than income potential, we can choose to create different types of commercial ecosystems.

This isn't wild-eyed idealism, it's sound business. As economist John Kay points out in Obliquity, "the most profitable companies are not the most profit oriented." Economic rewards accrue to firms that set out to be great at what they do, as was the case for ICI chemicals ("to be the world's leading chemical company ... through the innovative and responsible application of chemistry and related science") in the 1980s and Boeing in commercial aviation ("eat, breathe and sleep the world of aeronautics") in the early 1990s. Both enjoyed substantial financial success even though that was not their primary motivation. Once those firms shifted their focus to be principally economic in nature - "The ICI Group's vision is to be the industry leader in creating value for customers and shareholders through market leadership, technological edge, and a world competitive cost base" - their fortunes darkened considerably: ICI ceased to exist as a company ten years after changing its mission. Boeing's once unassailable dominance of commercial aviation eroded within a decade of shifting to "... a value based environment where unit cost, return on investment, shareholder return are the measures..."

It's a choice, and it's one that all of us in the business of software make again and again in the nature of the companies we create, the contracts we enter into, and how we interact with each other. The sum of our choices will determine whether we form a new generation of knowledge workers or train the next generation of wage slaves.

Choose wisely.

Friday, February 28, 2014

Making is Part of Design

I had intended this month's blog to be about how industrialization would expand the secondary labor market and, in doing so, squander the opportunity to create a new generation of knowledge workers rather than the next generation of wage laborers. While writing it, it occurred to me that industrialization appears to separate design from making when in fact it does not, and that this separation - real or otherwise - is essential to understanding the division of labor into primary and secondary strata.

* * *

[P]roducers sought to overcome the uncertainties of output and quality associated with domestic production; to serve the new markets created by expanding world trade and a growing population (certain privileged sectors of which had a rising standard of living); and most important of all, to take advantage of mechanical systems of production.
-- Gareth Morgan, Images of Organization

Prior to industrialization, demand for manufactured goods overwhelmed the capacity of guilds to produce them. This was due in no small part to the fact that in guilds, the processes of engineering (designing things) and manufacture (making things) were intertwined. A watch wasn't just a piece of precision engineering, it was also a piece of precision manufacture that required a highly skilled person to make it. Production volume couldn't increase any faster than the rate at which tradespeople acquired the skills necessary to produce things.

Industrialization changed manufacturing by separating engineering from production. Product engineers concentrated on the design of a product - sketching, prototyping, tweaking, and refining - until they got the right combination of features, materials, and configuration that was useful, provided sufficient (but not excessive) durability and performance, and could be built economically. Once the designers had done this, they could turn their creations over to manufacturing operations to produce them at scale.

The separation of design from manufacture subtly disguises the fact that making is an essential part of the design process. Before we mass produce anything - whether a consumer product or an ad campaign - we build a prototype that has very much the same componentry as the finished product we expect to produce. We subject the prototype to a number of stresses & analyses to make sure it will work reliably and consistently in benign and challenging conditions. We also make sure that the design isn't so complex that manufacturing it will be prohibitively expensive. We use the feedback from these analyses to adjust our prototype, and we repeat the cycle until either we know what we are going to build and how we are going to build it, or we scuttle the project entirely. We do this because no matter how smart we are at things like materials science, chemistry and physics, we are not omniscient. We are investigating the what, why and how, and using what we learn to develop a useful product that provides economic value to the customer and profit to the producer. When we are developing complex products, we do not go from engineering drawings directly to mass production. We think, we prototype, and we tinker before we enter a phase of mass production. Design and architecture truly are emergent in most things we make.

People have tried to separate design from making in many different fields. Perhaps the most ambitious was the corporation itself. In the 1960s, firms like Singer, Litton Industries and TRW were presented by management theorists as the triumph of complex corporate strategy derived from analysis and modeling based on "a single comprehensive evaluation of all options in light of defined objectives". In his book Obliquity, economist John Kay contrasts this big up front corporate design to companies that "muddle through": those firms that follow a disciplined process of "...experiment and discovery. Successes and failures and the expansion of knowledge lead to reassessment of our objectives and goals and the actions that result." He points out that the strategies of Singer, Litton and TRW all fell apart rather quickly, whereas firms that muddle through by making "successive limited comparison" tend to be more robust and resilient. Design (in this case, strategy) largely emerges from the experience of lots of incremental changes. Success favors a bias for action over analysis.

There have long been attempts to industrialize software development. The thinking goes that we can separate the design of software from the construction of software. The design consists of business requirements and architecture documents that define the product, a simple proof-of-concept (incorrectly passed off as a prototype), and a detailed project plan that provides the instructions for production. But engineering and making are tightly coupled in software development. As we create code, we learn what users like, what scales, what is secure, what isn't reliable, what is too complicated, and so forth. All of this learning needs to be factored into our design. An industrial approach to software development denies the need for this kind of learning. Given the consistent history of disappointment and failure in large-scale software development projects, that denial is very costly indeed.

Industrialization seems an unnatural fit for software, but that's probably what many in the watchmaker's guild thought at the dawn of the industrial age. Tolerances, components, and production tasks were standardized and made repeatable, and design was successfully separated from production. Yet it remains elusive in software: integration of software components is still a context-heavy and therefore very labor intensive activity, and development remains a creative process of problem solving and not a repetitive act.

Still, as I wrote last month, as long as demand for software developers outstrips their supply, there will continue to be a great deal of pressure to find ways to industrialize software development to achieve scalability. Next month we'll look at the ramifications of industrialization to people and society.

Friday, January 31, 2014

The Persistent Imbalance Between Supply and Demand for Software Development Labor

The growth in demand for software has consistently outpaced the growth in the supply of software developers. This has been the case for well over half a century. It's worth looking at why.

Each major expansion in software development - automation (60s), productivity (80s), internet (90s), mobile (00s) - has been additive to the total stock of software in the world. The old stuff doesn't go away: software is still an enabler of labor productivity (office & communications) and a weapon for market share (online customer interaction). Yet we continue to find new applications for software: it is increasingly a product differentiator (embedded systems) or a product category of its own (social networking). While some segments have retrenched (companies license rather than write their payroll systems), the proliferation of new forms of software has more than compensated for consolidation in others. And the more software proliferates, the greater the demand for integration, security, and other ancillary software.

Each new wave represents a structural increase in the demand for labor: the old stuff has to be maintained or replaced, while new applications not only bring new demand, they bring new tools and languages which require new skills and capabilities.

From a labor market perspective, the software economy has been expanding for decades. As a result, it marches to the beat of its own drum. Software development is generally counter-cyclical to the broader economy: it does well when the economy is down because software locks in the productivity gains sought after layoffs & cutbacks. Software also makes its own opportunities, because it is inherently a business of invention and innovation. There are peaks and valleys: a structural change in demand for labor can sow the seeds of a bubble and its inevitable collapse. But bubbles in tech are bubbles of tech's own making: the video game / home computer bubble (1983) and the Y2K / dot-com bubble (2000) each resulted from irrational expectations and dubious business models created by people within the tech sector itself. The bursting of the Y2K / dot-com bubble coincided with increased accessibility to a global labor supply of software developers (offshoring was all the rage in the early 00s). Although the US experienced an acute contraction in demand for software development labor, the global labor pool grew, and the regional contraction in the US proved to be short lived. Today, although the software labor market remains inefficient (context still doesn't travel), there are no easy cost savings to be gained (no large pools of skilled labor remain untapped). Global supply has been substantially eclipsed by global demand.

We're currently in the midst of another structural increase in the demand for software development labor, this time driven by analytics and smart devices (the alleged "internet of things", from cars to coffee pots), with the odd halo application (e.g., wearable tech) thrown in for good measure. Every indication is that, for the foreseeable future, demand for software will continue to grow faster than the supply of developers available to build it.

What does this mean to the business of software?

1. Ambition will outpace capability. Any business plan that comes down to "hire a bunch of smart engineers" - be it developing a new product or rescuing a distressed IT organization - is doomed. There is too much money chasing too few people. A company's labor timeline has to expand: it will take longer to hire experienced engineers, and firms will increasingly need to invest in incubating new developers. Labor scarcity also poses a vulnerability to employers: a company known to have capable engineers is wearing a target for recruiters. When jobs outnumber candidates, jobs become commodities to employees. To differentiate itself from other employers, a firm must be highly attractive to the specific stratum of the labor market that it wishes to employ. It does this by developing a unique culture and values, and professional and societal aspirations, that make it a destination employer for those people. Without these things, it can only compete for labor on comp and career. It's difficult for a firm to maintain a competitive advantage for labor solely on the price it is willing to pay for it.

2. Employers will pursue labor industrialization over tradecraft. Software development is labor intensive: the productivity enhancers that do exist, such as automated testing and automated builds, are still poorly implemented, when they are used at all. Plus, the diversity of programming languages and the complexity of environments encourage labor specialization and task management. Still, people investing in software assets will not take "can't find competent people" for an answer. As the old saying goes, if you can't raise the bridge, lower the water. On a person-by-person basis, it is faster, easier, and cheaper to hire, train, and staff industrial workers on a software development "factory floor" - performing coding tasks in assembly-line fashion - than it is to recruit, develop and mentor polyskilled software developers. New labor formation will be largely industrial in nature, not tradecraft.

3. The risk of spectacular software failure will increase. The horrific explosion of the oil train that devastated Lac-Mégantic in 2013 was in no small part the result of demand exceeding supply. North American oil production has risen dramatically in the past half-decade. All that oil coming out of shale fields has to find its way to refineries, and since there aren't pipelines to carry it, it's going by rail. The rail industry was in decline in North America for many years, and a sudden uptick in demand can't be quickly satisfied by skilled labor. The net result is that railroads are hauling increasing volumes of a volatile commodity, but their capability to handle it isn't maturing at the same rate. In software, the demand/supply imbalance increases the risk of significant operating or project failure - that is, massive delivery overruns or post-delivery operating problems - as skills fail to mature in step with demand.

4. As the skills brought to bear on any given software investment deteriorate, software asset quality - particularly technical quality - will decline. Industrial labor produces volume, not quality. The glut of software assets being produced will be toxic by technical quality standards. This will go largely unnoticed, because neither the concept of technical debt nor its commercial ramifications are well understood by the (average) business buyer of software, and because IT governance remains weak in practice. However, poor asset quality will become visible in maintenance and operating costs, and in the occasional write-off. A firm forced into too many write-offs due to poor technical quality will come to see software as disposable rather than durable. That would create deflationary price pressure for labor and increase the demand for industrialization.

As long as the applications for software continue to expand, insufficient numbers of software engineers come into the work force, and software development remains labor intensive, there will be a fundamental supply / demand imbalance. But demand tends to be impatient. Economic and perhaps even political pressure will intensify to industrialize software development. This implies expansion of the secondary labor market, which is less skilled, educated, compensated and mobile than the primary labor market. That would be a lost opportunity: rather than fostering a global wave of knowledge workers, software development will simply bring the next wave of wage workers. We'll look at the reasons for that in the next post.