I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Monday, June 30, 2014

The Fine Line Between "Stretch Role" and "Unqualified", part I

Everybody wins when somebody is put in a stretch role. Whoever does the hiring - a manager naming a first-time tech lead or a board hiring a first-time CEO - has propelled somebody's career. The investment in that person's success implies a commitment to a very active mentoring relationship. The person being asked to stretch is given the opportunity to learn and mature, with a tacit expectation that they have the freedom to try and fail while honing new skills.

We like seeing people in stretch roles, we like what it says about us as leaders that we put people in them, and we like the possibility that one day we, too, will be given an opportunity to stretch.

A stretch role creates a halo effect for the person doing the hiring and the person being hired. For the management doing the hiring, it's a sign that the company is investing in the next generation of leaders, and that the firm demonstrably offers opportunity for advancement. For the individual being hired, it signals satisfaction with their performance and an expectation of great things. And, because everybody is taking a chance, it communicates an element of "risk", even in otherwise risk-averse corporate cultures.

But the halo reflects little brilliance. Stretch hires are too often made because there aren't any other viable choices. We rarely get to hire our ideal candidate, so we're going to have to settle in one way or another. Scarcity is a factor: in tight labor markets, availability becomes a skill. Plus, the longer a position goes without being filled, the worse the manager responsible for filling it looks - and the more likely that somebody higher up will conclude the position isn't all that necessary and will remove it from the budget. The decision to put somebody in a stretch role very often amounts to nothing more than being unable to think of a reason not to give them the job. This is easily rationalized: emotionally, the benevolence of offering somebody a stretch role more than compensates for the risk that the person will not work out.

In the right circumstances, stretch roles grow people and businesses. They give a person license to test his or her boundaries, the freedom to experiment, and the opportunity to develop a unique style at something. But success depends on the circumstances: a short honeymoon period, a short leash held by management, pressure to underwrite risks they don't completely understand, a "we never fail" corporate culture, no critical assessment of the person's areas of weakness, and an absentee mentor or, worse, an incapable one each stack the deck against the stretch candidate. A newbie in the job will not recognize the factors working against their success.

Throwing somebody into the deep end of the pool in their first swim lesson isn't enabling, it's overwhelming. Having somebody claw their way into a state where they can perform at a rudimentary level isn't professional development: it risks developing the wrong "muscle memories" for the job, and denies them the opportunity to achieve the meta-awareness they need to master their new role. It can also be fatal to a career in the short term: perpetually chasing responsibilities, constant drama and few successes alter the perception of the person from "aspirant stretchie" to "unqualified leader".

There is also the potential for long-term career damage. Over-promote somebody into a leadership role and they'll forever think they're leadership material. It may be that they simply aren't, but the stretch candidate is not likely to recognize this before or after being asked to take a stretch role; once invited, they've made the grade. The person who was on a stable career path ends up making frequent job-hops across firms just to maintain the same level of seniority.

Teams and departments suffer, too, not only from weak leadership at the helm but from the damage done to the confidence and trust among everybody else in the business. Plus, it sows seeds of doubt with the management who put the person in that role in the first place, often with damaging consequences (recall that Bill Ackman was ousted from the JC Penney board for having hired Ron Johnson as CEO).

Worse, the time spent with the wrong person in the role is time during which the business prolongs its people problems. Some years ago, I worked with a firm that had a strong tech culture but a weak sales one. There was high turnover of salespeople, and frequent vacancies in the sales team. A manager in the tech organization asked for a position in business development. Because of his credibility in tech delivery twined with an extrovert personality, management saw no reason not to give him the job. His lack of knowledge of business development, his inability to empathize with non-technical business buyers, and the absence of any strong sales leaders to mentor the new hire contributed to a disappointing year, culminating in his being asked to leave the sales organization and return to tech. Tainted by this failure, the would-be BDM left the company soon after. The company had not only engineered the loss of a respected member of the tech organization, it was no further along solving its sales problem one year on.

(As to the person in question: because his resume ticked technology, management and sales boxes, he already had a general management position at another tech firm in hand at the time he left. It was a short-lived gig, as it became obvious very quickly that his capability did not live up to his resume.)

There's a simple litmus test we can apply to any organizational leader: would this person hold a comparable position in a comparable organization? An established leader capable of redefining and reshaping the role clearly passes this test. An emerging leader who quickly takes to their new role while also disrupting conventional understanding of it will also pass. An aspirant leader being chased by demands and relegated to rote execution under the constant direction of his or her superiors will not.

Every business has people in stretch roles. What are you doing with yours?

Friday, May 30, 2014

Deflation and Technical Debt

Technical debt is a useful metaphor for explaining why some code is faster to complete but more expensive to maintain. It is also helpful in explaining design decisions made for the sake of expediency, or because of an outright lack of knowledge. Tech debt can be a real burden on development, particularly as it takes away time that would otherwise be directed toward productive investment in new feature development. This makes it tempting to interpret tech debt as a quantifiable economic or financial phenomenon. It is not. If we respect its accounting treatment as indirect, and extend the metaphor toward team dynamics rather than financial statements, it also helps us understand our ongoing ability to service tech debt.

Debt and Deflation

Edward Hadas argues that debt is an antiquated form of finance, "unnecessarily distant from economic reality". Fixed interest rates, maturity mismatches (banks borrow short and lend long) and variability in borrowers' cash flows create unnecessary risks to borrower and lender alike. Financial institutions create all kinds of provisions and capital structures to underwrite uncertainty and absorb losses. If we set out to create finance today, we wouldn't use such a rigid structure as a primary investment vehicle.

Debt is a wager on future interest rates. The debtor is making an income backwardation play: that inflation will rise faster than the rate reflected in the interest rate, or that their real wages rise over the life of the loan. For example, in simple housing finance, a wage earner takes out a loan to buy a house. If inflation rises faster than expected (per the interest rate on the loan) and their wages keep pace with that higher inflation rate, the debt is easier to service because their income is higher. In this case, they've been inflated out of their debt. In addition, the debtor's wages can rise faster than inflation through salary increases (e.g., due to job promotions). This makes the debtor's real income higher, which also makes it easier to service the debt. The backwardation is that the projected future income at the time of the loan is less than what the spot income turns out to be in the future.
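
A small worked example makes the backwardation play concrete. All of the figures below are hypothetical, chosen only to illustrate the mechanics: a fixed nominal debt payment measured against a wage that grows at the realized inflation rate.

```python
# A minimal sketch of being "inflated out of debt", using made-up figures.
# The payment is fixed in nominal terms; wages keep pace with realized
# inflation, which turns out higher than the rate priced into the loan.

annual_payment = 12_000.0    # fixed nominal debt service
wage = 60_000.0              # starting nominal income
realized_inflation = 0.05    # higher than the ~2% priced into the loan

for year in range(1, 11):
    wage *= 1 + realized_inflation   # income rises with realized inflation
    burden = annual_payment / wage   # share of income consumed by the debt
    print(f"year {year:2d}: debt service takes {burden:.1%} of income")
```

Debt service falls from a fifth of income toward an eighth over the decade: spot income turns out higher than the income projected at origination, and the debtor's wager pays off.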

The lender is not entirely betting on the opposite. It is true that the lender comes out ahead if inflation rises more slowly than the rate reflected in the interest rate. But they also stand to gain from the debtor's ability to service a loan. An increase in real wages increases the debtor's ability to service the debt; this is reflected in the borrower's creditworthiness (rating or score). Lower credit risk increases the value of the debt instrument. A lender can sell the loan to somebody else for a higher price, which is their reward for underwriting the risk of the borrower at the time of origination.

For the borrower, deflation increases the real cost of debt. The less money a household earns, the harder it is for that household to service its debt. This is why central banks in highly indebted economies will pull out all the stops to fight deflation. Falling consumer prices reduce revenues. Falling asset prices reduce people's perception of their wealth, which reduces their willingness to spend. Deflation increases the burden of debt and intensifies contractionary forces on an economy. Mature economies - Europe, Japan, US - are debt-financed more than they are equity financed, and there are far more borrowers than lenders. It comes as no surprise that European Central Bank chief Mario Draghi committed to fight low inflation, Shinzo Abe's government in Japan expanded money supply to juice asset prices, and former US Fed Chairman Ben Bernanke committed to Quantitative Easing.

Technical Debt is an Indirect Economic Phenomenon, not a Direct One

In finance, the person using debt to finance an asset purchase is the person who is responsible for the debt. This makes sense in the business of software, because the person paying to acquire and operate software may be "borrowing" against future cash flows in the form of costs to service technical debt. The buyer of a software asset is the person footing the bill for people to service that debt.

But technical debt is an indirect economic phenomenon, not a direct one. That is, technical debt does not necessarily finance a software asset. Technical debt only has economic impact if the costs required to service that debt are realized. For example, an asset carrying a lot of tech debt may not be subject to much maintenance activity, may not suffer production instability requiring a lot of attention, and may not suffer performance problems (directly resulting from that tech debt) requiring additional investment to scale. In each of these cases, a tech-debt-heavy asset may have the potential to be high cost, but those costs may never be realized. If a cost is not realized, it is not a real economic cost. Tech debt only becomes an economic phenomenon if excess labor or infrastructure is needed to compensate for its presence.

This brings us to the limits of the tech debt metaphor. There is no bank offering technical debt loans: tech debt is conjured by people during moments of development. One can argue that tech debt borrows against the equity of the asset, but that does not hold up in accounting terms. With or without tech debt, we still carry the asset on the balance sheet at the same economic value: the total capital outlay less accumulated depreciation. The cost of servicing technical debt is a function of people's time, which is an operating cost. The expectation of future payroll costs that may be incurred to service technical debt is not a balance sheet liability that reduces an asset's equity; it is reported, in the future period when it is incurred, as an operating expense on our income statement and a drag on cash flow. Tech debt may or may not lead to reduced current and future profitability and cash flows; it does not intrinsically reduce the accounting value of an asset.
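
A hedged numeric sketch (all figures invented) shows why the metaphor breaks down in accounting terms: the asset's book value is the same with or without tech debt, while the cost of servicing the debt surfaces later as an operating expense.

```python
# Hypothetical figures only. Book value = capital outlay less accumulated
# depreciation, regardless of how much tech debt the codebase carries.

capital_outlay = 500_000.0       # capitalized cost of building the software
annual_depreciation = 100_000.0  # straight-line over five years
years_elapsed = 2

book_value = capital_outlay - years_elapsed * annual_depreciation  # 300,000

# Servicing tech debt is people's time, expensed in the period it occurs:
hours_on_debt = 800              # assumed annual hours spent servicing debt
loaded_rate = 90.0               # assumed fully loaded hourly cost
debt_service_opex = hours_on_debt * loaded_rate                    # 72,000

print(f"balance sheet asset: {book_value:,.0f}")
print(f"income statement drag this year: {debt_service_opex:,.0f}")
```

The 72,000 never touches the balance sheet as a liability; it shows up, if at all, only in the periods when the time is actually spent.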

Only if the software itself is truly impaired - for example, owing to poor design decisions, our software doesn't scale beyond a single user and a significant portion of the asset is written off - is technical debt a direct economic phenomenon. However, in that case, "debt" is the wrong moniker: the asset is well and truly impaired, not merely leveraged by debt. If a business takes out a usurious loan to buy a truck, the truck isn't impaired by the loan or the high interest payments. If the truck is severely damaged in an accident, it is impaired and written down. Because of the optics, impaired software is more likely to receive additional investment than it is to be written off.

Since tech debt is an indirect economic phenomenon, we have to look at our principal actors differently. In technology, the people who are responsible for servicing the asset (e.g., the code) are the people who are responsible for tech debt. Remember, in finance, we don't borrow against an asset. We borrow against the future cash flows of a household or company, using an asset as collateral should the borrower not have the cash to service the debt. In tech, our code may be the asset against which we have borrowed, but we're really borrowing against the time of the team responsible for it (analogous to future cash flows) to service that debt. Tech debt is a call option on people's time at some point in the future, not on the asset itself.

Thinking about it this way allows us to extend the debt metaphor a bit further.

Capability Deflation Increases The Cost of Servicing Technical Debt

We saw earlier that deflation impairs people's ability to service debt. The same applies to tech debt. The inflationary or deflationary forces that impact our ability to service tech debt are related to our capability. We "inflate" our capability - and thus reduce our burden of servicing tech debt - through skill development and productivity enhancement. Our skills improve through study and experience. Our productivity improves with knowledge, tools and process. Holding our tech debt static - that is, assuming our tech debt merely rolls over - our ability to service that debt improves with the inflation of our skills and productivity. Capability "inflation" is the same as a household seeing real wages increase. The stronger our capability, the less impact that tech debt has on a team, the more "value" it can produce.

The converse is also true: capability deflation increases the real cost of servicing tech debt. An erosion of skills, loss of situational knowledge, and reduction in productivity all contribute to capability deflation which increases the burden of technical debt. Again, holding our technical debt static, our ability to service it declines with the deflation of our skills and productivity. Just as deflation intensifies contractionary forces on an economy, so, too, does deflation intensify contractionary forces on a team: the greater the deflation, the greater the burden of servicing tech debt, the less "value" produced by a team (the equivalent of economic contraction). In extreme cases, as happens in financial markets, debt servicing "crowds out" our ability to invest in our business through software creation.
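
A toy model (numbers invented) illustrates the contraction: hold the stock of tech debt constant and vary capability, and the share of team time left for new work moves with it.

```python
# A minimal sketch: the same fixed amount of debt-servicing work consumes
# more of a deflating team's time and less of an inflating one's.

debt_service_work = 300.0   # fixed "units" of debt service each quarter
team_hours = 1_000.0        # hours available each quarter
base_productivity = 1.0     # units of work per hour at current capability

for label, change in [("inflation", 0.10), ("static", 0.0), ("deflation", -0.10)]:
    productivity = base_productivity * (1 + change)
    hours_on_debt = debt_service_work / productivity
    value_share = (team_hours - hours_on_debt) / team_hours
    print(f"capability {label:9}: {value_share:.1%} of time left for new work")
```

With these made-up numbers, the deflating team loses roughly six percentage points of feature capacity to the same debt relative to the inflating one - the team-level equivalent of economic contraction.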

Whip Deflation Now?

In 1974, fighting what would become runaway inflation in the post-Bretton-Woods currency world, the United States launched a grassroots campaign encouraging all citizens to curtail consumption - and share their ideas for doing so - as a way to contain inflation. The campaign was entitled "Whip Inflation Now", or "WIN". In the immortal words of Alan Greenspan, "this [campaign] is unbelievably stupid".

Leaders of any tech organization are in a constant battle against capability deflation. People will quit and work somewhere else, taking their skills and situational knowledge with them. People's skills will erode as they become content maintaining a "software annuity" that pays them a high salary for low-effort maintenance work. Organizational memory erodes as people leave, and those who remain forget why specific design decisions were made. Demand outpaces supply for skilled software engineers, forcing firms to hire less skilled developers if they are to have developers at all. New technology obsoletes old technology, and old software becomes trapped in legacy infrastructure. As evolution gives rise to pure maintenance, the work becomes less and less engaging and attractive. All of these things are deflationary forces on team capability.

To keep capability deflation in check, we have to attack it from both ends: refresh skills, and refresh assets. Have "refactoring" hack nights to attack tech debt, run spikes to introduce new technologies into legacy code, and rotate people through different teams to disseminate situational knowledge. We can also make investment cases to retire legacy software assets, cases we can strengthen by drawing attention to the "tech currency" of the software assets in our production portfolio. And we can use the strangler pattern to incrementally retire legacy systems, reducing the economic cost - and uncertainty - of replacement; a sketch of the pattern follows below.
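
By way of illustration, here is a minimal sketch of the strangler pattern. The feature names and handlers are invented for the example, not drawn from any particular system.

```python
# A facade routes each feature to the new system once it has been migrated,
# and falls back to the legacy system otherwise. Retiring the legacy system
# becomes a sequence of one-line registry changes, not a big-bang cutover.

def legacy_quote(request):       # stand-in for a call into the legacy system
    return f"legacy quote for {request}"

def new_quote(request):          # stand-in for the rewritten feature
    return f"new quote for {request}"

def legacy_billing(request):     # not yet migrated
    return f"legacy billing for {request}"

ROUTES = {
    "quote": new_quote,          # already strangled
    "billing": legacy_billing,   # still served by legacy
}

def facade(feature, request):
    """Single entry point; callers never know which system answered."""
    return ROUTES[feature](request)

print(facade("quote", "policy-123"))
print(facade("billing", "policy-123"))
```

The registry makes migration progress visible and reversible: routing a feature back to legacy is as cheap as routing it forward.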

There will always be tech debt, if for no other reason than one person's engineering masterpiece is another's code hairball. And there will always be the threat of capability deflation. We can fight capability deflation - we have no choice, we have to fight it - but we'll never defeat it. To keep it in check, have a corporate culture that values knowledge acquisition and collaboration, and an investment strategy that constantly reinvents the software that runs the business.

Wednesday, April 30, 2014

Are We Aligning the Portfolio with Labor, or Labor with the Portfolio?

Corporate IT has long been a paradox: as a source of competitiveness it is expected to be responsive and flexible; as a cost center it is kept on a tight budgetary leash.

To satisfy the former, we use some form of portfolio management to reconcile competing - and sometimes confusing - priorities and direct IT effort toward the highest priority business needs. To satisfy the latter, we manage supply - people, infrastructure, solutions - to maximize productivity and throughput, and minimize idle time. Generally, we manage these independently of each other, but success lies in the balance of the two.

In software development, it is alluring to think that achieving this balance is a technical management problem that can be solved through process. We think of software development as a value-generative activity: the features we add have value, so if we can determine which features offer the best value at the smallest risk in the shortest time-to-market, we can maximize our business impact. If this is the case, the problem of balancing supply with demand should be primarily one of granularity of requirement: something fine enough to be deliverable in a short period of time, but coarse enough to be of business value. No surprise, then, that we've seen a rise in popularity of Agile, and particularly of Agile Stories, in corporate IT: being short and compact, they should make it possible to prioritize across diverse business demands, while minimizing work in progress because any given Story has a short delivery horizon. In theory, we're making supply more efficient and simultaneously maximizing the yield of the IT portfolio.

In theory. In application, we quickly run into two problems.

One is on the demand side. It's appealing to think of a portfolio of development projects, each of which will yield assets that provide some return to the business investing in them. In practice, a corporate IT portfolio isn't so tidy. It's got a little bit of everything, including major projects, minor modifications, packages of bugfixes masquerading as enhancements, library version upgrades, and various & sundry other things. For many of the investments in the portfolio, the economics are neither crisp nor clear: "increasing customer satisfaction" and "reducing risk of failure" are worthy but cannot be rationally denominated in financial terms comparable to things like "increase revenue" or "decrease cost by eliminating jobs". Very often, things in the portfolio are things that just have to get done (e.g., upgrade or lose vendor support) or that people want to get done for appearance's sake (e.g., halo projects). On top of it, it's not always clear who the business "sponsor" or "owner" or even the "user" of these things is. Demand is messy.

The other problem is on the supply side. The theory is that if everybody is working to satisfy a Story with business value, then we've achieved a state where throughput is optimized and people are always working on The Most Important Thing. But The Most Important Thing expressed as a Story will, by definition, always be some business goal or need - and that isn't how work gets done in IT. Corporate IT is overwhelmingly populated by monoskilled specialists: an ETL developer, a UI developer, and a middleware developer might all cut code, but they do different things and cannot do one another's work. Their "unit of work" is a very specific and very narrow task that, at best, contributes toward meeting a business need.

For the process wonk, this is inconvenient. It's a void, a "last mile" to bridge. The automatic response is to decompose Stories into a collection of technical tasks and assign those tasks to different specialists. We still have line of sight from tasks that satisfy a Story, and Stories that satisfy an Epic, and Epics that satisfy a business case, so we're still aligning supply with demand. That works, doesn't it?

It does not. When we cast portfolio needs in terms of technical tasks, we're subordinating demand to inadequacies of supply.

  • Technical tasks are interesting to technical people, not business people. The Most Important Thing to somebody running a business will never be a technical task. The CEO is interested in fulfillment of a tech task only during times when IT is damaging the business.
  • Tasking reinforces the biases and behaviours of tech that we want to change. The point of a Story is to have people think from a business perspective. When we task, we have to swarm specialist labor to solve a single business problem. Those specialists speak fundamentally different technology languages and will see the problem they're out to solve through different technological lenses.
  • A "whole business solution" is more than the sum of technical tasks. Omnipotence for up front design twined with multiple hand-offs among myopic specialists has not historically been a formula for success in IT.
  • Tasking creates local optimization at the cost of systemic efficiency. Because it is easy for specialists to complete tasks, tasking creates the appearance of efficiency. Somebody fluent in a specific area of code or with a specific business area can complete a task faster than somebody who is not. This efficiency comes at the cost of systemic responsiveness and resiliency: the greater the degree of specialization, the more bottlenecks we create and the less resilient we are to a loss of those specialists.
  • Tasking is the triumph of effort over results. Keeping an army of specialists busy requires that we have a large backlog of work that they can tap into, otherwise we fail to maximize utilization of supply. This is done either by pulling demand forward - initiating multiple projects from the portfolio to fill the backlogs - or generating backlogs of purely technical things that we may or may not ever get round to doing. Either way, we are increasing work in process as a means of maximizing labor utilization (or, in simpler terms, keeping people busy).

Tasking might make supply a little less inefficient, but it does nothing to improve the performance of our IT portfolio.

Worse, it is acquiescence to the systemic inefficiencies inherent in an effort-centric operating model, largely because it is the path of least resistance. It's easier to administrate an effort-centric business than a results-oriented one. It's easier to define position specs and salary grades for specialists. It's easier to hire & rent specialists. It's easier to define task orders for specialists. It's easier to learn one technology well enough to get a job at it. It's easier to box-tick work done by specialists. It's easier to do a defined task. It's just easier.

Effort is easy. Results are hard.

If the goal is to improve the performance of our IT portfolio, then we need to bring labor closer to the portfolio instead of bringing the portfolio closer to labor. We need each person to be able to deliver a meaningful business result. We need polyskilled generalists who can bring skills and capability to bear on solving business problems through technology. We need people who are knowledge acquisitive and disciplined in how they work.

This goes against the grain. Companies are optimized to minimize how much they spend against gross revenue goals, not to maximize return against discrete, incremental uses of capital. Procurement and HR will have no means of compartmentalizing, grading and costing this. The CFO will chafe at the unit cost of generalist labor.

Results don't always speak for themselves. In IT, we need to become less concerned with measuring effort and more adept at framing results.

Monday, March 31, 2014

Knowledge Versus Wage Work in Software Development

"Increasing numbers of people who had formerly been self-employed in workshops and cottage industry, often on a subcontracting basis, assumed new roles as part of an emerging wage-earning class. Labor increasingly became viewed as a commodity to be bought and sold. And since these changes eliminated earlier systems of production, for the new wage earners the process was irreversible, making them dependent on the wage system."

-- Gareth Morgan, Images of Organization

The separation of design from making bifurcates the labor force into people who design things (products, supply chains, marketing campaigns) and people who build them (assemble the product, deliver the merchandise, place the advertisements). This is the separation of primary and secondary labor forces.

A firm's primary labor force consists of highly-skilled people with detailed, company-specific knowledge. They have financial and career ties that bind them to their employers to reduce the attractiveness of leaving. A firm's secondary labor force is generally lower skilled and lower paid. They serve as a buffer that allows a company to expand and contract with prevailing economic conditions without jeopardizing core operations. This gives a company greater control over its business, because the labor associated with "making" is a variable as opposed to a fixed cost of the business.

This separation defines both role expectations and career paths. A person in the primary labor force is a knowledge worker: they are expected to be highly skilled, abstract thinkers who are concerned with systemic issues. A person in the secondary labor force is a line employee: they are expected to be lower skilled, concrete thinkers who are primarily concerned with execution. Of course, in software development, we often see highly skilled, abstract thinkers in the secondary labor market and lower skilled, concrete thinkers in the primary. Be that as it may, generally speaking a business is less effective if its primary labor force consists of concrete thinkers, and it is less efficient if its secondary labor force consists of abstract thinkers.

The economics of this arrangement favor the few who design over the many who build. Members of the primary labor force will command higher incomes and their occupations are more likely to be wealth generative (that is, offered equity in their employer). Their positions are less vulnerable to economic downturns and consolidative mergers. Members of the secondary labor force can command outsized incomes - particularly those with scarce skills that are in high demand - but will not generate wealth through their occupation. During periods of economic expansion, they will enjoy stable employment, rising incomes and access to credit facilities. However, they are vulnerable to economic downturns (as mentioned, they buffer the shock of a reduction in demand), productivity investments (automation tends to eliminate jobs in the secondary labor market), consolidation (a significant portion of the "synergistic benefit" of mergers is achieved by reducing secondary labor forces), and labor arbitrage (states and countries create tax incentives for firms to build facilities to house large volumes of lower skilled workers). The primary labor market is less vulnerable to these forces.

An employer's relationship with its labor can involve multiple parties and take many forms. For example, an insurance company contracts with a technology consulting firm with deep insurance domain expertise to develop, maintain, and operate major software applications that run its core business. Although the insurance company is renting a large part of its tech work force, the extent of the dependencies the insurance company has on its provider means that many members of the consulting firm's staff are, in effect, members of the insurance company's primary labor force. The consulting firm's intermediation doesn't change this fact, it simply changes the economic relationships. Alternatively, one firm's primary labor force may be another's secondary. For example, a retailer contracts with a consulting firm to develop custom software, but the retail firm assumes responsibility for the maintenance and evolution of the asset once delivered. The consulting firm may have architects, project managers and other lead staff it considers part of its primary labor force (the people building its business) that it supplements with a large secondary labor force (people who work on projects). However, the transient nature of the contract means that all members of the consultant organization are part of the retailer's secondary labor market. And, of course, a highly skilled, polyskilled developer may be able to sustain a career as an independent contractor through good times and bad, while a lower skilled, narrowly skilled tester may only find work during boom times. Both are, by definition, in the secondary labor market, even if for different reasons.

There have been many attempts to create large, lesser-skilled, secondary labor forces that supplement a core primary one in software development. CASE tools rose to prominence in the 1980s. They promised, among other things, that by structuring, integrating and concentrating system design and analysis into a single repository, code could be produced within strict boundaries set by designers. In the 2000s, industrial IT practices took root: analysts and architects produced detailed specification documents that were to be coded by remote development teams, while test leads designed scripts to be executed to a pass or fail outcome by armies of test executors. And, of course, there is the pursuit of maximizing labor utilization: managers will schedule recurring slivers of time from technology specialists across multiple projects, while procurement departments contract for software developers as interchangeable "resources".

There is an argument to be made that wage work has historically lifted large swaths of humanity out of poverty. Not without costs, such as pollution and abhorrent working conditions. Still, is there a case to be made that justifies the industrialization of software development because it improves quality of life? After all, even if it is wage work, it enables people to work in jobs that are not physically risky (although they can be highly stressful), rewards people for education and ongoing skill development, and tends not to cause environmental damage. Is this not socio-economic advancement?

It is a Faustian exchange. By its very nature, wage work subordinates large populations of laborers, inherently creating a class division. Because of labor arbitrage and automation, compensation is not highly inflationary and the availability of work is subject to volatility. In an era of financial engineering, wage workers are encouraged to make personal bets (in the form of debt against future income streams) based on the appearance of stability in their employment; this makes wage work exploitative.

In addition, the argument that wage work makes software development more economical, therefore leading to more demand and the benefit of more laborers, isn't compelling. For one thing, as more and more software gets injected into existing things and allows us to make entirely new things, it does not stand to reason that demand will support more industrial workers than craftworkers. For another, given the high degree of project failure, it also does not stand to reason that an industrial approach is a more reliable way to make timely delivery of complex projects.

Software development offers greater benefits to society if it is a profession rather than an industry. A profession requires its members to understand not just the what, but the why and how. This demands more intellectual and creative development from each person. This does not mean more education and training (skill possession), but the pursuit of professional self-actualization (continuous knowledge acquisition). It creates social structures that are flatter, more equitable and offer greater mobility because they are based on collaboration, knowledge and capability (peer & mentor relationships) rather than hierarchy (superior / subordinate). It also offers greater individual freedom because it is governed more by principles than by rules. This is far more liberating for the individual than the industrial alternative, and no less economically beneficial.

There are at least two counter-arguments to this. The first is that it is elitist: not everybody is cut out to be a professional. That is, many software "line workers" aren't naturally inquisitive or motivated enough to be professionals; as industrial workers they have a standard of living that they wouldn't otherwise have. However, this argument holds individuals to blame for factors that are out of their control, such as societal class divisions that discourage mobility, industrially-minded education systems that discourage creative thought, and economic conditions that crush motivation. Suggesting that entire strata of people have no hope of becoming craftworkers is a blanket indictment that denies each person their most basic human intellectual characteristics. Ironically, such thinking holds back people's development while professing to enable it.

The second is that this is idealistic: commercial reality is that buyers are too impatient to allow tradecraft to develop, sellers have incentive to create large-scale businesses, and corporate management and procurement are founded on industrial patterns of behaviour. All true. But it falls to those of us in the business of software development to decide our own fate. If we are motivated purely by the lure of lucre - and with demand still outpacing the supply of software development labor there's plenty of money going round - then we inherently choose economics over humanity. However, if we are motivated by intellectual rather than income potential, we can choose to create different types of commercial ecosystems.

This isn't wild-eyed idealism, it's sound business. As economist John Kay points out in Obliquity, "the most profitable companies are not the most profit oriented." Economic rewards accrue to firms that set out to be great at what they do, as was the case for ICI chemicals ("to be the world's leading chemical company ... through the innovative and responsible application of chemistry and related science") in the 1980s and Boeing in commercial aviation ("eat, breathe and sleep the world of aeronautics") in the early 1990s. Both enjoyed substantial financial success even though that was not their primary motivation. Once those firms shifted their focus to be principally economic in nature - "The ICI Group's vision is to be the industry leader in creating value for customers and shareholders through market leadership, technological edge, and a world competitive cost base" - their fortunes darkened considerably: ICI ceased to exist as a company ten years after changing its mission, and Boeing's once unassailable dominance of commercial aviation eroded within a decade of shifting to "... a value based environment where unit cost, return on investment, shareholder return are the measures..."

It's a choice, and it's one that all of us in the business of software make again and again in the nature of the companies we create, the contracts we enter into, and how we interact with each other. The sum of our choices will determine whether we form a new generation of knowledge workers or train the next generation of wage slaves.

Choose wisely.

Friday, February 28, 2014

Making is Part of Design

I had intended this month's blog to be about how industrialization would expand the secondary labor market and thereby squander an opportunity to create a new generation of knowledge workers rather than the next generation of wage laborers. While writing that, it occurred to me that industrialization appears to separate design from making when in fact it does not, and that this separation - real or otherwise - is essential to understanding the division of labor into primary and secondary strata.

* * *

[P]roducers sought to overcome the uncertainties of output and quality associated with domestic production; to serve the new markets created by expanding world trade and a growing population (certain privileged sectors of which had a rising standard of living); and most important of all, to take advantage of mechanical systems of production.
-- Gareth Morgan, Images of Organization

Prior to industrialization, demand for manufactured goods overwhelmed the capacity of guilds to produce them. This was due in no small part to the fact that in guilds, the processes of engineering (designing things) and manufacture (making things) were intertwined. A watch wasn't just a piece of precision engineering, it was also a piece of precision manufacture that required a highly skilled person to make it. Production volume couldn't increase any faster than the rate at which tradespeople acquired the skills necessary to produce things.

Industrialization changed manufacturing by separating engineering from production. Product engineers concentrated on the design of a product - sketching, prototyping, tweaking, and refining - until they got the right combination of features, materials, and configuration that was useful, provided sufficient (but not excessive) durability and performance, and could be built economically. Once the designers had done this, they could turn their creations over to manufacturing operations to produce them at scale.

The separation of design from manufacture subtly disguises the fact that making is an essential part of the design process. Before we mass produce anything - whether a consumer product or an ad campaign - we build a prototype that has very much the same componentry as the finished product we expect to produce. We subject the prototype to a number of stresses & analyses to make sure it will work reliably and consistently in benign and challenging conditions. We also make sure that the design isn't so complex that manufacturing it will be prohibitively expensive. We use the feedback from these analyses to adjust our prototype, and we repeat the cycle until either we know what we are going to build and how we are going to build it, or we scuttle the project entirely. We do this because no matter how smart we are at things like materials science, chemistry and physics, we are not omniscient. We are investigating the what, why and how, and using what we learn to develop a useful product that provides economic value to the customer and profit to the producer. When we are developing complex products, we do not go from engineering drawings directly to mass production. We think, we prototype, and we tinker before we enter a phase of mass production. Design and architecture truly are emergent in most things we make.

People have tried to separate design from making in many different fields. Perhaps the most ambitious was the corporation itself. In the 1960s, firms like Singer, Litton Industries and TRW were presented by management theorists as the triumph of complex corporate strategy derived from analysis and modeling based on "a single comprehensive evaluation of all options in light of defined objectives". In his book Obliquity, economist John Kay contrasts this big up front corporate design to companies that "muddle through": those firms that follow a disciplined process of "...experiment and discovery. Successes and failures and the expansion of knowledge lead to reassessment of our objectives and goals and the actions that result." He points out that the strategies of Singer, Litton and TRW all fell apart rather quickly, whereas firms that muddle through by making "successive limited comparison" tend to be more robust and resilient. Design (in this case, strategy) largely emerges from the experience of lots of incremental changes. Success favors a bias for action over analysis.

There have long been attempts to industrialize software development. The thinking goes that we can separate the design of software from the construction of software. The design consists of business requirements and architecture documents that define the product, a simple proof-of-concept (incorrectly passed off as a prototype), and a detailed project plan that provides the instructions for production. But engineering and making are tightly coupled in software development. As we create code, we learn what users like, what scales, what is secure, what isn't reliable, what is too complicated, and so forth. All of this learning needs to be factored into our design. An industrial approach to software development denies the need for this kind of learning. Given the consistent history of disappointment and failure of large scale software development projects, that denial is very costly indeed.

Industrialization seems an unnatural fit for software, but that's probably what many in the watchmakers' guild thought at the dawn of the industrial age. Standardized tolerances made components interchangeable, production tasks were similarly standardized and made repeatable, and design was successfully separated from production. Yet industrialization remains elusive in software: integration of software components is still a context-heavy and therefore very labor intensive activity, and development remains a creative process of problem solving, not a repetitive act.

Still, as I wrote last month, as long as demand for software developers outstrips their supply, there will continue to be a great deal of pressure to find ways to industrialize software development to achieve scalability. Next month we'll look at the ramifications of industrialization to people and society.

Friday, January 31, 2014

The Persistent Imbalance Between Supply and Demand for Software Development Labor

The growth in demand for software has consistently outpaced the growth in the supply of software developers. This has been the case for well over half a century. It's worth looking at why.

Each major expansion in software development - automation (60s), productivity (80s), internet (90s), mobile (00s) - has been additive to the total stock of software in the world. The old stuff doesn't go away: software is still an enabler of labor productivity (office & communications), and a weapon for market share (online customer interaction). Yet we continue to find new applications for software: it is increasingly a product differentiator (embedded systems) or a product category of its own (social networking). While some segments have retrenched (companies license rather than write their payroll systems), the proliferation of new forms of software has more than compensated for consolidation in others. And the more software proliferates, the greater the demand for integration, security, and other ancillary software.

Each new wave represents a structural increase in the demand for labor: the old stuff has to be maintained or replaced, while new applications not only bring new demand, they bring new tools and languages which require new skills and capabilities.

From a labor market perspective, the software economy has been expanding for decades. As a result, it marches to the beat of its own drum. Software development is generally counter-cyclical to the broader economy: it does well when the economy is down because software locks in the productivity gains desired after layoffs & cutbacks. Software also makes its own opportunities, because it is inherently a business of invention and innovation. There are peaks and valleys: a structural change in demand for labor can sow the seeds of a bubble and its inevitable collapse. But bubbles in tech are bubbles of tech's own making: the video game / home computer bubble (1983) and the Y2K / dot-com bubble (2000) each resulted from irrational expectations and dubious business models created by people within the tech sector itself. The Y2K / dot-com bubble bursting coincided with increased accessibility to a global labor supply of software developers (offshoring was all the rage in the early 00s). Although the US experienced an acute contraction in demand for software development labor, the global labor pool grew, and the regional contraction in the US proved to be short lived. Today, although the software labor market remains inefficient (context still doesn't travel), there are no easy cost savings to be gained (no large pools of skilled labor remain untapped). Global supply has been substantially eclipsed by global demand.

We're currently in the midst of another structural increase in the demand for software development labor, this time being driven by analytics and smart devices (the alleged "internet of things", from cars to coffee pots), with the odd halo application (e.g., wearable tech) thrown in for good measure. Every indication is that for the foreseeable future, demand for software developers will continue to increase at a rate faster than the supply of software developers available to develop it.

What does this mean to the business of software?

1. Ambition will outpace capability. Any business plan that comes down to "hire a bunch of smart engineers" - be it developing a new product or rescuing a distressed IT organization - is doomed. There is too much money chasing too few people. A company's labor timeline has to expand: it will take longer to hire experienced engineers, and firms will increasingly need to invest in incubating new developers. Labor scarcity poses a vulnerability to employers: a company known to have capable engineers is wearing a target for recruiters. When jobs outnumber candidates, jobs become commodities to employees. To differentiate from other employers, a firm must be highly attractive to the specific strata of the labor market that it wishes to employ. It does this by developing a unique culture and values, and professional and societal aspirations that make it a destination employer for those people. Without these things, it can only compete for labor on comp and career. It's difficult for a firm to maintain competitive advantage for labor solely on the price it is willing to pay for it.

2. Employers will pursue labor industrialization over tradecraft. Software development is labor intensive: the productivity enhancers that do exist, such as automated testing and automated builds, are still poorly implemented, when used at all. Plus, the diversity of programming languages and the complexity of environments encourage labor specialization and task management. Still, people investing in software assets will not take "can't find competent people" for an answer. As the old saying goes, if you can't raise the bridge, lower the water. On a person-by-person basis, it is faster, easier, and cheaper to hire, train, and staff industrial workers on a software development "factory floor", where they perform coding tasks in assembly-line fashion, than it is to recruit, develop and mentor polyskilled software developers. New labor formation will be largely industrial in nature, not tradecraft.

3. The risk of spectacular software failure will increase. The horrific explosion of the oil train that devastated Lac-Mégantic in 2013 was in no small part the result of demand exceeding supply. North American oil production has risen dramatically in the past half-decade. All that oil coming out of shale fields must find its way to refineries, and since there aren't pipelines to carry it, it goes by rail. The rail industry was in decline in North America for many years, and a sudden uptick in demand can't be quickly satisfied by skilled labor. The net result is that railroads are hauling increasing volumes of a volatile commodity, but their capability to handle it isn't maturing at the same rate. In software, the demand/supply imbalance increases the risk of significant operating or project failure - that is, massive delivery overruns or post-delivery operating problems - as skills fail to mature in step with demand.

4. As the skills brought to bear on any given software investment deteriorate, software asset quality - particularly technical quality - will deteriorate. Industrial labor produces volume, not quality. The glut of software assets being produced will be toxic by technical quality standards. As it happens, this will go largely unnoticed, because neither the concept of technical debt nor its commercial ramifications are well understood by the (average) business buyer of software, and because IT governance remains weak in practice. However, poor asset quality will become visible in maintenance and operating costs, and the occasional write-off. A firm forced to make too many write-offs due to poor technical quality will come to see software as disposable rather than durable. That would create deflationary price pressure for labor and increase the demand for industrialization.

As long as the applications for software continue to expand, insufficient numbers of software engineers come into the work force, and software development remains labor intensive, there will be a fundamental supply / demand imbalance. But demand tends to be impatient. Economic and perhaps even political pressure will intensify to industrialize software development. This implies expansion of the secondary labor market, which is less skilled, educated, compensated and mobile than the primary labor market. That would be a lost opportunity: rather than fostering a global wave of knowledge workers, software development will simply bring the next wave of wage workers. We'll look at the reasons for that in the next post.

Tuesday, December 31, 2013

The Corrosive Effects of Complexity

"Much complexity has been deliberately created, to encourage consumers to pay more than they need, or expected." John Kay, The Wrong Sort of Competition in Energy

Modern software assets are complex in both their technical composition and their means of creation.  They are built with multiple programming languages, are expected to conform to OO standards and SOA principles, make use of automated tests and a progressive build pipeline, require a diverse set of skills (UX, developers, QA analysts, etc.) to produce, are used on a multitude of clients (different browsers or native client apps on PC, tablet and smartphone form factors), and are deployed using automated configuration management languages to a combination of physical and virtual environments (cloud and captive data centers).  Software is more complex today than it was less than a generation ago.

Complexity compromises both buyers and sellers of technology services.

Buyers suffer an information gap. Few managers and fewer buyers have first-hand experience in one, let alone all, of the technologies and techniques a team is - or should be - using. This creates information asymmetry between buyer and seller, manager and executor. The more diverse the technologies, the more pronounced the asymmetry.

Sellers suffer a skill gap. Because the demand for software is outstripping the supply of people who can produce it, experienced people are in short supply. There are more people writing their first Android app than their second, more people making their first cloud-based deployment than their second. There are more blog posts on Continuous Delivery than there are people who have practiced it. There are more people filling the role of experience designer than there are people who have designed software that people actually use. And while long-standing concepts like OO, SOA and CI might be better understood than they were just a few years ago, a survey of software assets in any company will quickly reveal that they remain weakly implemented. In a lot of teams, the people are learning what to do as much as, if not more than, doing what they already know.

"Such information asymmetry is inevitable. Phone companies have large departments working on pricing strategies. You have only a minute or two to review your bill."

Information asymmetry favours the seller. Sellers can hide behind complexity more easily than the buyer can wade through it: the seller can weave a narrative of technical and technology factors which the buyer will not understand. The buyer must first disentangle what they've been told before they can begin to interpret what it actually means. This takes time and persistence that most software buyers are unwilling to invest. Even if the buyer suspects a problem with the seller, the buyer hasn't any objective means of assessing the seller's competency to perform. Where complex offerings are concerned, the buyer is at an inherent disadvantage to the seller.

"When you shop, you mostly rely on the reputation of the supplier to give you confidence that you are not being ripped off."

Technology buyers compensate by relying on proxies for competency, such as brand recognition and professional references of a selling firm. But these are controlled narratives: brands are aspirational visions created through advertising (although "Go ahead, be a Tiger" didn't end well...), while references are compromised by selection bias.  A buyer may also defer judgment to a third party, hiring or contracting an expert to provide expertise on their behalf. In each case, the buyer is entering into a one-way trust relationship with somebody else to fulfill their duty of competency.

A buyer inexpert in technology can best compensate by staying focused on outcomes rather than means.

Match cash flows to the seller with the functionality of the asset they deliver. You're buying an asset. Don't pay for promises, frameworks and infrastructure for months on end; pay for what you can see, use and verify.

Look under the hood. There are plenty of tools that assess the technical quality of just about any codebase and provide easy-to-interpret analyses. A high degree of copy-and-paste is bad. So is a high complexity score. So are methods with a lot of lines in them.
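
As one hedged example, assuming a Python codebase and the open-source radon library (any comparable static-analysis tool serves the same purpose), a buyer's proxy could flag overly complex methods in a few lines:

```python
# A minimal sketch using radon (pip install radon) to surface complexity
# hot spots. "billing.py" is a hypothetical file from the codebase under
# review; the threshold of 10 is a common rule of thumb, not a standard.

from radon.complexity import cc_visit

with open("billing.py") as f:
    source = f.read()

for block in cc_visit(source):   # functions, methods and classes
    if block.complexity > 10:    # higher scores are harder to maintain
        print(f"{block.name} (line {block.lineno}): complexity {block.complexity}")
```

Copy-and-paste detectors and method-length reports from the same family of tools round out the picture.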

Spend time with the people developing your software.  Even if you don't understand the terminology, what people say and how they say it will give you a good indication as to whether you've got a competency problem or not.  Competency is not subject-matter-expertise: technology is a business of open-ended learning, not closed-ended knowledge.  But the learning environment has to be progressive, not blind guesswork.

Accept only black-and-white answers to questions.  Most things in software really are black and white.  Software works or it does not.  The build is triggered with every commit or it is not.  Non-definitive answers suggest obfuscation. "I don't know" is a perfectly valid answer, and preferable to a vague or confusing response.

An inexpert buyer is quickly overwhelmed by the seller's complexity. A buyer who stays focused on business outcomes won't dispel that complexity, but will tilt the engagement back in their own favor.