I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Tuesday, March 31, 2020

Autonomy Now

Distributed software development has been practiced for decades. Companies with global footprints were experimenting with this at least as far back as the 1970s. Skilled labor, global communication networks and collaborative tools made "offshore development" possible at scale from the mid-1990s onward. Improved skills, faster networks and more sophisticated collaboration tools have made distributed development practical for very complex software initiatives.

There can be significant differences in the way a team collaborates internally and the way it collaborates with other teams across a program. Consider a distributed Agile program consisting of multiple teams based in different countries around the world. Under normal circumstances, individual teams of 8 to 12 people work day-in and day-out with colleagues in the same physical location. Intra-team events take advantage of the team's close proximity: the team room, collaborative practices like pair programming and desk checks, team ceremonies like stand-ups, and low-fidelity information radiators such as card walls are all high-bandwidth collaboration techniques. In-person communication is robust, spontaneous and fluid, so it makes sense to take full advantage of it. Conversely, inter-team events such as a Scrum-of-Scrums involve only key team members such as the project manager and lead developer, and are scheduled to take advantage of time zone overlap (or at least to minimize the inconvenience where there is little). In practice, any single team in a large program - even an Agile team - can function internally in a tightly coupled manner even though it is loosely coupled to other teams in the same program of work.

The COVID-19 pandemic has much of the global workforce working in physical isolation from one another; this pushes distributed work models to their extreme. Yes, of course, it has long been possible for teams of individuals to work completely remotely from one another: e.g., tenured experts in the relevant technology who are also fluent in the business context and familiar with one another. But most teams don't consist of technology experts who know the domain and one another. In the commoditized IT industry, people are staffed as "resources" who are qualified based on their experience with relevant technologies. Domain expertise is a bonus, and interpersonal skills (much less familiarity with team-mates) never enter the equation. A good line manager and competent tech lead know how to compensate for this through spontaneous, high-bandwidth interaction: if somebody's work is going adrift, pull them aside, ask the business analyst or product owner to join you, whiteboard and code together for a bit, and get it fixed. A good line manager and tech lead compensate for a lot of the messiness intrinsic to a team of commodity-sourced people. The physical isolation much of the world is experiencing makes this compensation far more difficult.

There are lots of companies and individuals self-publishing practical advice for remote working, and much of it is worth reading. Most of the recommendations amount to hygiene, but good remote collaboration hygiene reduces individual frustration and maximizes the potential communication bandwidth. An "everyone is 100% remote from one another" model has scale limitations, and poor hygiene will quickly erode whatever scale there is to be had.

My colleague Martin Fowler posted a two-part series on how to deal with the new normal. The posts have a lot of practical advice. But the concluding paragraphs of his second post address something more important: it is imperative to change management models.

Being independent while working remotely is not the same as working remotely in an independent manner. The more tightly coupled the team, the more handoffs among team members; the more handoffs, the more people have to engage in intra-team communication; the lower the fidelity of that communication, the higher the propensity for mistakes. More mistakes mean lower velocity, lower quality, and false-positive status reports. In practice, the lower the fidelity of intra-team collaboration within a tightly coupled team, the lower the fidelity of inter-team collaboration, regardless of whether the teams themselves are tightly or loosely coupled.
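To make the compounding concrete, here is a toy model (illustrative only - the handoff counts and fidelity figures are hypothetical, not measurements):

```python
# Toy model: the probability that a piece of work survives a chain of
# handoffs with its intent intact decays geometrically. All fidelity
# figures below are hypothetical.

def chain_fidelity(per_handoff_fidelity: float, handoffs: int) -> float:
    """Probability no mistake is introduced, assuming each handoff
    independently preserves intent with the given fidelity."""
    return per_handoff_fidelity ** handoffs

for handoffs in (2, 5, 10):
    colocated = chain_fidelity(0.98, handoffs)  # high-bandwidth, in-person
    remote = chain_fidelity(0.90, handoffs)     # low-fidelity channels
    print(f"{handoffs:>2} handoffs: co-located {colocated:.0%}, remote {remote:.0%}")

# At 10 handoffs: co-located ~82% intact, remote ~35% intact - on
# low-fidelity channels, a tightly coupled team's work arrives broken
# more often than not.
```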

This is where a distributed program of truly Agile teams has a resiliency that Agile-in-name-only teams, command-and-control SAFe teams, and waterfall teams by their very nature cannot possess. A requirement written as a Story that fulfills the INVEST principle is an autonomous unit of production. A development pair that can deliver a Story with minimal consultation and minimal dependencies on anybody else in the team is an autonomous delivery unit. A Quality Assurance Analyst working from clear acceptance criteria for a Story can provide feedback directly to the development pair responsible for it. Stories that adhere to the INVEST principle can be prioritized by a product owner and executed in a Kanban-like manner by the next available development pair, as the sketch below illustrates.
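A minimal sketch of that pull model, with invented story names (an illustration of the mechanics, not a tooling recommendation):

```python
from collections import deque

# A product owner keeps a prioritized backlog of INVEST-compliant
# Stories; the next available pair pulls the top Story. No central
# scheduler assigns tasks - the Stories' independence is what lets
# work scale down to the atomic level of a pair.

backlog = deque([
    "Customer can reset password via email",   # hypothetical Stories,
    "Customer can view order history",         # each independently
    "Customer can export invoices as PDF",     # valuable and testable
])

available_pairs = deque(["Ana & Raj", "Li & Sam"])

while backlog and available_pairs:
    pair = available_pairs.popleft()
    story = backlog.popleft()   # pull, don't push
    print(f"{pair} pull: {story!r}")
```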

A tightly coupled team operating in a command-and-control style of management doesn't scale down to the more atomic level of the individual or pair. The program manager creates a schedule of work, down to the individual tasks that will fulfill that work and the specialist roles that will fulfill those tasks. Project managers coordinate task execution among individual specialists in their respective teams. One project manager is told by three people working on tasks for a requirement that their respective tasks are complete, yet the whole of their work is less than the sum of its parts. Now the manager must chase after them to crack their skulls together to get them to realize they are not done, and needs to loop in the tech lead to figure out where the alignment problem(s) are. This is difficult enough to do when people are in distributed teams in a handful of office buildings; it's that much more difficult when they are working in isolation from one another. Product quality, delivery velocity, and costs all suffer.

Command-and-control management creates the illusion of risk-managed delivery at large scale with low overheads. Forget about scaling up with efficiency; to be robust, a management paradigm needs to be able to scale down efficiently to deliver meaningful business results at the atomic level of the individual or pair. Micromanagement does not scale down efficiently because of its inherently high overheads. Self-directed autonomous teams do scale down efficiently because of their inherently low overheads.

In 2013, I spilled a few photons on the management revolution that never happened: for a variety of reasons in the 1980s, we believed we were on the cusp of a devolution of authority; instead, we got much denser concentration of authority. In 2018, I spilled a lot of photons on autonomous teams at enterprise scale being an undiscovered country worth the risk of exploring.

The COVID-19 pandemic is creating intense managerial challenges right now. It is important to note that there are likely to be long-term structural effects on businesses as well. Perhaps companies will encourage employees to work from home more regularly so they can permanently reduce office square footage and therefore lease expense. Perhaps a new generation of secure mobile technologies will make it seem idiotic that large swaths of workers are office-based rather than home-based. Perhaps companies will revise their operating models and position specs, requiring greater individual role autonomy to maintain high degrees of productivity in regular and irregular operating conditions. Perhaps metrics for contract labor - metrics that are not attendance-based - will emerge to satisfy expectations of value delivery.

Perhaps, with the potential for long-term effects looming, it is time to go explore that undiscovered country of autonomy.

Saturday, February 29, 2020

To Transform, Trade Ego for Humility

Ten years ago, when the mobile handset wars were in full swing, I wrote a blog post analyzing the differences among the leaders in the space. Each had come to prominence in the handset market differently: Nokia was a mobile telephony company, BlackBerry a mobile email company, Apple a personal technology company, Google an internet search and advertising company.

With the benefit of hindsight, we know how it played out. Nokia hired a manager from Microsoft to wed the handset business to an alternative to iOS that wasn't made by Google. RIM initially doubled down on its core product, but eventually scotched its proprietary OS in favor of Android. Neither strategy paid off. Nokia exited the handset business in 2013. RIM exited the handset business in 2016. Both companies burned through billions of dollars of investor capital on losing strategies in the handset market.

Evidence published over the years suggests that the self-identity of the losing firms worked against them: interactions via voice call and email claimed a shrinking share of time spent on mobile devices, overtaken by emerging interactions such as social media. By providing platforms for independent software development, Apple and Google created an entirely new category of software - the mobile app. iOS and Android were well positioned to create and exploit the change in human interaction with technology. Nokia and BlackBerry were not.

* * *

Earlier this week, Wolfgang Münchau posited that the European Union is at a cultural disadvantage to the United States and China in the field of Artificial Intelligence. Instead of finding ways to promote AI through government and private sector development and become a leader in AI technology, the EU seems intent on defending itself from AI through regulation. For that to be effective, as Mr. Münchau writes, technology would have to stop evolving. Since regulators tend not to be able to imagine a market other than as it is today, new AI developments will be able to skirt any regulation when they enter the market. It seems to be a Maginot Line of defense.

When it comes to technology, Mr. Münchau writes, the European mindset is still very much rooted in the analogue age, despite the fact that the digital age began well back in the previous century. That is somewhere on a spectrum from a lack of imagination to outright denial.

That raises the question: why does this happen? In the face of mounting evidence, why do people get their ostrich on and bury their heads in the sand? Why does a company double down instead of facing its new competitive landscape? Why does the leadership of a socio-economic community of nearly 450 million people simply check out?

Mr. Münchau points to three phenomena behind cultural barriers to adaptability.

The dominant sentiment in modern-day Europe is anxiety. Its defining need is protection. And the defining feature of its collective mindset is complacency. In the European Commission’s white paper on artificial intelligence all three come together in an almost comical manner: the fear of a high-tech digital future; the need to protect oneself against it; and the complacency inherent in the belief that regulation is the solution.

What stands in the way of change? Fear. Resistance. Laziness.

* * *

Some executive at some company believes the company needs to change in response to some existential threat. That which got it here will not take it forward. Worse still, its own success is stacked against it. What we measure, how we go to market, what we make, how we make it - all of that and more needs a gigantic re-think. Unleash the dogs of transformation.

In any business transformation, there is re-imagining and there is co-option. Wedding change to your current worldview - your go-to-market, your product offering, your ways of working - impairs your outcomes. At best, it will make your current state a little less bad. Being less bad might satiate your most loyal customers and improve your production processes around the margins, but it won't yield a transformative outcome.

Transformation that overcomes fear, resistance, and laziness requires doing away with corporate ego. "As a company, we are already pretty good at [x]." Well, good for you! Being good in the way you are good might have made you best in class for the industry you think you're in. What if instead we took the position, "we're not very good at [x]"? General Electric's industrial businesses grew in the 1990s once they inverted their thinking on market share: instead of insisting on being the market share leader, GE redefined its markets so that no business unit had more than 10% market share. That meant looking for adjacent markets, supplemental services, and the like. It's hard to grow when you've boxed yourself into a narrow definition of the markets you serve; it's easier to grow when you give yourself a bigger target market.

Re-imagining requires more than just different thinking. It requires humility and a willingness to learn - from everybody. The firm's capital mix (debt stifles change, equity does not), its capital allocation processes (waterfall gatekeeping stifles adaptability), how it sees the products it makes (software and data are more lucrative than hardware), and how it operates (deploy many times a day) must all change. That means giving up allegiance to a lot of things we accept as truth. This is not easy: creating a learning organization embraced by investors and labor alike is very difficult to do. But it is the price of admission if you're going to overcome resistance and laziness.

What about fear? Those who truly understand the need to transform will face their deepest fear: can we compete?

In the span of just a couple of years, two deep-pocketed firms with healthy growth trajectories introduced mobile handset products and services that all but completely eclipsed the functionality of incumbent offerings. The executive who understood the sea change taking place would not concoct a strategy to fight the battle on the incumbent's own terms. That executive would try to understand what the terms of competition were going to become, and ask whether the firm had the balance sheet to scale up and compete on terms set by others.

Mr. Münchau points out that the same phenomenon may be repeating itself among Europe's automakers. They got a late start developing electric vehicle technology. With governments mandating electrification of auto fleets, the threat is not only real, it's got a specific future date on it. Hence the increased consolidation (proposed and real) in the automotive industry over the past decade: an automaker needs scale to develop EV technologies and compete. The automakers that have consolidated are accepting at least some of the reality they face: for many decades, automakers served as national champions creating lots of high-paying industrial jobs, a balance struck among public policy, societal interests, and corporate interests. The change to EV technology is challenging the sustainability of that policy. As if the enormity of fighting outdated public policy weren't enough, carmakers moving from internal combustion to electricity also face the transition from a hardware to more of a software mindset. The ways of working are radically different.

The firm that truly needs to transform doesn't have the luxury of doubling down on what it knows. It must be willing to give up long-held beliefs, change its course of action when the data tells it that it must, and face the future with a confidence born of facts and not conjecture. It must trade ego for humility.

Friday, January 31, 2020

Lost Productivity or Found Hyperefficiency?

Labor productivity creates economic prosperity. Increasingly productive labor results in lower-cost products (greater output from the same number of employees == lower labor input costs per unit), higher salaries (productive workers are valuable workers), greater purchasing power (labor productivity keeps monetary inflation in check for households), increasing sophistication (the skill maturity to take on greater challenges), and higher returns on capital. The more productive a nation's workforce, the higher the average standard of living of its population.
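The arithmetic behind that first parenthetical, with invented figures (a sketch, not data):

```python
# Labor productivity = output per unit of labor; unit labor cost =
# labor cost per unit of output. All figures are hypothetical.

units_produced = 100_000
employees = 50
payroll = 5_000_000  # total annual labor cost, dollars

print(units_produced / employees)   # 2000.0 units per employee
print(payroll / units_produced)     # $50.00 of labor per unit

# Same headcount and payroll, 20% more output: productivity rises,
# labor cost per unit falls, and the product can be priced lower.
units_produced = 120_000
print(units_produced / employees)   # 2400.0 units per employee
print(payroll / units_produced)     # ~$41.67 of labor per unit
```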

In recent years, economists have drawn attention to low productivity growth in western economies as a key factor restraining economic growth and perpetuating low inflation and low interest rates. In particular, they cite the lack of breakthrough technologies - e.g., the emergence of the personal computer in the 1980s - to spur labor productivity and, with it, more rapid economic growth. By traditional economic measures, things do not appear to be getting much better.

There is an alternative perspective that is far more optimistic: digital companies drive down costs through hyper-efficiency (speed, automation and machine scale) and price transparency. Algorithms are cheaper than humans and can be networked to perform complex collections of tasks at a speed, and consequently a scale, that humans cannot achieve. Twinned with the radical reduction of information asymmetry (particularly with regard to product price data), it stands to reason that there has been significant productivity growth in western economies: supply chains have never been so optimized, retail and wholesale transactions never so price-fair and friction-free. Consider: it is considerably less time- and energy-intensive to ask an Echo to order more Charmin toilet paper than it is to drive to a grocery store or pharmacy, walk in, price-compare to justify those few extra pennies for softness, queue, pay, and drive home. The argument for this invisible efficiency is that economic models have simply failed to change in ways that reflect the phenomenon. The productivity is there, and will intensify with technologies such as AI and ML; the instrumentation simply doesn't exist to measure it.

In this telling, productivity through technology is a deflationary force that makes products more affordable. Even if real wages remain stagnant, the standard of living increases because people can afford more goods and services, as they cost less today than they did yesterday. In theory, the rising standard of living will occur regardless of the cost of capital: because retail prices are going down, interest rates could move higher with no ill effects to the economy, juicing returns on capital. The bigger the tech economy, the better off everybody is.

There is truth to this. Consider healthcare: although medical costs are much higher today in nominal terms than they were in 1970, they are much lower in real terms when adjusted both for monetary inflation and medical-technological innovation. If medicine were still practiced today as it was 50 years ago, the cost of delivery would be lower in real terms, but the standard of care would be much, much lower than what it is today. Would you want to receive cardiac treatment at a 1970 standard, pulmonology treatment at a 1980 standard, or HIV treatment at a 1990 standard? Or would you rather be treated for all of these to a standard of care available in 2020? Technology is clearly a deflationary force that increases individual prosperity.
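A minimal sketch of the nominal-to-real adjustment (the procedure cost is invented, and the index levels are rough, illustrative stand-ins for a consumer price index):

```python
# Deflating a nominal past cost into today's dollars using a price index.
# The figures are illustrative, not actual CPI or medical cost data.

def to_todays_dollars(nominal: float, index_then: float, index_now: float) -> float:
    """Restate a past nominal amount in current dollars."""
    return nominal * (index_now / index_then)

cost_1970 = 500.0                      # hypothetical 1970 procedure cost
index_1970, index_2020 = 38.8, 258.8   # illustrative price index levels

print(f"${to_todays_dollars(cost_1970, index_1970, index_2020):,.0f}")
# ~$3,335 in 2020 dollars - and that adjusts for money alone, saying
# nothing of the far higher standard of care today's price buys.
```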

Still, there are three factors that should temper enthusiasm for an unmeasurable tech-led labor productivity bonanza.

The first has to do with the real price of and the real payers for tech-generated benefits. Ride sharing services have added driver/fleet capacity and accelerated speed-of-access for local transportation service. However, the individual consumer isn't fully picking up the tab; the ride is heavily subsidized by private capital. That makes the price affordable to the user. The question is, how sustainable is the price without the private-capital subsidy?

Economic subsidies are a common practice, typically sponsored by governments to protect or advance economic development. Sometimes a subsidy is direct, as is often the case with agricultural commodity price supports: if depressed crop prices drive farmers out of business, a nation loses its ability to feed itself, so in years of commodity gluts governments will offer direct assistance to make farmers whole. And sometimes a subsidy is indirect. The United States was dependent on oil from foreign countries for much of the past 60 years. The price of petroleum products in the US did not reflect the cost of US military bases, or of having the Fifth Fleet patrol the Persian Gulf. The federal government prioritized energy security to guarantee supply and to reduce the risk that supply shocks pose to energy prices. The immediate cost of that security and stabilization was borne by the US taxpayer; the policy was founded on the expectation that the federal government would be made whole over the long term through increasing tax receipts from economic growth that resulted from cheap energy.

There are subsidies that are sustainable and subsidies that are not. In theory, the US projecting military power to secure Middle Eastern oil was a sustainable economic subsidy: containing energy prices while your nation gives birth to the likes of Microsoft and Apple and many other companies seems a good economic bargain (exclusive of carbon emissions, which did not historically factor into economic policy). By comparison, productivity in the Soviet Union grew in lock-step with direct government investment in industry (primarily steel production) through the 1950s and 60s. Trouble was, when the Soviet government pulled back investment, labor productivity growth flatlined. Labor productivity was entirely dependent on outside (i.e., government) financial injection. The lack of organic productivity growth translated into stagnation of the economic prosperity of the masses. A standard of living that was competitive with the United States and Western Europe in the 1950s was hopelessly trailing by the 1980s. Turns out Maggie was right: eventually you really do run out of other people's money.

The investment case for the ridesharing companies is that there will eventually be one dominant player with monopolistic pricing power. A market for on-demand transportation is now established, so a single surviving ridesharing firm will reap the winner-take-all benefit of that market, giving it scale. Being the only game in town, it will have pricing power. In theory, it should also have access to a larger labor pool spanning Subaru drivers to software developers, thus depressing wages, and with them the cost of service. Lower input costs twinned with scale should mean a smaller price increase is needed for the firm to become profitable.

But there are a lot of variables in play here. Ridesharing firms are carrying billions of dollars of losses accreted over many years that they need to make up before their investors are made whole; that will create pressure to raise prices. There are other industries competing for the labor these firms use (especially those software developers), so input costs will not necessarily decline. And because drivers work for multiple ridesharing services, their utilization is already high, meaning there are few economies of scale left to temper the price increases passed on to consumers.

If or when a monopolistic competitor triumphs, prices are going to rise and individual consumers' "productivity" will be impaired by the withdrawal of the price subsidy. Consolidation and scale will not perpetuate the subsidy, so the price of service is going to rise. The subsidy is only sustained if a new entrant with deep-pocketed backers emerges to challenge what will by then be a "legacy" incumbent; in essence, the cycle of subsidy regenerates itself. Don't rule that out as long as capital is cheap. While it's reasonable to assume the industry will run out of greater fools, there has always been a high degree of correlation between "minutes" and "suckers born". The WSJ reported today that Softbank is pumping cash into multiple meal delivery services operating in the same markets and therefore competing directly with one another, each firm engaged in an arms race of subsidies to sign restaurants, delivery labor and customers. It is difficult to fathom the logic of this.

The second factor is the implicit assumption that the tech cycle has triumphed over the credit cycle. There is a popular theory that technological innovation has become more important than capital in setting prevailing economic conditions. The evidence for this is the shift in economic activity steered by emerging technologies in areas such as ecommerce and fintech. A technology-centric business benefits from lower costs for facilities, lower inventory carrying costs, and lower network (transaction) costs, and therefore has a seemingly insurmountable competitive advantage over incumbents. Unfortunately, as I've written previously, the evidence doesn't entirely support this yet. Deep-pocketed incumbents can raise capital to acquire, compromise or corrupt the business models of would-be disruptors, not to mention that would-be disruptors are finding themselves engaged in technological arms races not with incumbents but with other would-be disruptors. This distorts the playing field, making it much more about capital than tech.

It's curious that contemporary strategy among big tech firms is to burrow into the existing economy as un-metered, un-regulated, subscription-based utilities, as opposed to betting on ever-accelerating revenue from their intrinsic value-generative nature. Consider entertainment streaming services: by selling subscriptions, they are willfully exchanging the potential for sky-high equity-like returns from the value of the content they produce (which is how movie studios used to operate) for more modest debt-like returns from the utility that subscribers will pay for access to a library where they can find something they can tolerate just enough to pass the time (which is how cable companies operate). While streaming services are engaged in a long-running competition for content and tech, they have concluded they are not going to win by out-tech-ing or out-content-ing one another. Streaming entertainment is not a value proposition, it is a utility proposition. A utility business model is one that is explicitly (a) not leading with tech innovation and (b) seeking immunity from the credit cycle.

What this tells us is that the tech cycle is not the dominant economic force. As it stands today, more people suffer economically when the credit cycle turns than when the tech cycle turns (i.e., in a dearth of innovative new technologies). A turn in the credit cycle contracts business buying, which creates layoffs. A turn in the tech cycle merely means there will not be a still more convenient way to get a ride from The Loop to O'Hare or food delivered from a Hell's Kitchen restaurant to an apartment in Midtown. While it may happen some day, we are not yet at a point where the tech cycle is triumphant.

The third factor goes to the question of labor capacity versus labor productivity. Labor productivity and labor-saving efficiency are really measures on the same axis: less time, effort and energy necessary to complete a task and ultimately achieve an outcome. A different but equally important dimension is labor capacity: the more people engaged in gainful employment, the greater the level of household income, the more individual households reap economic benefit.

Labor participation in the United States took a direct hit in September 2008 and hasn't recovered. After hovering above 66% for over 18 years, it went into sharp decline, bottoming at 62.5% in 2015 and recovering only to 63.2% today. To put it in absolute terms, there are 20 million more jobs in the US today than there were in 1999 (the year of peak labor participation), but the US population has grown by 48 million over the same period. Job growth hasn't kept pace with population growth. This suggests that the economic benefits of productivity gains (through organic labor productivity or technology) are concentrated in fewer hands, implying that the economic benefits of technology gains are asymmetrically distributed.
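The arithmetic behind that claim, using the post's round figures plus an assumed 1999 employment base (crude, illustrative numbers - this is a share-of-total-population ratio, not the official participation rate, which uses the civilian population aged 16 and over):

```python
# Jobs grew, but population grew faster, so the share of the population
# working fell. The 1999 base figures are rough assumptions.

pop_1999, jobs_1999 = 279_000_000, 134_000_000   # assumed 1999 levels
pop_now = pop_1999 + 48_000_000                  # +48M people since 1999
jobs_now = jobs_1999 + 20_000_000                # +20M jobs since 1999

print(f"1999: {jobs_1999 / pop_1999:.1%} of the population employed")
print(f"now:  {jobs_now / pop_now:.1%} of the population employed")
# 48.0% then vs. 47.1% now: job growth lagging population growth.
```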

Yes, labor capacity is a measure, not a driver. From 1950 to 1967, the labor participation rate hovered in the 59% range. And even with a growing population, technological advances can create price deflation that raises the standard of living for everyone: many and perhaps most of those 48 million additional US citizens since 1999 have smartphones, which none of the 279 million Americans had in 1999. Still, the benefit of those technological advances is asymmetric: those not working are not enjoying the totality of the economic benefits of increased productivity described in the opening paragraph. As much as proponents advocate that technology improves labor productivity, that same tech is also increasing the Gini coefficient.
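For reference, the Gini coefficient summarizes how unevenly a quantity is distributed, from 0 (perfect equality) to 1 (all of it in one pair of hands). A quick sketch with invented incomes:

```python
# Gini coefficient via the mean absolute difference between all pairs.
# The sample incomes are invented for illustration.

def gini(incomes: list[float]) -> float:
    n = len(incomes)
    mean = sum(incomes) / n
    pairwise = sum(abs(a - b) for a in incomes for b in incomes)
    return pairwise / (2 * n * n * mean)

print(gini([40, 40, 40, 40]))   # 0.0   - gains spread evenly
print(gini([10, 20, 40, 130]))  # 0.475 - gains concentrated at the top
```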

Does technology improve productivity? Undoubtedly. But before hailing any technology as an economic windfall on par with traditional measures of labor productivity, it is best to scrutinize how organically it achieves those gains, how resilient they are, and how widely their benefits are spread around the work force. Technology may eventually change traditional economics, but there is one thing even the best technology cannot overcome: there is no such thing as a free lunch.