I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Tuesday, March 31, 2020

Autonomy Now

Distributed software development has been practiced for decades. Companies with global footprints were experimenting with this at least as far back as the 1970s. Skilled labor, global communication networks and collaborative tools made "offshore development" possible at scale from the mid-1990s onward. Improved skills, faster networks and more sophisticated collaboration tools have made distributed development practical for very complex software initiatives.

There can be significant differences in the way a team collaborates internally, and the way it collaborates with other teams across a program. Consider a distributed Agile program consisting of multiple teams based in different countries around the world. Under normal circumstances, individual teams of 8 to 12 people work day-in and day-out with colleagues in the same physical location. Intra-team events take advantage of the team's close proximity: the team room, collaborative practices like pair programming and desk checks, team ceremonies like stand-ups, and low-fidelity information radiators such as card walls are all high-bandwidth collaboration techniques. In-person communication is robust, spontaneous and fluid, so it makes sense to take full advantage of it. Conversely, inter-team events such as a Scrum-of-Scrums involve only key team members such as the project manager and lead developer, and are scheduled to take advantage of time zone overlap (or at least to minimize its inconvenience). In practice, any single team in a large program - even an Agile team - can function internally in a tightly coupled manner even though it is loosely coupled to other teams in the same program of work.

The COVID-19 pandemic has a lot of the global work force working in physical isolation from one another; this pushes distributed work models to their extreme. Yes, of course, it has long been possible for teams of individuals to work completely remotely from one another: e.g., tenured experts in the relevant technology who are also fluent in the business context and familiar with one another. But most teams don't consist of technology experts who know the domain and one another. In the commoditized IT industry, people are staffed as "resources" who are qualified based on their experience with relevant technologies. Domain expertise is a bonus, and interpersonal skills (much less familiarity with team-mates) never enter the equation. A good line manager and competent tech lead know how to compensate for this through spontaneous, high-bandwidth interaction: if somebody's work is going adrift, pull them aside, ask the business analyst or product owner to join you, whiteboard and code together for a bit, and get it fixed. A good line manager and tech lead compensate for a lot of the messiness intrinsic to a team of commodity-sourced people. The physical isolation much of the world is experiencing makes this compensation more difficult.

There are lots of companies and individuals self-publishing practical advice for remote working. Many are worth reading. Though the recommendations may read like simple hygiene, good remote collaboration hygiene reduces individual frustration and maximizes the potential communication bandwidth. An "everyone is 100% remote from one another" model has scale limitations, and poor hygiene will quickly erode whatever scale there is to be had.

My colleague Martin Fowler posted a two-part series on how to deal with the new normal. The posts have a lot of practical advice. But the concluding paragraphs of his second post address something more important: it is imperative to change management models.

Being independent while working remotely is not working remotely in an independent manner. The more tightly coupled the team, the more handoffs among team members; the more handoffs, the more people have to engage in intra-team communication; the lower the fidelity of that communication, the higher the propensity for mistakes. More mistakes mean lower velocity, lower quality, and false positive status reports. In practice, the lower the fidelity of intra-team collaboration within a tightly coupled team, the lower the fidelity of inter-team collaboration, regardless of whether those teams are tightly or loosely coupled.

This is where a distributed program of truly Agile teams has a resiliency that Agile-in-name-only teams, command-and-control SAFe teams, and waterfall programs cannot possess by their very nature. A requirement written as a Story that fulfills the INVEST principle is an autonomous unit of production. A development pair that can deliver a Story with minimal consultation with others in the team and minimal dependencies on anybody else in the team is an autonomous delivery team. A Quality Assurance Analyst working from clear acceptance criteria for a Story can provide feedback to the development pair responsible for its development. Stories that adhere to the INVEST principle can be prioritized by a product owner and executed in a Kanban-like manner by the next available development pair.

A tightly coupled team operating in a command-and-control style of management doesn't scale down to the more atomic level of the individual or pair. The program manager creates a schedule of work, down to the individual tasks that will fulfill that work and the specialist roles that will fulfill those tasks. Project managers coordinate task execution among individual specialists in their respective teams. One project manager is told by three people working on tasks for a requirement that their respective tasks are complete, yet the whole of their work is less than the sum of the parts. Now the manager must chase after them to crack their skulls together to get them to realize they are not done, and needs to loop in the tech lead to figure out where the alignment problem(s) are. This is difficult enough to do when people are in distributed teams in a handful of office buildings; it's that much more difficult when they are working in isolation from one another. Product quality, delivery velocity, and costs all suffer.

Command-and-control management creates the illusion of risk-managed delivery at large scale with low overheads. Forget about scaling up with efficiency; to be robust, a management paradigm needs to be able to scale down efficiently to deliver meaningful business results at the atomic level of the individual or pair. Micromanagement does not scale down efficiently because of its inherently high overheads. Self-directed autonomous teams do, because of their inherently low overheads.

In 2013, I spilled a few photons on the management revolution that never happened: for a variety of reasons in the 1980s, we believed we were on the cusp of a devolution of authority; instead, we got much denser concentration of authority. In 2018, I spilled a lot of photons on autonomous teams at enterprise scale being an undiscovered country worth the risk of exploring.

The COVID-19 pandemic is creating intense managerial challenges right now. It is important to note that there are likely to be long-term structural effects on businesses as well. Perhaps companies will encourage employees to work from home more regularly so they can permanently reduce office square footage and therefore lease expense. Perhaps a new generation of secure mobile technologies will make it seem idiotic that large swaths of workers are office- rather than home-based. Perhaps companies will revise their operating models and position specs, requiring greater individual role autonomy to maintain high degrees of productivity in regular and irregular operating conditions. Perhaps metrics for contract labor - metrics that are not attendance based - will emerge to satisfy expectations of value delivery.

Perhaps, with the potential for long-term effects looming, it is time to go explore that undiscovered country of autonomy.

Saturday, February 29, 2020

To Transform, Trade Ego for Humility

Ten years ago, when the mobile handset wars were in full swing, I wrote a blog post analyzing the differences among the leaders in the space. Each had come to prominence in the handset market differently: Nokia was a mobile telephony company, Blackberry a mobile email company, Apple a personal technology company, Google an internet search and advertising company.

With the benefit of hindsight, we know how it played out. Nokia hired a manager from Microsoft to wed the handset business to an alternative to iOS that wasn't made by Google. RIM initially doubled down on their core product, but eventually scotched their proprietary OS in favor of Android. Neither strategy paid off. Nokia exited the handset business in 2013. RIM exited the handset business in 2016. Both companies burned through billions of dollars of investor capital on losing strategies in the handset market.

Evidence published over the years suggests that the self-identity of the losing firms worked against them: interactions via voice call and email claimed a shrinking share of time spent on mobile devices, overtaken by emerging interactions such as social media. Opening platforms to independent software developers created an entirely new category of software - the mobile app. iOS and Android were well positioned to create and exploit the change in human interaction with technology. Nokia and Blackberry were not.

* * *

Earlier this week, Wolfgang Münchau posited that the European Union is at a cultural disadvantage to the United States and China in the field of Artificial Intelligence. Instead of finding ways to promote AI through government and private sector development and become a leader in AI technology, the EU seems intent on defending itself from AI through regulation. For that to be effective, as Mr. Münchau writes, technology would have to stop evolving. Since regulators tend to be unable to imagine a market different from today's, new AI developments will be able to skirt any regulation when they enter the market. It seems to be a Maginot Line of defense.

When it comes to technology, Mr. Münchau writes that the European mindset is still very much rooted in the analogue age, despite the fact that the digital age began well back in the previous century. This is somewhere on a spectrum of a lack of imagination to outright denial.

That raises the question: why does this happen? In the face of mounting evidence, why do people get their ostrich on and bury their heads in the sand? Why does a company double down instead of facing its new competitive landscape? Why does the leadership of a socio-economic community of nearly 450 million people simply check out?

Mr. Münchau points out three phenomena behind cultural barriers to adaptability.

The dominant sentiment in modern-day Europe is anxiety. Its defining need is protection. And the defining feature of its collective mindset is complacency. In the European Commission’s white paper on artificial intelligence all three come together in an almost comical manner: the fear of a high-tech digital future; the need to protect oneself against it; and the complacency inherent in the belief that regulation is the solution.

What stands in the way of change? Fear. Resistance. Laziness.

* * *

Some executive at some company believes the company needs to change in response to some existential threat. That which got it here will not take it forward. Worse still, its own success is stacked against it. What we measure, how we go to market, what we make, how we make, all of that and more needs a gigantic re-think. Unleash the dogs of transformation.

In any business transformation, there is re-imagining and there is co-option. Wedding change to your current worldview - your go-to-market, your product offering, your ways of working - impairs your outcomes. At best, it will make your current state a little less bad. Being less bad might satiate your most loyal of customers, it might improve your production processes around the margins, but it won't yield a transformative outcome.

Transformation that overcomes fear, resistance, and laziness requires doing away with corporate ego. "As a company, we are already pretty good at [x]." Well, good for you! Being good in the way you are good might have made you best in class for the industry you think you're in. What if instead we took the position, "we're not very good at [x]"? General Electric's industrials businesses grew in the 1990s once they inverted their thinking on market share: instead of insisting on being the market share leader, GE redefined its markets so that no business unit had more than 10% market share. That meant looking for adjacent markets, supplemental services, things like that. It's hard to grow when you've boxed yourself into a narrow definition of the markets you serve; it's easier to grow when you give yourself a bigger target market. That strategy worked.

Re-imagining requires more than just different thinking. It requires humility and a willingness to learn. From everybody. The firm's capital mix (debt stifles change, equity does not), capital allocation processes (waterfall gatekeeping stifles adaptability), how it sees the products it makes (software and data are more lucrative than hardware), how it operates (deploy many times a day), must all change. That means giving up allegiance to a lot of things we accept as truth. This is not easy: creating a learning organization embraced by investors and labor alike is very difficult to do. But if you're truly transforming, this is the price of admission if you're going to overcome resistance and laziness.

What about fear? Those who truly understand the need to transform will face their deepest fear: can we compete?

In the span of just a couple of years, two deep-pocketed firms with healthy growth trajectories introduced mobile handset products and services that eclipsed the functionality of incumbent offerings by 99%. The executive who understood the sea change taking place would not concoct a strategy to fight the battle on the incumbent's own terms. That executive would try to understand what the terms of competition were going to become, and ask whether the firm had the balance sheet to scale up to compete on terms set by others.

Mr. Münchau points out that the same phenomenon may be repeating itself among Europe's automakers. They got a late start developing electric vehicle technology. With governments mandating electrification of auto fleets, the threat is not only real, it's got a specific future date on it. Hence there has been increased consolidation (proposed and real) in the automotive industry in the past decade: an automaker needs scale to develop EV technologies to compete. Those automakers that have consolidated are accepting at least some of the reality that they face: for many decades, automakers as national champions creating lots of high-paying industrial jobs struck a balance among public policy, societal interests, and corporate interests. The change to EV technology is challenging the sustainability of that policy. As if the enormity of fighting outdated public policy weren't enough, carmakers moving from internal combustion to electricity also face the transition from a hardware to more of a software mindset. The ways of working are radically different.

The firm that truly needs to transform doesn't have the luxury of doubling down on what it knows. It must be willing to give up on long-held beliefs, change its course of action when the data tells it that it must, and face the future with a confidence borne of facts and not conjecture. It must trade ego for humility.

Friday, January 31, 2020

Lost Productivity or Found Hyperefficiency?

Labor productivity creates economic prosperity. Increasingly productive labor results in lower cost products (greater output from the same number of employees == lower labor input costs), higher salaries (productive workers are valuable workers), greater purchasing power (labor productivity allows households to keep monetary inflation in check), increasing sophistication (skill maturity to take on greater challenges), and higher returns on capital. The more productive a nation's workforce, the higher the average standard of living of its population.

In recent years, economists have drawn attention to low productivity growth in western economies as a key factor restraining economic growth and perpetuating low inflation and low interest rates. In particular, they cite the lack of breakthrough technologies - e.g., the emergence of the personal computer in the 1980s - to spur labor productivity and with it, more rapid economic growth. By traditional economic measures, things do not appear to be getting much better.

There is an alternative perspective that is far more optimistic: digital companies drive down costs through hyper-efficiency (speed, automation and machine scale) and price transparency. Algorithms are cheaper than humans and can be networked to perform complex collections of tasks at a speed, and subsequently a scale, that humans cannot achieve. Twined with the radical reduction of information asymmetry (particularly with regard to product price data), it stands to reason that there has been significant productivity growth in western economies: supply chains have never been so optimized, retail and wholesale transactions so price-fair and friction-free. Consider: it is considerably less time- and energy-intensive to ask an Echo to order more Charmin toilet paper than it is to drive to a grocery store or pharmacy, walk in, price compare to justify those few extra pennies for softness, queue, pay, and drive home. The argument for this invisible efficiency is that economic models have simply failed to change in ways that reflect this phenomenon. The productivity is there, and will intensify with technologies such as AI and ML; the instrumentation simply doesn't exist to measure it.

In this definition, productivity through technology is a deflationary force that makes products more affordable. Even if real wages remain stagnant, the standard of living increases because people can afford more goods and services as they cost less today than they did yesterday. In theory, the increasing standard of living will occur regardless of the cost of capital: because retail prices are going down, interest rates could move higher with no ill effects to the economy, juicing returns on capital. The bigger the tech economy, the better off everybody is.

There is truth to this. Consider healthcare: although medical costs are much higher today in nominal terms than they were in 1970, they are much lower in real terms when adjusted both for monetary inflation and medical-technological innovation. If medicine were still practiced today as it was 50 years ago, the cost of delivery would be lower in real terms, but the standard of care would be much, much lower than what it is today. Would you want to receive cardiac treatment at a 1970 standard, pulmonology treatment at a 1980 standard, or HIV treatment at a 1990 standard? Or would you rather be treated for all of these to a standard of care available in 2020? Technology is clearly a deflationary force that increases individual prosperity.
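The adjustment the healthcare example describes - deflating for monetary inflation and crediting the improved standard of care - can be sketched as a quick calculation. All of the numbers below are hypothetical placeholders chosen only to illustrate the mechanics, not actual healthcare cost or inflation data:

```python
# Sketch of a quality-adjusted cost comparison. Deflate the nominal cost
# to constant dollars, then divide by a quality index representing the
# improvement in the standard of care (1.0 = the 1970 standard).
def cost_per_unit_of_care(nominal_cost, inflation_multiplier, quality_index):
    constant_dollars = nominal_cost / inflation_multiplier
    return constant_dollars / quality_index

# Hypothetical: a treatment costing $5,000 in 1970 and $50,000 in 2020,
# with ~7x cumulative inflation and a 3x improvement in outcomes.
then = cost_per_unit_of_care(5_000, 1.0, 1.0)   # 1970 dollars, 1970 care
now = cost_per_unit_of_care(50_000, 7.0, 3.0)   # 1970 dollars, 2020 care

print(f"1970: ${then:,.0f} per unit of care")   # $5,000
print(f"2020: ${now:,.0f} per unit of care")    # ~$2,381
```

The nominal price rose tenfold, yet the real, quality-adjusted cost of a unit of care fell - the deflationary effect of technology described above.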

Still, there are three factors that should temper enthusiasm for an unmeasurable tech-led labor productivity bonanza.

The first has to do with the real price of and the real payers for tech-generated benefits. Ride sharing services have added driver/fleet capacity and accelerated speed-of-access for local transportation service. However, the individual consumer isn't fully picking up the tab; the ride is heavily subsidized by private capital. That makes the price affordable to the user. The question is, how sustainable is the price without the private-capital subsidy?

Economic subsidies are a common practice, typically sponsored by governments to protect or advance economic development. Sometimes a subsidy is direct, as is often the case with agricultural commodity price supports: if depressed crop prices drive farmers out of business, a nation loses its ability to feed itself, so in years of commodity gluts governments will offer direct assistance to make farmers whole. And sometimes a subsidy is indirect. The United States was dependent on oil from foreign countries for much of the past 60 years. The price of petroleum products in the US did not reflect the cost of US military bases, nor of having the Fifth Fleet patrol the Persian Gulf. The federal government prioritized energy security to guarantee supply and reduce the risk to energy prices of supply shocks. The immediate cost of that security and stabilization was borne by the US taxpayer; the policy was founded on the expectation that the federal government would be made whole over the long term through increasing tax receipts from economic growth that resulted from cheap energy.

There are subsidies that are sustainable and subsidies that are not sustainable. In theory the US projecting military power to secure Middle Eastern oil was a sustainable economic subsidy: containing energy prices while your nation gives birth to the likes of Microsoft and Apple and many other companies seems a good economic bargain (exclusive of carbon emissions, which did not historically factor into economic policy). By comparison, productivity in the Soviet Union grew in lock-step with direct government investment in industry (primarily steel production) through the 1950s and 60s. Trouble was, when the Soviet government pulled back investment, labor productivity growth flatlined. Labor productivity was entirely dependent on outside (e.g., government) financial injection. The lack of organic productivity growth translated into stagnation of economic prosperity of the masses. A standard of living that was competitive with the United States and Western Europe in the 1950s was hopelessly trailing by the 1980s. Turns out Maggie was right: eventually you really do run out of other people's money.

The investment case for the ride sharing companies is that there will eventually be one dominant player with monopolistic pricing power. A market for on-demand transportation is now established, so a single surviving ridesharing firm will reap the winner-take-all benefit of that market, giving it scale. Being the only game in town, the surviving firm will have pricing power. In theory, the surviving firm should have access to a larger labor pool spanning Subaru drivers to software developers, thus depressing wages, and thus the cost of service. Lower input costs twined with scale should mean a lower price increase is needed for the firm to become profitable.

But there are a lot of variables in play here. Ridesharing firms are carrying billions of dollars of losses accrued over many years that they need to make up for their investors to be made whole; that will create pressure to raise prices. There are other industries competing for the labor these firms need (especially those software developers), so input costs will not necessarily decline. And because drivers work for multiple ridesharing services, their utilization is already high, meaning there are few remaining economies of scale to temper the price increases passed on to consumers.

If or when a monopolistic competitor triumphs, prices are going to rise and the individual consumer's "productivity" will be impaired by the withdrawal of the price subsidy. Consolidation and scale will not perpetuate the subsidy, so the price of service is going to rise. The subsidy is only sustained if a new entrant with deep-pocketed backers emerges to challenge what will by then be a "legacy" incumbent; in essence, the cycle of subsidy regenerates itself. Don't rule it out: it remains possible as long as capital is cheap. While it's reasonable to assume the industry will run out of greater fools, there has always been a high degree of correlation between "minutes" and "suckers born". The WSJ reported today that Softbank is pumping cash into multiple meal delivery services operating in the same markets and therefore competing directly with one another, each firm engaged in an arms race of subsidies with the others to sign restaurants, delivery labor and customers. It is difficult to fathom the logic of this.

The second factor is the implicit assumption that the tech cycle has triumphed over the credit cycle. There is a popular theory that technological innovation has become more important than capital in setting prevailing economic conditions. The evidence for this is the shift in economic activity steered by emerging technologies in areas such as ecommerce and fintech. A technology-centric business benefits from lower costs for facilities, lower inventory carry costs, and lower network (transaction) costs, and therefore has an insurmountable competitive advantage over incumbents. As I've written previously, the evidence unfortunately doesn't entirely support this yet. Plus, deep-pocketed incumbents can raise capital to acquire, compromise or corrupt the business models of would-be disruptors, not to mention that would-be disruptors are finding themselves engaged in technological arms races not with incumbents, but with other would-be disruptors. This distorts the playing field, making it much more about capital than tech.

It's curious that contemporary strategy among big tech firms is to burrow into the existing economy as un-metered, un-regulated, subscription-based utilities, as opposed to betting on ever-accelerating revenue from their intrinsic value-generative nature. Consider entertainment streaming services: by selling subscriptions, they are willfully exchanging the potential for sky-high equity-like returns from the value of the content they produce (which is how movie studios used to operate) for more modest debt-like returns from the utility that subscribers will pay for access to a library where they can find something they can tolerate just enough to pass the time (which is how cable companies operate). While streaming services are engaged in a long-running competition for content and tech, they have concluded they are not going to win by out-tech-ing or out-content-ing one another. Streaming entertainment is not a value proposition, it is a utility proposition. A utility business model is one that is explicitly (a) not leading with tech innovation and (b) seeking immunity from the credit cycle.

What this tells us is that the tech cycle is not the dominant economic force. As it stands today, more people suffer economically when the credit cycle turns than when the tech cycle turns (e.g., a dearth of innovative new technologies). A turn in the credit cycle contracts business buying, which creates layoffs. A turn in the tech cycle means there will not be a still more convenient way to get a ride from The Loop to O'Hare or food delivered from a Hell's Kitchen restaurant to an apartment in Midtown. While it may happen some day, we are not yet at a point where the tech cycle is triumphant.

The third factor goes to the question of labor capacity versus labor productivity. Labor productivity and labor-saving efficiency are really measures on the same axis: less time, effort and energy necessary to complete a task and ultimately achieve an outcome. A different but equally important dimension is labor capacity: the more people engaged in gainful employment, the greater the level of household income, the more individual households reap economic benefit.

Labor participation in the United States took a direct hit in September 2008, and hasn't recovered. After hovering above 66% for over 18 years, it went into sharp decline, bottoming at 62.5% in 2015 and recovering only to 63.2% today. To put it in absolute terms, there are 20 million more jobs in the US today than there were in 1999 (peak labor participation), but the US population has grown by 48 million citizens. Job growth hasn't kept pace with population growth. This suggests that the economic benefits of productivity gains (through organic labor productivity or technology) are concentrated in fewer hands, implying that the economic benefits of technology gains are asymmetrically distributed.
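The jobs-versus-population arithmetic above can be sanity-checked with a back-of-the-envelope calculation. This is only a sketch against the rounded figures cited in the text (the official participation rate is measured against the civilian noninstitutional population aged 16 and over, not total population), so treat the output as illustrative:

```python
# Rough consistency check of the participation figures cited above.
pop_1999 = 279_000_000     # approximate US population, 1999
pop_added = 48_000_000     # population growth since 1999 (cited above)
jobs_added = 20_000_000    # job growth since 1999 (cited above)
rate_1999 = 0.662          # participation rate near its 1999 peak

participants_1999 = pop_1999 * rate_1999
implied_rate = (participants_1999 + jobs_added) / (pop_1999 + pop_added)

print(f"implied participation today: {implied_rate:.1%}")  # ~62.6%
```

Even with crude inputs, the implied rate lands near the cited 63.2%: it is job growth lagging population growth that drags the ratio down.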

Yes, labor capacity is a measure, not a driver. From 1950 to 1967, the labor participation rate hovered in the 59% range. And even with a growing population, technological advances can create price deflation that raises the standard of living for everyone: many and perhaps most of those 48 million additional US citizens since 1999 have smartphones, which none of the 279 million Americans had in 1999. Still, the benefit of those technological advances is asymmetric: those not working do not enjoy the totality of economic benefits of increased productivity described in the opening paragraph. As much as proponents advocate that technology improves labor productivity, that same tech is also increasing the Gini coefficient.

Does technology improve productivity? Undoubtedly. But before hailing any technology as an economic windfall on par with traditional measures of labor productivity, best to scrutinize how organically it achieves those gains, how resilient they are, and how widely the benefits are spread across the work force. Technology may eventually change traditional economics, but there is one thing even the best technology cannot overcome: there is no such thing as a free lunch.

Tuesday, December 31, 2019

But Is It Really a Tech Firm?

There are lots of executives who would have you believe that the business they run is really a tech business. With tech firm valuations still at sky-high levels, it's easy to understand why. Tech commands a premium valuation because (a) it offers the potential for non-linear growth relative to investment; (b) low barriers to entry into adjacent markets amplify that growth; (c) scale of offerings changes the commercial model from transactional to flat-fee subscription, making a tech firm an unregulated utility; (d) payments take place behind the scenes of the tech consumption, creating sustainable recurring revenue; and (e) tech industries tend to be winner-take-all.

Translated into investor-bait, here is what this means. Selling access to movies is all well and good. Selling all forms of entertainment - on different media, on different frequencies, for different prices - is even more interesting. Selling access to it as a service at a fixed price makes it an uninterrupted cash flow. Getting a significant number of people on the planet to pay a fixed subscription fee once a month, every month, is epic scale.

This kind of reach isn't a new or even a recent phenomenon. You may recall a time when McDonald's restaurants used to show the number of burgers they sold on the golden arches signs, in the tens and later in the hundreds of millions. Then it simply became "billions and billions served." You may not recall that twenty years ago, McDonald's (briefly) targeted their sales in terms of the percentage of total meals consumed globally on a daily basis. Around the same time, the largest of the large banks was targeting a total number of accounts across all product categories relative to the entire population of the planet. 'Twas ever thus: Standard Oil achieved monopoly status over American oil in the late 19th century; Rome achieved hegemony over Europe.

Tech didn't invent the economic harvesting of humans at scale, it's simply the current means of achieving it. In Roman times, it was achieved through territorial conquest (tax revenue through subservience of subjects). In the industrial age, it was achieved by selling productivity, e.g., labor-saving machines and the energy to run them (revenue from products to improve productivity of business and household activities). Today, it is achieved by selling entertainment (revenue from selling services that fill the passive time of individuals). At a time in history when conquest is out of favor and productivity gains have slowed, monetizing everybody's abundant downtime from all those labor-saving products of the industrial age is the next frontier.

Not exclusively, of course. There are still plenty of opportunities for productivity gains. Electric vehicles require fewer components, which means less labor is required to manufacture them. Tax compliance is mostly rules, and rules can be implemented as algorithms that replace large numbers of auditors. And when cars can drive themselves, individual, on-demand transportation isn't limited by the number of drivers but by the availability of vehicles. There are still plenty of productivity gains to be realized, and their potential still grabs headlines, but productivity is the old frontier; entertainment is the new.

Regardless of the source - political domination, economic productivity, or entertainment - the potential for scale drives equity value. Potential is more lucrative to investors than reality. Bond investors are told the company is growing at a predictable rate and spending is under control, which secures the credit rating and coupon; equity investors are told that it isn't the sky that's the limit, but our ability to fathom every quantum reality of where the business could go, and that tech is the enabling factor. Hence there are plenty of CEOs and CIOs alleging they are "tech companies that happen to operate in the [insert-industry-name-here] industry."

You've probably heard this statement hundreds of times from hundreds of executives, to a point that it doesn't merit even as much as an eye-roll any more. I've always thought it would be helpful to have a consistent and objective means of assessing whether they really fit the bill of a tech firm or not.

Andy Kessler wrote an interesting op-ed in the WSJ a few weeks ago cataloging five characteristics that define a tech firm. They are: growth; R&D intensity; margins; productivity; and tech spending intensity. This is a very useful heuristic.

Growth: "Even though prices go down, units go up faster so you get rapid and sustainable growth." Among other things, that means race to the bottom pricing isn't destructive if volume rises ahead of it. A good litmus test of growth: "Beware of fake growth like market-share growth: If you sell seat cushions for a 50,000-seat stadium, you can double sales every year but eventually you’ll run out of seats. Instead look for giant markets." If the company does not have truly exponential growth potential through tech, they are not a tech firm.

R&D: This is a positive and negative indicator. On the plus side, tech firms have to invest for invention and innovation. On the minus side, "Companies often boost earnings by starving research, a serious red flag." A company not investing in original research in tech is not a tech company. But it isn't the creation of tech that matters as much as how effectively it is mainstreamed. Mr. Kessler wrote another op-ed this past week in which he points out that such things as voice command, live streaming and medical monitoring have rapidly gone from new to commonplace and, in some cases, indispensable. The ability to create technology simply yields another Xerox PARC or Kodak digital camera; the ability to operationalize R&D is another matter entirely.

Margins: "the ideal tech product doesn’t cost anything to distribute—roughly zero marginal cost, like software." This is true for bits, silica and advertisements. Whatever a tech firm is selling should have near-zero marginal cost for each additional sale. By this definition, consulting firms are not tech firms: because they rent bodies, they have direct costs proportional to sales.

Productivity: Marginal improvements in productivity have been with us since the dawn of time; replacing entire swaths of labor activity is transformational. Levers and pulleys allowed humans to power simple devices to create incremental labor saving, while the flywheel engine completely replaced the need for people to perform specific tasks. Organizing people to drive their cars to transport others is not a productivity boost; cars that navigate themselves to people in need of mobility without the presence of a driver is a productivity boost.

Tech spend intensity, or the extent to which a company must continuously upgrade core capacity. A company on the technology treadmill has no choice but to spend capital on enhancement and expansion. Note that this does not apply to self-inflicted woes. I've worked with entirely too many firms that are hostage to tech spend commitments due to poor tech lifestyle decisions: vendor spend is directly proportional to cleaning up mistakes made by those very same vendors. Committed spend for purposes of hygiene is not the same indicator of tech intensity as disciplined spend for purposes of improvement.

This simple heuristic makes it easy to score (Mr. Kessler suggests awarding one point for each). By his reasoning, a score of 3 or above qualifies a company as a tech business, while a 1 qualifies it as a tech user; quickly applying it to firms with which I'm familiar winnows out the wanna-bes. Next time somebody is touting their tech credentials, apply this simple rating to see how well they stack up. It will be more constructive than an eye-roll.
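For fun, the five-point score fits in a few lines of code. A minimal sketch: the criterion labels and the `score_tech_firm` function are my own framing of Mr. Kessler's heuristic, not anything he published, and the "somewhere in between" verdict for a score of 2 is my own interpolation.

```python
# One point per criterion met, per Kessler's heuristic as summarized above.
KESSLER_CRITERIA = (
    "exponential growth potential",   # giant markets, not market-share growth
    "sustained R&D investment",       # and the ability to operationalize it
    "near-zero marginal cost",        # bits, not bodies
    "transformational productivity",  # replaces labor, not just assists it
    "disciplined tech spend",         # improvement, not cleanup
)

def score_tech_firm(meets: dict[str, bool]) -> tuple[int, str]:
    """Award one point per criterion met; 3+ = tech business, 0-1 = tech user."""
    score = sum(bool(meets.get(c, False)) for c in KESSLER_CRITERIA)
    if score >= 3:
        verdict = "tech business"
    elif score == 2:
        verdict = "somewhere in between"
    else:
        verdict = "tech user"
    return score, verdict

# Example: a consulting firm with real R&D but body-shop economics.
score, verdict = score_tech_firm({
    "exponential growth potential": False,
    "sustained R&D investment": True,
    "near-zero marginal cost": False,   # revenue scales with headcount
    "transformational productivity": False,
    "disciplined tech spend": True,
})
print(score, verdict)  # 2 somewhere in between
```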

Saturday, November 30, 2019

Muddling Through

British trade in the West was never an instrument of empires. It had no headquarters like the compounds at Montreal and no capitol like the rock-rooted fortress at Quebec. It had no seasonal rhythm like the canoe caravans coming down the rivers on the spring race of water to the tall ships above the quaysides loading their pyramids of peltry. It had no trace of the pageantry of black-robed priests visioning by their campfires in the forest a new empire for the Church, and no imperial plan of feudal seigniories and savage nations submissive to the rule of the Rock. The British trade was sporadic, individual and motivated by simple greed. When forty dollars' worth of trade goods would secure a thousand dollars' worth of peltry, there was incentive enough to bring pack trains to Ohio.
Walter Havighurst, Land of Promise

In 1959, Dr. Charles Lindblom posited that companies that had disciplined processes for experimentation and discovery had an advantage over those that operated, in the words of Dr. John Kay, against "a single comprehensive evaluation of all options in light of defined objectives". The argument Doctors Lindblom and Kay make is common sense. For those of us without the gift of omniscience, the ability to reassess objectives and tactics with the benefit of first-hand market knowledge is a handy capability to have.

While this is intuitively appealing to entrepreneurially-minded managers, it is decidedly unappealing to risk-averse capital. Complex corporate strategy derived from analysis and modeling creates the appearance of authoritative expertise. Authoritative experts are in the business of selling predictability. Debt capital likes predictability. As long as there is cheap capital to be raised from risk-averse investors there will be complex business strategies. This doesn't make for good strategy or good investment. This just makes it something that happens.

There is an obvious contemporary comparison between Amazon (financed with equity capital) and WeWork (financed largely with debt capital), and no doubt there will someday be a good case study that sheds light on the differences between the two firms. Suffice to say for now that in the latter's case, somebody tells a good story (Adam Neumann at WeWork) and somebody has money burning a hole in their pocket (Masayoshi Son at SoftBank). The story makes for an eye-popping valuation, but the story doesn't make it a rational valuation.

Human history is rife with examples of big, ambitious, up-front design that fails to live up to the hype and reward the capital behind it. I've been doing some research into the development of North America from the time of the Renaissance. The opening of the North American continent created a massive trade opportunity for European countries. Although the Spaniards were the first to arrive during the Renaissance, the French were first to pursue opportunities in the North American interior. The interior was rich with wildlife. The indigenous peoples were adept at harvesting the pelts of that wildlife. European manufactured products could be profitably traded for those pelts: $40 of trade goods to $1,000 of prepared pelts, per the opening quote.

As lucrative as the trade opportunities were, the French dreams for the development of the interior were even more ambitious.

Having seen the immense forests of the Ohio and the broad prairies of the Mississippi, La Salle formulated his life's ambition. He would establish French civilization in the rich country between the two rivers. He would begin by systematizing and enlarging the fur trade, using cargo vessels on the lakes, erecting fur depots on the rivers, establishing a chain of warehouses and magazines. It was a bold undertaking that looked to the protection of French interests all the way from the mouth of the Mississippi to the Great Lakes and the establishing of stations of prestige and power at a dozen strategic points.
[...]
It was a dimly comprehended country, but La Salle saw it more clearly than any man of his age. He made the restless [Governor] Frontenac envision it like a panorama from the Rock above the St. Lawrence. And it made the ambitious governor grasp his program of a commercial empire with French goods going systematically to the strategic posts and the fur caravans drawing in over the great web of rivers to the broad sea lanes of the Lakes. Together they drew up a design of occupation, fortification and settlement.

La Salle's big vision needed big execution. It did not suffer from a lack of big execution. He oversaw the construction of the 60-foot-long Griffon, the first ship ever on the Great Lakes beyond Ontario; a vessel of that size could transport larger quantities of trade goods than the largest fleet of canoes. He established trade relationships with interior peoples. He established forts throughout the present-day states of Ohio, Indiana and Illinois, reaching as far west as the Mississippi.

What the big vision did suffer from was unforeseen circumstances, to which La Salle was simply unable to respond. Advance traders squandered their French-made trade goods, trading for themselves rather than for their employer. One of La Salle's trusted lieutenants - Louis Hennepin - was taken prisoner by the Sioux. La Salle's men abandoned his key fortification of Fort Crevecoeur. A key trading partner - a village of the Illinois - was defeated by the Iroquois. A storm over Lake Michigan claimed the Griffon, laden with pelts from the North American interior for Europe. Ultimately, La Salle's detractors were successful in discrediting him in Montreal and Paris, and his creditors closed in on him. While he would ultimately gain support from the French crown for one final expedition, it would fail for the same reasons his previous expedition did: ambition impervious to ground truths.

Though the ensuing years saw scattered new trading posts and mission stations - at Chicago and St. Joseph, at Detroit, Cahokia and Vincennes - there was no bold design of a French civilization encompassing the whole interior country.

The legacy of the French vision lingers on in the names of rivers, cities and counties that bear the influence of the French explorers and the names of the explorers themselves: Eau Claire, Fond du Lac and St. Louis; Hennepin, Joliet and Marquette; and of course, La Salle himself. But these are just echoes of the past. As Dr. Havighurst points out, the North American market for European trade goods fell into the hands of those who were not executing in pursuit of a grand design, but content to muddle through.

No surprise, then, that if the 20th century instantiations of these 18th century ambitions - Singer Industries and TRW - are remembered at all, it is as legacy place names and not 21st century concerns.

Thursday, October 31, 2019

The Law of Unintended Consequences

Over and above all, it remains to be seen how far the super-Bank will make use of its immense facilities for credit expansion. This is the aspect of the scheme which deserves the most attention, as it opens up a vista of alarming possibilities. The scheme itself is sound, and it is far from its authors’ intention to make of the Institution a means for international credit inflation. But then, the bank scheme of John Law was also in itself sound....It also remains to be seen whether the scheme of our modern John Laws -- however sound their intention may be -- will not be brought to shipwreck...
-- Is Libra really the world’s most ambitious international settlement system? FT Alphaville

Technology is possibility. Technology is potential. Technology is hope. It drives out inefficiencies. It destabilizes authority. It rights wrongs. We've witnessed examples of all of these things in the last two-and-a-half decades, sometimes in dramatic fashion. They seem to occur at an ever accelerating rate.

Among the things that make any technology magical is its ability to elevate each and every individual user to the center of the universe. One of the ways a technology does this is by rapidly and iteratively incorporating things users ask for. Short delivery times allow for feedback from a wide variety of current and potential users to be factored, explored, and refined continuously - and equitably - into the software. This gives iterative tech the unique capability of placing every individual at the center of value delivery, as there are many intersection points of features and types of user. When done well, there isn't a single person imposing their vision of what users should do, but community preferences and desires crowdsourced to achieve a goal state. This is the egalitarianism of technology, manifested through a product mindset and Agile practices.

Little wonder that tech sports a halo in the eyes of economists, politicians, and the masses alike. Yet we know from experience that tech is not a one way trip to a universally better tomorrow.

We think of technology as a force for good because of the benefits we have seen it yield, but we do not often consider the problems it is likely to leave in its wake. Cheap capital flooded into sales-tax-free e-commerce at the cost of brick-and-mortar firms and their employees, and by extension the municipal & state coffers to which those firms and employees contributed taxable sales and incomes. Using advertising to subsidize innocuous user activities like information or product search gave rise to creepy surveillance capitalism. Energy-efficient buildings don't fulfill their promise because "buildings don't use energy -- people do." It's a phenomenon called the rebound effect: tech-driven advances in energy efficiency result in increased rather than decreased consumption. "[P]redicting what people will do is notoriously difficult."

Tech firms advocate big tech solutions to big economic and societal problems. Somebody thinks people pay too much in fees to transfer money, so they propose to "revolutionize payments" through cryptocurrencies. Somebody thinks that asset prices are too high, so they propose to "revolutionize asset ownership" through fractionalization. Somebody thinks that low-risk borrowers are penalized by being pooled with high-risk borrowers; somebody else thinks that too many high-risk borrowers are excluded from credit markets, so each builds technology products targeted at the margins of credit markets.

Because people have limited capacity to imagine the world in a state different from how it exists today (a phenomenon known as status quo bias), the debate focuses on the past and present, not on what the brave new world of "revolutionized payments" or "revolutionized credit formation" will mean.

Here is what those things mean. Unrestricted movement of capital across borders is a pro-cyclical driver of capital flight, which amplifies currency fluctuations and interest rate volatility. Siphoning off the extremes of the credit pool removes high-quality borrowers and encourages loans to borrowers of extremely low credit quality; taking the low-risk borrowers out of the pool increases the interest rates charged to middle- and high-risk borrowers. It also enables unregulated institutions to write usurious loans that find their way into the mainstream financial system because the notes are ultimately bought by banks. Banks will not write those loans themselves: given the creditworthiness of the borrowers, regulatory agencies prohibit charging the interest rates banks would need to charge. Ironically, banks can (and do) buy the loans ultimately provided to those borrowers by unregulated financial institutions and put them on the books as high-yield assets. Fringe financial institutions win, banks win, borrowers lose.
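The adverse-selection arithmetic can be made concrete with a toy model. All of the numbers below - the default probabilities, the pool composition, and the target return - are illustrative assumptions of mine, not market data; the point is only the direction of the effect when low-risk borrowers leave the pool.

```python
def breakeven_rate(default_probs: list[float], target_return: float = 0.04) -> float:
    """Flat rate a pool must charge to hit the target expected return,
    assuming total loss on default and full repayment otherwise."""
    p_default = sum(default_probs) / len(default_probs)
    # (1 - p) * (1 + r) = 1 + target  =>  r = (1 + target) / (1 - p) - 1
    return (1 + target_return) / (1 - p_default) - 1

# Hypothetical pool: 40 low-risk, 40 mid-risk, 20 high-risk borrowers.
full_pool = [0.01] * 40 + [0.05] * 40 + [0.15] * 20
# The same pool after fintechs siphon off the 40 low-risk borrowers.
remaining = [0.05] * 40 + [0.15] * 20

print(f"{breakeven_rate(full_pool):.1%}")   # rate with low-risk borrowers in
print(f"{breakeven_rate(remaining):.1%}")   # higher rate once they're gone
```

With these assumed numbers, the breakeven rate for everyone left in the pool rises from roughly 10% to roughly 13.5% - the middle of the pool pays for the departure of the top.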

Big tech solutions to problems real or perceived on the margins amplify volatility. That makes them contributors to, and potentially generators of, Black Swan events. The technologies that aspire to increase accessibility or reduce friction in isolation of other market characteristics such as regulation actually contribute to volatility in the markets they allege to optimize. True, they may do so at their own expense: marketplace lenders, dependent on banks to buy the loans they write, are extremely vulnerable to the credit cycle they promised to circumvent. But as big tech is a winner-take-all, loser-take-nothing proposition, and as big tech harvests rather than liberates the individual, big solutions from big tech are a one-way proposition that transfers income from the core of an economy to the big tech company that ultimately wins out.

Can technology make improvements in payments and credit formation? Sure. But perhaps better to take an evolutionary approach rather than a revolutionary approach. Lots of innocent people get hurt badly in revolutions. They are powerless to defend their interests.

Oh, and about the quote that opened this post: you might be thinking it's about Libra. It isn't. It was written in 1972 about the Bank for International Settlements. Plus ça change...

Monday, September 30, 2019

The Financialization of Disruptive Technology

It's fashionable to champion an investment-oriented model for software development, particularly around exploratory opportunities. Allocate risk capital, run experiments through software, learn what works and what doesn't work, re-focus, rinse, repeat, reap rewards.

I've been a proponent of companies doing this for a very long time. It entwines the notion of devolved decision making with thinking of IT as an investment rather than a cost. Invest in what you know paid off handsomely for Peter Lynch. "Continuously adjust your investment position based on what you learn" seems an apt mantra for today's world. Organize for innovation, re-acquire lost tribal knowledge, challenge - but respect - commercial orthodoxy, and constantly re-apply what you learn to change the rules of engagement in an industry. Every day you're in business is a day you're able to bring the fight, and this is simply the new way of bringing the fight. Better figure it out.

What if this is not a viable strategy?

I've danced around this question for the past 7 or 8 years, challenging the invest-for-disruption premise from a lot of different angles. Among the problems:

  1. The labor density of new ideas has risen at nearly 3x the rate of inflation. As the FT put it, that implies it is getting harder to find new ideas.
  2. Investment yields are highly concentrated. There are plenty of analyses to show that a small percentage of investments yield the majority of the gains. A recent FT Lex article on biotech investing drives the point home: "A 20-year study found only one-fifth of exits were profitable. Just 4 per cent of investments made half the returns." Picking winners is hard.
  3. Regulated industries are unappealingly complex, but those complexities exist to protect consumer and provider alike. Denying, ignoring, or circumventing market sophistication results in bad outcomes for everybody: investors are subject to cycles they thought they were immune to, customers get robbed, and in the end management takes the same path their orthodox predecessors did decades ago.
  4. Cheap capital makes it easy for anybody to enter the innovation game. The multitude of companies competing to offer home meal kits and ride-hailing show there are no barriers to entry. Unique ideas aren't unique for very long, and the economics of exploiting them are much shorter lived.
  5. Deep pocketed investors make it expensive to stay in the innovation game. The WSJ has pointed out quite a few times that We Company, Uber, Tesla, and many other firms subsidize every customer transaction with investor capital. And as this graphic illustrates, that is an extraordinarily expensive investment proposition.

The whole point of the portfolio model applied to captive technology investing was to avoid taking a long position in any one thing. That created nimbleness at the portfolio level such that capital - and the knowledge workers that capital pays for - could be rapidly redeployed to the best opportunity given our most current information. This took advantage of a unique characteristic of software vis-a-vis its industrial (hardware) predecessors: real-time adaptability. Whether it was the accounting department building tools in VisiCalc in 1982 or a team of developers creating the company's first e-commerce site in 1997, the ability to rapidly deploy a new capability in software created an operational differentiator. Manufacturing changes took years. Organizational changes took months. Software changes took minutes.

That meant that software had the potential to be lower-case-i-investing: we knew in the early 1980s and again in the late 1990s that applied adaptable cheap technology could create incremental efficiency gains and therefore advantages. The formula was to exploit the adaptability of software and expedite its application: get new code changes deployed every month, every day, every hour when possible. As an operating phenomenon - that is, as it impacted day-to-day operations - this offered tremendous potential for competitive advantage: land punches left, right and center at an alarming rate and you put all your competitors at a disadvantage.

Yet per the above, software is no longer strictly an operating phenomenon. It's a financial phenomenon. Software is now upper-case-I-investing: all positions are long positions, and the stakes are winner-take-all. The incremental nature of the portfolio model has more to do with trench warfare in World War I than it does with sustainable competitive advantage. These are wars of attrition.

I've chronicled this phenomenon over the years, and over the course of that time have written up simple playbooks for strategic responses: i.e., the incumbent-cum-innovator and late movers. These are appropriate as far as they go, but incomplete once the tech has been fully financialized. If the tech business has been fully financialized, the playbook has to reflect the influence of finance, not tech.