I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Wednesday, June 30, 2021

Labor's New Deal

The pandemic has created a lot of interesting labor market dynamics, hasn’t it? Week after week brings a new wave of employee survey results that make it clear a lot of workers want to retain a great deal of the location independence they have experienced over the past year. Multiple studies report roughly the same results among knowledge workers, globally: 75% want flexibility in where they work, 30% don’t want to return to an office, and 1 in 3 won’t work for an employer that requires them to be on site full time. In addition, 1 in 5 workers expect to be with a different company in the next year, as many as 40% are thinking about quitting and over half are willing to listen to offers.

This isn’t just sentiment: employees are voting with their feet. The Wall Street Journal reported a few weeks ago that the share of the workforce leaving their jobs is the highest it has been in over twenty years.

Labor wants a new pact.

The post-COVID recovery is a once-in-a-decade economic event. To the extent that a company’s growth is indexed to the growth of its labor force (that is, where near-term automation is not an option), it has to hire. If it doesn’t, it’s going to sit out this recovery. That means businesses are motivated buyers of labor.

The American economy is surging, but employers are struggling to fill skilled and unskilled positions alike. One factor is the absence of slack in the labor market. Curiously, the labor participation rate is plumbing levels not seen since the 1970s. The number of 18-to-65-year-olds actively working has been in steady decline since the mid-2000s, a few years before the 2008 financial crisis. It dropped significantly again with the pandemic, and has not yet recovered to pre-pandemic levels. Statistically, there should be labor market slack, yet there is none: quite a few working-age people are electing not to rejoin the workforce. Another factor is that with every company hiring, it is hard for any one employer to achieve visibility among job seekers. A simple search for “product manager” positions in Chicago yields over 6,300 openings; in New York, over 6,800; and in Dallas, over 5,800. Social media banners announcing “we’re hiring” are useless when every company is hiring.

Labor market tightness and the difficulty of differentiating are forcing companies to raise wages. Large, deep-pocketed employers of unskilled labor, including Walmart, McDonald's and Amazon, have raised their entry-level wages. Mid-tier and mom-and-pop competitors will be forced to do the same. And many employers are responding to their own captive surveys - which yield results like those mentioned above - by offering greater workplace and working-hour flexibility to existing staff and recruits. Average wages are going up, and workplace policies are changing to be more accommodative to labor.

With labor tight and economic expansion all around, employers will become increasingly competitive for labor. They will have to be aggressive just to stay in place. Imagine a company with, say, 100 experienced software engineers, project managers, QA engineers and the like that expects to add a dozen people to the team in the next year. If it loses 20% of this knowledge workforce per the survey results, and if 10% of the people it puts on the payroll are dud hires, it will have to hire upwards of 35 people to achieve a net gain of just 12.
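That arithmetic is worth making explicit. A back-of-envelope sketch in Python, using only the illustrative figures from the paragraph above (none of these numbers are data):

```python
# Back-of-envelope hiring math for the hypothetical 100-person team.
current_staff = 100       # engineers, project managers, QA, etc.
target_net_growth = 12    # planned net additions over the next year
attrition_rate = 0.20     # per the survey results cited above
dud_hire_rate = 0.10      # assumed share of hires that don't work out

departures = current_staff * attrition_rate            # 20 people leave
additions_needed = target_net_growth + departures      # 32 effective additions
hires_needed = additions_needed / (1 - dud_hire_rate)  # ~35.6

print(f"hires needed: {hires_needed:.0f}")  # "upwards of 35"
```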

All of this means that labor is having a once-in-a-generation moment.

Labor's power in America arguably peaked in the 1960s and has been on the wane since, with the firing of the striking air traffic controllers in the early 1980s often held out as a seminal moment in labor's multi-decade decline. But some of you may recall that in the late 1990s, labor briefly had a moment. Those were not only the go-go days of the dot-com era: domestic US call centers were going up in all kinds of American cities, big box retailers wanted their customers to know they were "always open" and kept stores open 24 hours a day (somebody just might be itching to buy a circular saw at 2 a.m.), and fast food drive-thrus stayed open two hours longer than the dining rooms (conveniently, 'til after the pubs closed). For a brief period, "Sales Associate" positions came with medical and retirement benefits. Well, labor is back. The WSJ made the point last week that labor has power today that it has not enjoyed in decades. And, per the aforementioned statistics, labor is exercising that power.

With so much agitation among workers and demand for labor high, conditions are ripe for labor market “disruptors”. Some employers will simply become very aggressive recruiters of other firms’ employees. If disruptive recruiting, employment and retention practices prove successful, we will see winners and losers emerge in “the war for talent.” And it isn’t start-up or fringe firms taking aggressive postures. According to the WSJ, Allstate has determined that 75% of its positions can be done remotely, while another 24% can be done in a hybrid fashion. That is 99% of a traditional employer’s workforce with location flexibility. This means location independence may not be a worker bonus so much as it is simply the new norm. It also means that a company may not simply struggle to hire: a failure to adequately adjust to the future of work will make a company vulnerable to disruption, because its workforce is an easy target for other employers.

History tells us that labor’s moment may not last for very long. But the longer that labor shortages last, and particularly with so much competition for knowledge workers, labor won’t come away empty handed.

Monday, May 31, 2021

Is There a Business Case for Utility Tech Investments?

Last year I wrote a piece on legacy modernization initiatives. Among the points I made was that legacy modernization is at best a break-even proposition: modernization is simply trading something old for its modern counterpart, getting the same capabilities in return. Of course, there are first order benefits to legacy modernization. Additional or more comprehensive capabilities that come standard with a new COTS product; lower labor intensity and less dependency on costly knowledge workers required to sustain legacy assets; and reducing systemic fragility (e.g., production downtime) are all very real economic benefits that have P&L impact. But by and large, these benefits at best cover the costs for a modernization effort: the new assets will come with a cost to acquire and customize, a cost to migrate, a cost to integrate with other systems, and annual costs to maintain, support and evolve. Software ain’t cheap to buy, implement and live with.
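To make the break-even shape of that argument concrete, here is a minimal sketch of the cost-benefit arithmetic. Every figure is invented for illustration; the point is only that the benefit stream roughly offsets the cost stream, with no windfall left over:

```python
# Hypothetical five-year view of a legacy modernization ($M).
# All numbers are invented for illustration.
years = 5

# One-time costs
acquire_and_customize = 3.0   # license/build plus customization
migrate = 1.5                 # data conversion and cutover
integrate = 1.0               # wiring into surrounding systems

# Recurring amounts, per year
run_cost_new = 0.8            # maintain, support, evolve the new asset
run_cost_legacy = 1.6         # costly knowledge workers, downtime, etc.

total_cost = acquire_and_customize + migrate + integrate + run_cost_new * years
total_benefit = run_cost_legacy * years   # avoided legacy run cost

print(f"cost ${total_cost}M vs. benefit ${total_benefit}M")  # $9.5M vs. $8.0M
```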

But one thing I did not point out in last year’s blog is that a legacy modernization - even a sweeping one - falls into the category of utility tech, not value-generative tech.

A value-generative investment is a roll of the dice that, say, a new market opportunity can be developed or a cost efficiency can be made where none was possible before. There is some uncertainty whether a market opportunity can be converted or a cost efficiency can be realized because of factors outside of anybody’s control: that buyers will see the company as a provider of a new category of service it has never offered before, that a problem space is sufficiently consistent to allow for systemic improvements, that the technology exists to perform the task in the environments and conditions where it must perform, and so on. A value-generative investment is the pursuit of something that was not previously necessary or possible, and therefore had not been done before. A value-generative investment is an exercise in deploying risk capital through IT in the pursuit of extraordinary benefit that yields competitive advantage.

This does not describe legacy modernization. Investments in utility capabilities are the pursuit of improvements in the way things are done, because at present they are inadequate by contemporary standards. The risks in a legacy modernization investment are entirely to do with execution of the investment itself, not how well the investment performs post-production. For legacy modernization benefits to be realized, the assets must be built to be reliable and low-maintenance; and customization, conversion and cutover costs must not spiral out of control. The proximate causes for utility investment failure are all within the confines of the execution of the investment itself: that the people doing the work are competent in the domain and technologies; that there is low staff turnover for the duration; that the team is not creating an entire project phase of execution (in the form of unanticipated late-stage integration and testing) to solve problems entirely of its own making; and so forth. True, there are unknowns in the business domain and in the legacy systems, and such uncertainty does create the risk for costs to increase. However, uncertainty of this kind is generally covered by cost contingency in the investment proposal. Even in the extreme cases where legacy assets are completely unmaintainable, legacy system modernization is still the replacement of one known domain of capabilities with another.

The nature of the uncertainty in the investment matters because it changes the nature of the capital allocation question put to an investment committee. For value-generative investments, the investment committee is asked whether it wants to gamble some of the firm’s capital in the uncertain pursuit of extraordinary benefit. By and large, only the investment capital itself is at risk, because an investment committee can terminate an underperforming value-generative investment with little reputational and operational blowback. However, for utility investments, the investment committee is asked whether it wants to tie up corporate capital for an extended period of time to improve the quality of services within the firm. Utility investments tend to be all-in commitments, so the investment committee is also underwriting the risk that additional capital will be necessary and that it will be tied up for a longer period of time to make good on the modernization investment.

Hence these are two very different types of capital allocation. One is betting pocket change at a casino table with something less than a 100% expectation of a full payoff, and perhaps of any payoff at all. The other is prepaying two years of a health club membership in the anticipation that regularly using the health club will result in lower insurance premiums. In capital terms, the former is equity, the latter is debt.
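The equity/debt distinction can be shown with a toy expected-value comparison. A sketch with invented probabilities and payoffs, purely to contrast the payoff shapes, not to model any real investment:

```python
# Toy expected-value comparison of the two capital allocations above.
# Every probability and dollar figure is invented for illustration.

# Value-generative (equity-like): risk capital chasing an outsized
# payoff, with the option to terminate early and cap the loss.
stake = 1.0                      # $M of risk capital
p_win, gross_payoff = 0.3, 5.0   # 30% chance the bet returns 5x
p_miss, salvage = 0.7, -0.5      # otherwise terminate early, losing half
ev_equity = p_win * (gross_payoff - stake) + p_miss * salvage  # +0.85

# Utility (debt-like): an all-in commitment; the benefit is capped near
# break-even, and the live risk is an overrun on committed capital.
budget = 1.0                     # $M committed up front
p_overrun, extra = 0.4, 0.5      # 40% chance of needing 50% more capital
expected_outlay = budget + p_overrun * extra                   # 1.20
ev_utility = budget - expected_outlay                          # -0.20

print(f"equity-like EV: {ev_equity:+.2f}  utility EV: {ev_utility:+.2f}")
```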

The justification for each investment is markedly different. The upside potential - however remote - for a value-generative pursuit will eclipse its cost. The upside potential for a utility pursuit will be break-even at best. Even the most thorough of cost-benefit analyses will not make a utility investment a no-brainer. Look, even a value-generative pursuit that fails yields a good story for the CEO to tell the board, provided it wasn’t an outsized gamble of scarce corporate funds. But many a C-level exec has been fired for cost overruns on utility investments.

A compelling value-generative tech proposal gets the investment committee to ask, “we accept the possibility, so how probable is the payoff and how long is the window of opportunity?” Yet even the most compelling utility tech proposal gets the investment committee to ask, “we accept the need to do this, but do we have to do this right now?”

The question that a cost-benefit analysis for a utility tech investment must frame is, “why should we do this right now?” We’ll look at what that analysis consists of in a future post.

Friday, April 30, 2021

Armchair Regulator

Bernie Madoff, one of the greatest financial fraudsters of all time, died earlier this month. That his was possibly the largest Ponzi scheme ever perpetrated created a sense of history unfolding in front of our eyes as it was exposed in 2008. That his fraud was exposed at the height of the greatest economic crisis since 1929 gave the financial crisis a name and a face that the legions of bankers, politicians, Treasury and Fed officials could not. That this was a fraud hidden in plain sight for years, probably decades, one that had duped so many, cemented the popular impression that nobody had their hands on the wheel of the financial economy; it was a runaway train that spelled doom for its willing and unwilling passengers alike.

My interest in the Madoff fraud has always had little to do with the fraud itself or the context of the times, and more to do with the failure of regulators to detect and investigate Madoff’s activities. The regulatory lapses would be comical if their effects weren’t so tragic.

Through the eyes of regulatory practice, the Madoff fraud is a story of what regulators did not do. The SEC received a half-dozen detailed allegations about Madoff’s business over a 16-year span, allegations that the SEC either dismissed without investigation or investigated in a half-hearted fashion. In light of what was ultimately learned, the latter are egregious lapses in regulatory conduct. SEC officials accepted implausible statements from Madoff as fact. They failed to perform the most rudimentary independent research. To wit: a single call to the DTCC would have revealed that Madoff hadn’t executed a trade in years.

From a regulatory perspective, the primary conclusions have been (a) shame on the SEC and (b) that regulators are not looking for fraud while it is happening, only describing it in detail once it has been exposed.

Yet there are deeper, personal, questions to reflect on.

The most popular - and the most useless - of these questions is whether or not you would have been a willing investor. The “would you have invested” question is useless in no small part because it implies that the victims are complicit in the perpetuation of the fraud by their own blind faith borne entirely of greed. And it is true that for decades, Madoff’s returns were outrageous and inexplicable by all standards, yet he persisted year after year after year, attracting new investors to his ever-growing Ponzi scheme. But if invited, coerced, hounded, and even shamed by people in your trust network, would you have put money in? Of course you would have. Very few individual investors have the sophistication and skepticism, let alone the cuts and bruises and calluses, of an institutional investor. Add the marketing that Madoff employed - personal relationships, exclusivity, secrecy, and the twisted implication that because the SEC had investigated his firm no less than six times and found nothing it was quite clearly legitimate - and the individual with capital to invest didn’t have much of a chance. As the 19th-century sage P.T. Barnum pointed out, the coefficient of suckers to birth rates has always been nearly 1.0. Very few have special immunity.

The better questions have to do with how you would have behaved as a regulator. The popular posture toward the SEC is one of disappointment: that it failed the very investors it is paid to protect, that the institution itself is corrupted by regulatory capture. Yet had you been an SEC investigator, would you have recognized the impossibility of Madoff’s returns when presented with little more than other people’s theories? Would you have been diligent with inquiries after the boss told you to drop it? Would you even have been just a little bit more hygienically thorough?

Answer those questions in the context of business-as-usual at any regulatory agency. How understaffed is the SEC? How many plausible allegations of fraud is the SEC presented with each month? And the allegations they receive, are they truly the product of independent researchers seeking to right wrongs, or are they motivated by some nefarious intent such as professional jealousy? How does the SEC balance the quantity versus the quality of investigations they undertake? How routine do these investigations become to the investigators themselves? What makes any of us think we would not ourselves be burdened by these - and many, many more - factors in the disposition of our jobs as investigators? This line of questioning is not intended to suggest the SEC was not derelict in its duties, but to point out how difficult the job is.

If the regulatory failures surrounding Madoff are examples of bad governance, what does good look like? I have written elsewhere that we can take cues from the positive behaviors and actions of the activist investor playbook. But if the regulatory lapses in the Madoff case tell us anything, it is that a playbook is one thing, a persona is another. Dr. John Kay summed this up best when he described the person best suited to do this kind of job:

You require both an abrasive personality and considerable intellectual curiosity to do the job.

It is one thing to criticize the regulators in hindsight from the comfort of one’s armchair. It is another to possess the personality traits, the discipline to know when to bring them to bear, and the energy to sustain them, in every situation and every day in which we find ourselves.

Wednesday, March 31, 2021

The Ever Given is Not a Black Swan Event

By transporting a container more cheaply than any other vessel afloat, [Emma Maersk] and her six sister ships were expected to stimulate even faster growth in international trade, lowering the cost of moving goods through the supply chains that had reshaped the global economy…

An extremely large container ship - the Ever Given - has been in the news this past week for running aground and blocking traffic through the Suez Canal. This has caused considerable disruption to global supply chains. Some have incorrectly referred to the incident and the aftermath as a “black swan” event.

The disruption is not simply the result of a ship running aground and blocking traffic in a busy, narrow passageway. The disruption is a function of the scale of the container ships themselves. In the mid-2000s, with global trade growing by leaps and bounds every year, shipping companies opted for larger and larger vessels. The largest of these ships each carry the equivalent of over 10,000 truckloads of goods. With trade growing, the value proposition of ever-larger ships was self-evident in 2007: “On a per-container basis, a larger vessel cost less to build and operate than a smaller one, allowing the owner to undercut competitors’ cargo rates and still earn a healthy profit.”

Such scale led to optimization among tightly-coupled participants in the global supply chain. “Operating on regular schedules - such that an identical vessel departs Shanghai every Wednesday, stops in Singapore nine days later and arrives in Antwerp five weeks hence, with tight connections to barges and freight trains - intermodal container transport gave manufacturers and retailers the confidence to plan tightly organized long-distance supply chains.”

But as Nassim Taleb pointed out well over a decade ago in his book The Black Swan, “Globalization [of financial markets] creates interlocking fragility, while reducing volatility and giving the appearance of stability. In other words it creates devastating Black Swans.”

And so it has happened with container shipping.

Though supremely efficient at sea, Emma and the even larger ships that followed in her wake became a nightmare. By making freight transportation slower and less reliable than it had been decades earlier, they helped to stifle the globalization of manufacturing…

Local optimization - in this case, minimizing the cost of transporting a container of goods over the sea - distorted the economies of scale of global shipping. Shippers, facing excess capacity after the financial crisis of 2008, either went out of business, merged, or did deals to fill their ships. Optimization of the oceangoing containership fleet came at the cost of the efficiency of the rest of the supply chain into which it was integrated.

Discharging and reloading the vessel took longer as well, and not only because there were more boxes to put off and on. The new ships were much wider than their predecessors, so each of the giant shoreside cranes needed to reach a greater distance before picking up an inbound container and bringing it to the wharf, adding seconds to the average time required to move each box. Thousands more boxes multiplied by more handling time per box could add hours, or even days, to the average port call. Delays were legion.
Freight railroads staggered under the heavy flow of boxes into and out of the ports. Where once an entire shipload of imports might be on its way to inland destination within a day, now it could take two or three. Queues of diesel-belching trucks lined up at terminal gates, drivers unable to collect their loads because the ship lines had too few chassis on which to haul the arriving containers. And often enough, the partners in one of the four alliances that came to dominate ocean shipping didn’t use the same terminal in a particular port, requiring expensive truck trips just to transfer boxes from an inbound ship at one terminal to an outbound ship at another.

But the disruption to supply chains created by the beaching of the Ever Given is not a black swan event. First, the Ever Given was involved in an incident in 2019 in which it was caught in high winds while operating at slow speed and collided with a pleasure ferry in the Elbe River in Hamburg, Germany. The same conditions have been cited as contributing factors to the vessel’s recent beaching in the Suez, and it is fair to say those conditions are not exactly the stuff of “one-hundred-year events”. Second, the supply chain bedlam triggered by the Ever Given’s beaching exposed the asymmetric downside risk endemic to the systemic fragility - that is, the woefully inadequate robustness - in tightly-coupled supply chains. However, this asymmetric risk - as evidenced by the previous two paragraphs - was hidden in plain sight.

An incident is not a “black swan event” simply because some people could not fathom the possibility of it occurring. The triggering event had happened before. The systemic fragility was as plain as day. In fact, the article I’ve quoted throughout this post describing that fragility - titled, appropriately enough, The Megaships That Broke Global Trade - was not published in the wake of the Ever Given’s beaching: it appeared last October in the Wall Street Journal. This is not a black swan event. This was a fragile system operating on borrowed time.

I kept a copy of the October WSJ article intending to use it as a metaphor for large software development processes designed to optimize labor unit costs, specifically developer labor unit costs. The pattern is the same, the risks are the same, the consequences are the same. But with hindsight the article reads much better as a chronicle of one-way risk: a tightly-coupled system with highly concentrated activity in the value stream that, in turn, is locally optimized to a single variable is a powder keg that can be detonated by the most mundane of sparks.

As written, the October WSJ article about the Emma Maersk is a story of misplaced local optimization creating systemic inefficiency and undesirable outcomes. Ironically, the author laments the factors resulting from local optimization of the oceangoing container fleet that do not “flatter the bottom line”: the need for “keeping more inventory, shipping via multiple routings and producing in multiple factories rather than in giant sole-source plants”. While these measures certainly increase costs, they also make manufacturing and distribution far more robust (that is, less fragile). The Ever Given incident points out the benefits of doing so. Cheap insurance policies against one-way downside risk, such as multiple facilities and alternative shipping routes, are a far preferable price to pay than having factors outside of your control - a beached container vessel in the Suez - make a mockery of fragile optimization.

While the Emma Maersk article is a compelling story of systemic distortions resulting from local optimization, it is really the story of a critical yet fragile system waiting for a simple, pedestrian event to realize asymmetric downside risk.

Thursday, March 11, 2021

Canoecopia 2021 Is This Weekend

Cheryl and I are presenting a session we titled People, Paddling and Food at Canoecopia this year. Our presentation is about how food affects behaviors in the backcountry. Well, more accurately, it is about how you can get in front of those behaviors: preventing food from having a negative impact on how a group functions when paddling in the backcountry, and making the most of the positive effect food can have on group dynamics. We hope you find the subject interesting enough that you will attend.

Even if you do not find this subject compelling, please give Canoecopia a look. It is the largest paddling conference in North America. It was canceled last year as pandemic lockdown policies took effect hours before the conference was to begin. It is a virtual conference this year.

A lot of people have worked really hard to create a virtual outdoors conference, from creating the conference infrastructure, to making compelling vendor interactions, to recording and staging pre-recorded presentations, to coordinating all of this, to the presenters making work for the coordinators (we mostly got it right on our third submission...), as well as many others doing hundreds of other behind-the-scenes tasks.

Registration is $15. For $15 you get access to amazing, thoughtful people in the paddling community spanning a complete spectrum of topics.

2020 was a difficult year for backcountry trips - overcrowding, closed borders, etc. 2021 will probably not be much different. Regardless of the circumstances, we all have the opportunity to improve our thoughtfulness of, and tradecraft in, the backcountry. Canoecopia is the premier forum in which we can learn how to do that.

I do hope you will attend.

Sunday, February 28, 2021

Echoes

In recent days, I’ve been reminded of some core company values, once fresh and different, later taken for granted. It was good to be reminded of them.

* * *

I was on a call the other day when a member of the client’s team said, “we don’t prioritize because something is easy, we prioritize something because it is hard.” They went on to acknowledge that some of my colleagues had counseled them to do this some months before I started working with them. I was overjoyed to hear the client say this.

When I first joined ThoughtWorks in 2005, one of our core messages was to solve an “outer quadrant” problem first - something in the problem space that is difficult and complex. Something small and discrete, of course, as opposed to trying to boil the ocean, but very much in the category of “hard” rather than “easy”. This was contrary to the prevailing wisdom at the time (which persists to this day), which was to lead with quick wins, pilots or proofs of concept.

By and large, none of these latter approaches addresses the core of the problem that needs to be solved. “Quick wins” attack the margins. They might very well be useful, but quick wins are just as likely to contribute to systemic complexity by layering on new points of integration, adding logic and system redundancies, and creating confusion over the veracity of data or transaction success. I once came into contact with a company whose primary engagement pattern was to create duplicative data warehouses by copying data from existing data sources and writing reports off them. They had done this 6 or 7 times at one company alone. It isn’t difficult to imagine the “single source of truth” problem that arose from timing discrepancies and reliability problems among all of those data warehouses. All those quick wins did was make the company more sclerotic.

Pilots and proofs of concept tend to be evaluated in environments heavily skewed in favor of the pilot’s success, not in environments representative of the actual problem space that needs to be solved. The result of quick wins and pilots and proofs of concept is a false positive with regard to the affordability and even the solvability of a particular problem space. You might feel good for having done something, but that something doesn’t make material progress toward addressing the underlying problem.

It seems to me that the “quick win” and “pilot” philosophy became increasingly prominent in American business in the 1980s. In the 1970s, American corporations had earned a reputation for poor quality. They had also become gripped by analysis paralysis fueled by, among other things, a fear of making a mistake. Management consultants like Tom Peters struck a nerve on this conflation of “poor quality” and “doing nothing about it”. In the book “In Search of Excellence”, published in 1982, Peters and Robert Waterman listed what they called the “8 characteristics of excellent companies”. Top of the list was “a bias for action”: “a preference for doing something - anything - rather than sending a question through cycles and cycles of analyses and committee reports.” Peters, Waterman and others advocated “a bias for action” to cut the Gordian knot of inaction.

At the same time, Japanese management techniques were in the ascendancy. Japan’s economy was booming, in no small part because products made in Japan had earned a reputation for both quality and affordability. American managers wanted to know what their Japanese counterparts did differently. Among the differences that came to the surface was the practice of solving small problems, and learning from them, before solving a big problem. Whereas an American company would attempt to solve Big Problems with Big Solutions, the Japanese management philosophy attacked Big Problems by solving Small Problems with Small Solutions. In the American company, more often than not the Big Solution yielded a Big Mistake: massive cost overruns, excessive time delays, and all-too-frequently outright failure. In the Japanese company, the Big Problem was resolved over time as the sum of the evolution of Small Solutions.

Iterative solution development resonated with the “bias for action” theme because it provided a means of overcoming the collective learned helplessness among employees of American companies. A systemic problem in an American corporation was larger than any one person or small group of people (or their budgets) within an enterprise could solve, but small problems could be solved through simple, affordable solutions. The emerging personal computer technology of the time enabled this tremendously. The result was a brief management revolution of the 1980s, where pent up frustration among employees of being trapped in oppressive systems that yielded poor quality met up with the evangelical management themes of excellence that stressed “action” and “devolved decision making”, which in turn were operationalized by the liberating personal technologies of the day.

The themes of a “bias for action” and “iterative solutions” and many other things stressed by Peters and others under the aegis of “excellence” live on in many forms today, not least of which are Lean and Agile software development as well as product management practices. All well and good, but somewhere along the way the messages got corrupted. All too often, a “bias for action” prioritizes “quick wins”, which is interpreted as “go after the low-hanging fruit” and applied as “do the easy stuff first.” The easy stuff is almost always at the margins and not the core, and is not representative of the need. Similarly, even the most well-intentioned proof of concept misses the mark if it intentionally excludes the challenges central to the problem space.

To hear somebody say “we’re prioritizing the hard stuff first” is rewarding indeed. It’s an indicator that they’re serious about solving the problem at hand.

* * *

When I joined ThoughtWorks, one of the things that many of the experienced software developers stressed was mentorship. They would seek out less experienced colleagues who were difficult to staff on a client project and mentor them in everything ranging from technical skills, to the value system for how we worked, to ways in which they could be effective consultants.

Mentoring can be a slow process: the results are difficult to measure, it doesn’t scale, and it doesn’t garner a whole lot of attention. Mentoring is not something that comes naturally to a lot of people, and by and large there are more people interested in doing things that are noisy and that scale - going on sales calls, speaking at conferences - than in doing the humdrum work of mentoring a person.

Mentoring does more than just help people acquire skills and consulting acumen. It gives new people first-hand experience in practicing the values that a company of people holds. This builds the muscle memory that becomes their default way of working. That, in turn, builds the culture of a company better than any email or podcast or missive ever can.

We had virtual leaving drinks for a long-time colleague particularly adept at doing this kind of mentoring, as he is joining another firm. I will miss him.

Sunday, January 31, 2021

Distribution

In the past year, I’ve written several times about changes taking place and likely to be long lasting in the wake of the COVID-19 pandemic. One area I’ve touched on, but haven’t delved into much, is distribution.

Producers of all kinds have had to create new ways of connecting with their customers. Restaurants had to contend with severely curtailed access to their primary distribution channel, the dining room. Countless manufacturers had to contend with the complete loss of their primary distribution channel, small retail and department stores. Digital products like movies had to contend with severely curtailed access to their primary distribution channel, theaters.

Producers have responded by finding, creating and reprioritizing alternative means of distribution. All restaurants (well, those still in business) increased their distribution through take-away meals. eCommerce retail sales of everything from groceries to gadgets skyrocketed last year. In December, Warner Bros. announced it will distribute its 2021 movies simultaneously in theaters and on its HBO Max streaming service.

And the opportunities to innovate in distribution are far from over. To wit: in December, the FAA relaxed guidelines for drone delivery, expanding the potential for commercial drone use in the US (and of course, drone delivery has expanded in the rest of the world).

Distribution will be among the primary ways the chattering classes characterize the post-pandemic world. To what extent do people return to old - largely physical - forms of distribution? Are the new forms of distribution long-lasting, or are they just phenomena of their times, like sherry served at Christmastime in Britain?

Change is like a regenerative fire. Fire needs three things: fuel, spark, and accelerant.

The fuel is policies in response to COVID-19 and, in turn, the individual and business reaction to them. Companies need to sell and people want to buy. The longer those policies remain in place, the more significant these new forms of distribution become to producers. Those policies also depress the prices for assets tightly coupled to past forms of distribution, things like commercial airplanes and shopping malls.

The spark is the realization that businesses don’t need to operate the way they have for decades: expenses don’t need to be so high, a company doesn’t need as much square footage, or it can use its physical space differently. The evidence exiting 2020 supports this: even though revenues declined in 2020, earnings per share for the S&P 500 for the year look great.

The accelerant is cheap capital: Fed policy will maintain cheap capital for the foreseeable future. This creates liquidity that has to go somewhere, and will find every nook and cranny.

Distribution has changed. Policy is entrenching those changes. The businesses that didn’t collapse survived in large part because they found new distribution channels. Innovation in distribution is accelerating. Cheap capital will finance more innovation in distribution. It’s a reinforcing cycle. And it’s just beginning.