I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Sunday, April 30, 2023

Measured Response

Eighteen months ago, I wrote that there is a good case to be made that the tech cycle is more economically significant than the credit cycle. By way of example, customer-facing tech and corporate collaboration technology contributed far more to robust S&P 500 earnings during the pandemic than the Fed’s bond buying and money supply expansion. Having access to capital is great; it doesn’t do a bit of good unless it can be productively channeled.

Twelve months ago, I wrote a piece titled The Credit Cycle Strikes Back. This time last year, rising interest rates and inflation reminiscent of the 1970s cast a pall over the tech sector, most obviously with tech firms laying off tens of thousands. Arguably, it cast a pall over the tech cycle in its entirety, from households forced to consolidate their streaming service subscriptions to employers increasingly requiring their workforce to return to office. Winter had come to tech, courtesy of the credit cycle.

Silicon Valley Bank collapsed last month. The balance sheet, risk management, and regulatory reasons for its collapse are well documented. The Fed responded to SVB’s collapse by providing unprecedented liquidity in the form of 100% guarantees on money deposited at SVB. The headline rationales for unlimited deposit insurance - economic policy, political exigence - are also well documented elsewhere. Still, it is an economic event worth looking into.

An interesting aspect of the collapse of SVB is the role that social media played in the run on the bank. A recent paper presents prima facie evidence that the run on SVB was exacerbated by Twitter users. In a pre-social media era, SVB’s capital call to plug a risk management lapse may very well have been a business-as-usual event; that is, at least, what it appears SVB’s investment banking advisors anticipated. Instead, that capital call was a spark that ignited catastrophic capital flight.

If the link between Tweets and capital flight from SVB is real, the Fed’s decision looks less like a backstop for bank failures caused by poor risk management decisions, and more like a pledge to contain the impact of a technology cycle phenomenon on the financial system. As the WSJ put it this week, “… Twitter’s role in the saga of Silicon Valley Bank reiterated that the dynamics of financial contagion have been forever changed by social media.” Most banks had paid attention to the fact that Treasurys had declined in value and took appropriate hedge positions to protect their core business of maturity transformation. Based on fundamentals, it wasn’t immediately obvious there was a systemic crisis at hand. Yet the rapidity with which SVB collapsed was unprecedented. The Fed’s response to that rapidity was equivalent to Mario Draghi’s “whatever it takes” moment.

Social media-fueled events aren’t new in the financial system; meme stock inflation is one example. And assuming SVB’s collapse truly was a social media phenomenon, the threat was still at human scale: even if those messengers had a more powerful megaphone than the newspaper reporter of yore observing a queue of people outside a bank branch, it was a message propagated, consumed and acted upon by humans. Thing is, the next (or more accurately, the next after the next) threat will be AI-driven, the modern equivalent of the program trading that contributed to Black Monday in 1987. Imagine a deepfake providing the spark fueling adjustments by like-minded algorithms spanning every asset class imaginable.

As tech has become an increasingly potent economic force, it represents a bigger and bigger challenge to the financial system. To wit: eventually there will be a machine-scale threat to the financial system, and human regulators don’t have machine scale. As the saying goes, regulation exists to protect us from the last crisis - as in, regulations are codified well after the fact; the scale mismatch we’re likely to face implies a low tolerance for delay. The last line of defense is kill switches, and given the tightly coupled, interconnected, and digital nature of the modern financial system, orchestrating kill switches presents a machine-scale problem itself. The Fed, the Department of the Treasury, the OCC, the FDIC, the European Central Bank, and all the rest need new tools.

Let’s hope they don't build HAL.

Friday, March 31, 2023

Competency Lost

The captive corporate IT department was a relatively early adopter of Agile management practices, largely out of desperation. Years of expensive overshoots, canceled projects, and poor quality solutions gave IT not just a bad reputation, but a confrontational relationship with its host business. The bet on Agile was successful and, within a few years, the IT organization had transformed itself into a strong, reliable partner: transparency into spend, visibility into delivery, high-quality software, value for money.

Somewhere along the way, the “products not projects” mantra took root and, seeing this as a logical evolution, the captive IT function decided to transform itself again. The applications on the tech estate were redefined as products and assigned delivery teams, with Product Owners in the pivotal position of defining requirements and setting priorities. Product Owners were recruited from the ranks of the existing Business Analysts and Project Managers. Less senior BAs became Product Managers, while those Project Managers who did not become part of the Product organization were either staffed outside of IT or coached out of the company. The Program Management Office was disbanded in favor of a Product Portfolio Management Office with a Chief Product Officer (reporting to the CIO) recruited from the business. Iterations were abandoned in favor of Kanban and continuous deployment. Delivery management was devolved, with teams given the freedom to choose their own product and requirements management practices and tools. With capital cheap and cashflows strong, there was little pressure for cost containment across the business, although there was a large appetite for experimentation and exploration.

As job titles with "Product" became increasingly popular, people with work experience in the role became attractive hires - and deep-pocketed companies were willing to pay up for that experience. The first wave of Product Owners and Managers were lured away within a couple of years. Their replacements weren't quite as capable: what they possessed in knowledge of the mechanical process of product management they lacked in the fundamentals of Agile requirements definition. These new recruits also had an aversion to getting deeply intimate with the domain, preferring to work on "product strategy" rather than the details of product requirements. In practice, product teams were "long lived" in structure only, not in the institutional memory and capability that matter most.

It wasn't just the product team that suffered from depletion.

During the project management years of iterative delivery, something was delivered every two weeks by every team. In the product era, the assertion that "we deploy any time and all the time" masked the fact that little of substance ever got deployed. The logs indicated software was getting pushed, but more features remained toggled off than on. Products evolved, but only slowly.
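For the uninitiated, a feature toggle is what lets code ship to production dark: new functionality is wrapped in a flag that defaults to off. Here is a minimal sketch of the pattern in Python (the flag name, config file, and checkout functions are hypothetical illustrations, not anything from the team described here):

    import json

    def load_flags(path="flags.json"):
        """Read feature flags from a config file; a missing file means everything stays off."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def is_enabled(flags, name):
        # The safe default is off: code can be deployed continuously
        # without ever being exposed to customers.
        return flags.get(name, False)

    def new_checkout():     # hypothetical feature: deployed, but dark
        print("new checkout flow")

    def legacy_checkout():  # what customers actually see
        print("legacy checkout flow")

    flags = load_flags()
    (new_checkout if is_enabled(flags, "new_checkout_flow") else legacy_checkout)()

The deployment logs dutifully count every push; only the flag file determines whether customers ever see anything new. That is how "we deploy any time and all the time" and "products evolve only slowly" can both be true.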

Engineering discipline also waned. In the project management era, technical and functional quality were reported alongside burn-up charts. In the product regime, these all but disappeared. The assumption was that quality problems had been solved with Agile development practices, that quality was an internal concern of the team, and that it was primarily the responsibility of developers.

The hard-learned software delivery management practices simply evaporated. Backlog management, burn-up charts, financial (software investment) analysis and Agile governance practices had all been abandoned. Again, with money not being a limiting factor, research and learning were prioritized over financial returns.

There were other changes taking place. The host business had settled into a comfortable, slow-growth phase: provided it threw off enough cash flow to mollify investors, the executive team was under no real pressure. IT had gone from justifying every dollar of spend based on returns to being a provider of development capacity at an annual rate of spend. The definition of IT success had become self-referential: the number and frequency of product deployments and features developed, with occasional verbatim anecdotes that highlighted positive customer experiences. IT's self-directed OKRs were indicators of activity - increased engagement, less customer friction - but not rooted in business outcomes or business results.

The day came when an ambitious new President / COO won board approval to rationalize the family of legacy products into a single platform to fuel growth and squeeze out inefficiency. The board signed up provided they stayed within a capital budget, could be in market in less than 18 months, and could fully retire legacy products within 24 months, with bonuses indexed to every month they were early.

About a year in, it became clear delivery was well short of where it needed to be. Assurances that everything was on track were not backed up by facts. Lightweight analysis led to analysis work being borne by developers; lax engineering standards resulted in a codebase that required frequent, near-complete refactoring to respond to change; inconsistency in requirements management meant there was no way to measure progress, or change in scope, or total spend versus results; self-defined measures of success meant teams narrowed the definition of "complete", prioritizing the M (minimum) at the expense of the V (viable) to meet a delivery date.

* * *

The sharp rise of interest rates has made capital scarce again. Capital intensive activities like IT are under increased scrutiny. There is less appetite for IT engaging in research and discovery and a much greater emphasis on spend efficiency, delivery consistency, operating transparency and economic outcomes.

The tech organization that was once purpose built for these operating conditions may or may not be prepared to respond to these challenges again. The Agile practices geared for discovery and experimentation are not necessarily the Agile practices geared for consistency and financial management. Pursuing proficiency in new practices may also have come at the cost of proficiency in those previously mastered. Engineering excellence evaporates when it is deemed the exclusive purview of developers. Quality lapses when it is taken for granted. Delivery management skills disappear when tech's feet aren't held to the fire of cost, time and, above all, value. Domain knowledge disappears when it walks out the door; rebuilding it is next to impossible when requirements analysis skills are deprioritized or outright devalued.

The financial crisis of 2008 exposed a lot of companies as structurally misaligned for the new economic reality. As companies restructured in the wake of recession, so did their IT departments. Costly capital has tech in recession today. The longer this condition prevails, the more tech captives and tech companies will need to restructure to align to this new reality.

As most tech organizations have been down this path in recent memory, restructuring should be less of a challenge this time. In 2008, the tech playbook for the new reality was emerging and incomplete. The tech organization not only had to master unfamiliar fundamentals like continuous build, unit testing, cloud infrastructure and requirements expressed as Stories, but improvise to fill in the gaps the fundamentals of the time didn't cover, things like vendor management and large program management. Fifteen years on, tech finds itself in similar circumstances. Mastering the playbook this time round is regaining competency lost.

Tuesday, February 28, 2023

Shadow Work

Last month, Rana Foroohar argued in the FT that worker productivity is declining in no small part because of shadow work. Shadow work is unpaid work done in an economy. Historically, this referred to things like parenting and cleaning the house. The definition has expanded in recent years to include tasks that used to be done by other people that most of us now do for ourselves, largely through self-service technology, like banking and travel booking. There are no objective measures of how much shadow work there is in an economy, but the allegation in the FT article is that it is on the rise, largely because of all the fixing and correcting that the individual now must do on their own behalf.

There is a lot of truth to this. Some of the incremental shadow work is trivial, such as having to update profile information when an employer changes travel app provider. Some is tedious, such as when people must patiently work through the unhelpful layers of primitive chat bots to finally reach a knowledge worker to speak to. Some is time consuming, such as when caught in an irrops (irregular operations) travel situation and needing to rebook travel. And some is truly absurd, such as spending months navigating insurance companies and health care providers to get a medical claim paid. Although customer self-service flatters a service provider’s income statement, it wreaks havoc on the customer’s productivity and personal time.

But it is unfair to say that automated customer service has been a boon to business and a burden to the customer. Banking was more laborious and inconvenient for the customer when it could only be performed at a branch on the bank’s time. And it could take several rounds - and days - to get every last detail of one’s travel itinerary right when booking a business trip through a travel agent. Self-service has made things not just better, but far less labor intensive for the ultimate customer.

It is more accurate to say that any increase in shadow work borne by the customer is not really a phenomenon of the shift to customer self-service so much as it lays bare the shortcomings of providers that a large staff of knowledgeable customer service agents were able to gloss over.

First, a lot of companies force their customers to do business with them in the way the company operates, not in the way the customer prefers to do business. A retailer that requires its customers to place an order with a specific location rather than algorithmically routing the order for optimal fulfillment to the customer - e.g., for best availability, shortest time to arrival, lowest cost of transportation - forces the customer to navigate the company’s complexity in order to do business. Companies do this kind of thing all the time because they simply can’t imagine any other way of working.

Second, edge cases defy automation. Businesses with exposure to a lot of edge cases or an intolerance to them will shift burden to customers when they arise. The travel industry is highly vulnerable to weather and suffers greatly with extreme weather events. Airline apps have come a long way since they made their debut 15 years ago, but when weather disrupts air travel, the queues at customer service desks and phone lines get congested because there is a limit to the solutions that can be offered through an app.

Third, even the simplest of businesses in the most routine of industries frequently manage customer service as a cost to be avoided, if not outright blocked. A call center that is managed to minimize average call time as opposed to time to resolution is incentivized to direct the caller somewhere else or deflect them entirely rather than resolve the customer problem. No amount of self-service technology will compensate for a company ethos that treats the customer as the problem.

There is no doubt that shadow work has increased, but that increase has less to do with the proliferation of customer self-service and more to do with the limitations of its implementation and the provider’s attitude toward their customer.

Perhaps more important is what a company loses when it reduces the customer service it provides through its people: the ability to respond immediately and humanely to a customer in need, and the aggregate customer empathy that comes from direct contact. This loss makes it far more difficult for a company to nurture its next generation of knowledge workers to troubleshoot and resolve increasingly complex customer service situations.

But of greater concern is that as useful as automation is from a convenience and scale perspective, its proliferation drives home the point that customers are increasingly something to be harvested, not people with whom to establish relationships. Society loses something when services are proctored at machine rather than human scale. In this light, the erosion of individual productivity is relatively minor.

Tuesday, January 31, 2023

Relics

I recently came across a box of very old technology tucked away in my basement: PDAs, mobile phones, digital cameras and even a couple of old laptops, all over two decades old. It was an interesting find, if slightly disturbing to think this stuff has moved house a couple of times. Before disposing of something, I try to repurpose it if I can. That's hard to do with electronics once they're orphaned by their manufacturers. Still, electronics recycling wasn't as easy to do twenty years ago, so perhaps it is just as well that I held onto them until long after it became easy.

In addition to bringing back fond memories, finding this trove got me thinking about how rapidly mobile computing evolved. In the box from the basement were a couple of PDAs, one each by HP and Compaq; phones by Motorola, Nokia (including a 9210 Communicator) and Ericsson; and a digital video recorder by Canon. The Compaq brand has all but disappeared; the makers of two of the three phones exited the mobile phone business years ago; the Mini-DV technology of the camcorder was obsolete within a few years of its manufacture.

There were also a couple of laptops in the box, one each made by Compaq and Sony. The interesting thing about the laptops is how little the form factor has changed. My first laptop was a Zenith SuperSport 286. The basic design of the laptop computer hasn't changed much since the late 1980s (although mercifully they weigh less than 17 lbs). The Compaq and Sony laptops in that box from the basement are not materially different from the laptops of today: the Sony had a square screen and lots of different ports, where a modern laptop has a rectangular screen and a few USB ports.

The laptop, of course, replaced the luggable computer of the early 1980s made by the likes of Osborne and Kaypro and Compaq. The luggable was a statement for the era: what compels a person to haul around disk drives, CPU, keyboard and a small CRT? Maybe it was the free upper-body workout. The laptop was a quantum improvement in mobile computing.

But once that quantum improvement happened, the laptop became decidedly less exciting. As the rate of change of capabilities in the laptop slowed, getting a new laptop became less of an event and more of a pain in the ass. Not to mention that, just like the PDA and phone manufacturers mentioned above, the pioneers and early innovators didn’t survive long enough to reap the full benefits of the space maturing.

And the same phenomenon happened in the PDA/phone/camera space. The quantum leap was when these converged in the original iPhone. Since then, a new phone has become less and less of an event. Yes, just like laptops, they get incrementally better. Fortunately, migration via cloud makes upgrading less of a pain in the ass.

The transition from exciting to ordinary correlates to the utility value of technology in our lives: in personal productivity, entertainment, and increasingly as the primary (if not only) channel for doing things. There are, of course, several transformative technologies in their nascent stages. Somehow, I don’t think any are spawning the Zenith Data Systems and Compaqs of tomorrow, makers of a future relic that somebody someday will be slightly amused to find in a box in their basement.

Saturday, December 31, 2022

Reinvention Risk Trade

Southwest Airlines has made headlines in recent days for all the wrong reasons: bad weather impacted air travel, which required Southwest to adjust plane and crew schedules. Those adjusted schedules were often logistically flawed because the planes and crews matched at a specific place and time didn’t make sense in the real world. Making matters worse, those adjusted schedules had to be re-(and re- and re-)adjusted every time either the weather changed or operations changed (i.e., more flight cancellations), and both the weather and operations were changing throughout Southwest's route network. The culprit, according to people at Southwest quoted by the Wall Street Journal, was scheduling technology that could not sufficiently scale and is nearing end-of-life. Whether a problem of rapid growth or neglected investment, everybody seems to agree that Southwest has been living on borrowed time.

The neglect of core technology is an all too common practice at virtually every company: the technology becomes more complex than its foundational architecture was ever intended to support, the team familiar with the technology erodes through layoff and attrition, and as a result the technology becomes more vulnerable to failure. But it still works day in and day out, so there is no incentive to invest in repair or replacement.

Unfortunately, the vulnerability of an aging technology isn't a financial statement phenomenon; it is at best one risk mentioned among many in the 10-K. However, money spent on the labor to reduce that vulnerability is a financial statement phenomenon. Add to that the opportunity cost: every dollar spent on risk mitigation is a dollar that doesn't go toward a net new investment in the business, or a dollar that can't be returned to investors. While it doesn’t cost anything for a technology to fall into a state of disrepair, it sure costs a lot to rehabilitate it. In fact, neglect is not only free, it’s cash flow positive: the company can claim victory for streamlining tech spend.

But as mentioned above, neglect creates business risk. And risk is a peculiar thing.

There have been dozens of massive macroeconomic risks realized in the past 25 years - acts of terror, acts of war, financial crises, environmental disasters, viral pandemics - that have made a mockery of the most sophisticated of corporate risk models. Yet risk is still no better an investment proposition than it was a quarter of a century ago: investing to be prepared for "black swan" events (i.e., robustness) is still an uncommon practice (n.b. perhaps inventory build-up and multiple sourcing practices in response to supply chain disruption in recent years will change this, but it remains to be seen how durable this turns out to be). And anyway, dilapidated internal systems are self-inflicted exposures: even if they can talk about such risks publicly, CEOs aren't paid for their acumen at developing and executing remediation strategies. Plus, just about every company will accept exposure to technology risk as business as usual. Business is risk. If a company spent to mitigate every last risk, it would be wildly unprofitable. There's an amount the company budgets annually for maintaining the status quo and every now and again the company will try staffing some up-and-coming manager or hire some hotshot consultants to figure out a way to make things a little less bad. This is great, but it amounts to pennies spent mitigating very large dollar amounts of exposure. In other words, hope is all too often the insurance policy against having a huge hole blown in the income statement by the failure of a high-risk technology.

While risk is generally not an investible proposition for technology (unless business operations are being wildly disrupted because of it, such as is happening to Southwest this week), sometimes there is a golden ticket that promises to make the risk simply go away, such as when a company has a legitimate case to make that it can reposition itself as an ecosystem if only it were built on a cloud-based platform. With consistent cash flows and an existing - and under-leveraged - network of partners, the right leader can motivate investors to pony up to make a wholesale replacement of existing technology. It's a growth story with a side order of risk mitigation through modernization. And with the appropriate supporting data, this is an attractive proposition to risk capital.

Investible, yes, especially since it is more than just an investment that makes the business less bad than it would otherwise be. But the headline doesn't tell the whole story. Switching from one technology to another is not a trade of one set of business parameters (the company's current business and operating model) for another (the company's future business and operating model). It is more accurately a trade of risk profiles: exposure to a current technology (the tech and operations supporting current cash flows) versus exposure to aspirational technology (the tech and operations supporting aspirational cash flows).

The magnitude of the technology risk between the two is really no different. It is, optimistically, an exchange of current system sustainability risk for the combination of development risk and future system sustainability risk. System fragility and key person risk may make the status quo highly unattractive, but software development has a long track record of cost overruns and failure. In practice, of course, development risk and current system sustainability risk are carried at the same time, and current system risk may be carried for a very long time if it proves difficult to fully retire some legacy components. The true exposure is therefore far more complex than current versus future technology. In practical terms, this means that just because “reinventing the business” makes legacy modernization more palatable to investors doesn’t mean it offers the business a safe way out of technology risk.

It bears mentioning that a business electing to mitigate existing technology risk through reinvention is taking on a new set of challenges, especially if that company has not made such an investment in recent years. It must be ready to deal with contemporary software delivery patterns and practices that are much different from those of even a decade ago. It must know how to avoid the common mistakes that plague replatforming initiatives. It must be prepared to deal with knowledge asymmetry vis-a-vis vendor partners. It must know how to set the expectation for transparency in the form of working software, not progress against plans. And it must be prepared to practice an activist form of governance - not the bullshit spewed by vendors passed off as governance - to make those investments a success.

Reinvention promises freedom from the shackles of the status quo, but while going about that reinvention, exposure to technology risk vastly increases and stays at an elevated level for a long period of time. The future awaits the replatformed business, but do be careful what you get investors to agree to let you sign up to deliver.

Wednesday, November 30, 2022

You should...

Our favorite craft brewer has a tap room. They never have more than a dozen beers on tap. They only serve their own brewed beers, never anything sourced from another producer. They have only marginal amounts of product distribution; for all intents and purposes, they sell only through their tap room. While they’ll fill a growler or crowler, they do not keep inventory in cans, only kegs. They turn over 2/3rds - maybe it’s 3/4ths, maybe it’s 7/8ths - of their taps seasonally, where a season might be as short as a month or as long as half a year, depending on the beer. They have a flat screen but never broadcast sports or politics, only streamed images of nature or trains or the like. They stream their own custom audio playlist to provide ambient noise.

They run the business this way because this is the business they want to run. They have direct access to 99.9% of their customers (not 100%: once it leaves the premises, the contents of a crowler could end up in anybody’s stomach…). They’re not committed to provide beer to other businesses on any kind of a product mix by volume, let alone date delivery schedule. They get to experiment with product, constantly. They don’t make what they sell, they sell what they make.

On any given day in the taproom, a customer will give them advice, a sentence that always begins with the words “you should.” Such as, “you should distribute this and that beer to these bars in Madison and Milwaukee - you’d sell 20x as much as you do in a single tap room.” Or, “you should have a small electric oven and sell food.” Or “you should have dozens of TVs with football and this place would be packed on weekends.”

They are every bit as good at customer interaction as they are with making and serving beer. They listen patiently, smile, and reply with “thank you, we’ll think about that.”

* * *

The software business has long been intertwined with management consulting to one degree or another. Decades ago, tech automated tasks that changed long standing business processes; management was fascinated as this made businesses more efficient. The dot-com era (followed by mobile, and shortly thereafter by social media) ushered in changes in corporate ↔ customer and corporate → employee interactions. The contemporary tech landscape (cloud, AI, distributed ledger tech) - and not for the first time in the history of tech - promises to “reinvent the business.” ‘Twas ever thus: tech has long been, and has long been sought out as, a source of business advice.

On the whole, tech is not a source of bad advice. When tech gets close to a problem space, it brings a different and generally value-generative solution. Why do that work manually when we can easily automate and orchestrate that? Why have this customer talk to that salesperson when the customer can do that for themselves 99% of the time? Why have people churn through that data when a machine can learn the patterns?

But sometimes, advice from tech is truly value destructive.

I wrote about this some years ago, but standing next to me in the queue for a flight out of Dallas were a couple of logistics consultants lamenting the fact that a client had taken a tech consultancy’s advice and prioritized flexibility over volume in their distribution strategy. It sounds great in a windowless conference room: why let restaurants (who are 80% of the clientele) run out of branzino before the night is over? You should run a fleet of small delivery trucks to top up their stock of branzino for the night in near real time. Except, the distribution cost for a few branzino to that restaurant - even if we put it in a small truck with a few packages of great northern beans for the restaurant down the street and some basmati for a restaurant a few blocks away - is bloody expensive. The economics of distribution are based on volume, not flexibility. That restaurant will have to put a lot of adjectives in front of the noun to justify the cost of limited-supply branzino on a Tuesday in November. ’Tis far more economically efficient for the waitstaff to push the red snapper when the branzino runs out.

Another time, I was working with a manufacturer of very large equipment. The manufacturer sold through a dealer network. Dealers are given guidance from the manufacturer’s sales forecasting division as to the volume of each type of machine they should expect to sell in the next two years, by quarter. Dealers order machines with that guidance as an input (their balance sheet being another input), and over the course of time dealer orders get routed to a manufacturing plant and from there to the dealer sales lot. The tech people couldn’t grok this. Manufacturing something without a specific end customer? You should have just-in-time manufacturing, so a customer order goes directly to a manufacturing facility. That way there is no finished goods inventory collecting dust on a dealer lot and the component supply chain can be somehow further optimized. Except, that exposes the manufacturer to demand swings. As it is, the manufacturer has hundreds of dealer P&Ls to which it can export its own P&L. They’ll build give or take 250,000 units of this model, and give or take 160,000 units of that model, and give or take 90,000 units of that other model, and 000,000s of all those other models, year in and year out, with minor modifications in major product cycles in an industry regulated by, among other things, emission standards. That’s a lot of machine volume, especially when there are dozens and dozens of models of tens of thousands of unit volume. The manufacturer has a captive dealer network that will buy 100% of what the manufacturer produces. The dealer network acts as a buffer on the manufacturer’s P&L: while the good years may not be maximally great for the manufacturer, the bad years aren’t too terribly bad, let alone event horizons on the income statement. That, in turn, creates consistency of cash flows for the manufacturer, which investors reward with a high credit rating, which makes debt more easily serviceable, which leaves money to reward equity holders. Just-in-time manufacturing exposes the manufacturer to end-customer market volatility, which would require a substantial change in capital structure, which would penalize both equity and debt holders. Markets go up, but markets also go down: minimizing the downside was of more value than maximizing the upside. Tech has known these swings (anyone remember the home computer revolution?), but the commercial landscape is so destructive that there is a lack of institutional memory.

There was the insurance company implementing a workflow management system for automating policy renewal. Although insurance data is highly structured, there are a lot of rules and conditions on the rules governing renewal, spanning the micro (e.g., geographic location in a city and number of employees) and macro (discounting and payout rules in the event a customer has a property & casualty policy as well as an umbrella policy, as opposed to just a property & casualty policy). There are a lot of policy renewal rules that go very deep into the very edge cases of the edge cases (e.g., a policy that renews on February 29). Well, the boss wants this policy automation thing done quickly, because we have a great story to share with investors that we’ve reduced the labor intensity of policy renewal. Along comes a tech vendor with a compelling suggestion: insurance company, you should incentivize your process automation vendor by rewarding them for the shortest time to development of each codified rule. (The operative word here is development, which is not the same as production delivery: delivery was deemed out of the control of the development partner.) Except, the contract the insurance company signed indexed cash payable to the vendor for development complete of each rule. Within three months, the vendor had tapped out over 80% of the cash for software development, yet each rule that was dev complete had on average over five severity-1 defects associated with it and was therefore unsuitable for deployment. Worse still, one third of those defects were blocking, meaning there were countless other defects to discover once the blocking defect was removed.

Then there is the purely speculative pontification. I wrote three years ago that management consultants love to advise customers to get into the disruption game. Consider what was happening in home meals and transportation and the like 5 years ago: this is coming for your industry, so you better get in the game. To wit: hey financial services firm, you should invest in developing your own line of disruptive fintech. Except, in practice it turned out to be far more prudent for incumbents to colonize startup firms by placing people on startup firms’ boards and then co-opt them to the credit cycle through greenmail policies. The latter strategy was a hell of a lot cheaper than the former. And those home-meal- and food-delivery tech firms who were the reference implementation for disruption? They ended up disrupting one another, more than they disrupted the incumbents. Come to think of it, the winning strategy was that of the wise fighting fish in the movie From Russia With Love: the stupid ones fight; the exhausted winner of that fight is easy prey for the smart fighting fish who sat out the fight and waited patiently. (Note to self: this is two consecutive months that I’ve used FRWL as an analogy, I really need to diversify my analogies. That said, Eric Pohlmann’s voiceover is truly underrated in cinematic history.)

This is, arguably, playing out today as auto manufacturers pull back from autonomous vehicle investments. Hey automotive firm, you should invest in autonomous vehicle delivery because it will totally disrupt the industry. Except, it’s proving to be much further away from reality than thought. It was great as long as delivery expectations were low and valuations were high. It isn’t so lucrative now.

Obviously, all advice has to meet a company where it’s at. Generic assertions of impending tech disruption in a well-established industry crater instantly (even faster than crypto during a bank run) when they meet incumbent economic dynamics. People (especially long term employees) resist operational change; debt cycles outright crush those changes. Not meeting a company where it’s at renders the advisor a curious (and at best mildly amusing) pontificator.

At the same time, advice also has to meet the industry that the consumer of the advice is in where it’s at. That’s not so easy when the advisor can only think transactionally. “Digital disruption” and “omnichannel” are, thankfully, out of favor now. They were ignorant of the industry dynamics at play, as mentioned earlier: co-opt the disruptor to the prevailing industry trends and the aspirant tech cycle is subservient to the credit cycle. It is (if ironically in evolutionary terms) well captured by Opus the penguin’s response to the allegedly inevitable.

* * *

One thing about being in the advisory space is that at a micro level, just about every firm has something - many somethings - unique to offer. (The caveat “just about” is intentional: it’s just about, but not all: as Mojo Nixon pointed out, Elvis is everywhere, but not in everyone.) “You should” advice that does not reflect that uniqueness - the expression of the company itself - is bound to fall flat. Yes, macro trends matter, but start with the business itself. If the people in that business know who they are and who they are not, you’ve got a great place to start. And if they don’t, the most Hyde Park Corner prophet of “disruption” isn’t going to hold an audience for long.

* * *

In the interest of full disclosure, we have, as you might well expect, been sources of what we deem brilliant “you should” advice to the aforementioned craft brewer. You should:

  1. Have a beer that incorporates cough syrup as an ingredient, a beer version of a Flaming Moe.
  2. Let me put my head underneath the taps like Barney Gumble when Moe isn’t around.
  3. Have drone delivery of your beer. Because drones.
  4. Have a trap door you can open that drops egregious “you should” pontificators into a pool of hungry alligators.

We’ve been assured that the proprietors are giving serious thought to every one of these.

Monday, October 31, 2022

Strategy

A few months ago I was asked to review a product strategy a team had put together. I had to give them the unfortunate feedback that what they had created was a document with a lot of words, but those words did not articulate a strategy.

There is a formula for articulating strategy. In his book Good Strategy, Bad Strategy, Richard Rumelt puts forward the three essential elements of a strategy. It must:

  1. Identify a need or opportunity (the why)
  2. Offer a guiding policy for responding to the need or opportunity (the what)
  3. Define concrete actions for executing the policy (the how)

There’s more to it, of course. The need or opportunity has to be well structured and specific. The guiding policy must be focused on the leverage that a company can uniquely bring to bear (this is effectively the who that a company is) as well as anticipate the reaction of other market participants. The actions must be, well, actionable.

What we see too often passed off as strategy are goals (“grow the business by xx% in the next y years” is a goal, not a strategy); vision statements (“we want to be the premier provider of aquatic taxidermy products” is a lofty if vain ambition); or statements that are effectively guiding policies (“to be the one stop shop for all of our customer’s aquatic taxidermy needs”) without the need (why) articulated or actions (how) defined.

I’ve seen the aftermath of a number of failed strategic planning initiatives. Each time, the initiatives failed to articulate at least two, and sometimes all three, of the aforementioned elements that compose a strategy. The postmortems to understand why these initiatives failed exposed a few consistent patterns.

One pattern is that the people involved in the strategic thinking did not truly come to grips with what is actually going on in a company’s environment. To understand “what’s going on” requires collating the relevant facts (internal and external) into a cohesive analysis. That, in turn, requires a great deal of situational awareness: an honest assessment of a company’s capabilities, a high degree of customer empathy, and a fair bit of macroeconomic understanding. It also requires a sense of timeliness: not so immediate as to be just a tactical assessment (your competitors are easier to do business with through digital channels than you are), not so far in the future as to be purely speculative (ambient computing). All too often, the definition of the opportunity is derived - in many cases, copied verbatim - from some other source, such as an analyst report, somebody else’s PowerPoint making the rounds inside the company, or the company’s most recent annual report. Or it is a truism (the world of aquatic taxidermy is going digital). Or worse still, it is a tautology (customers will buy aquatic taxidermy products through digital channels and from physical store locations from specialist retailers and general merchandisers).

Defining the opportunity through a thorough understanding of what’s going on is hard. It’s also awkward, an exercise of blindfolded people describing the pachyderm. And that’s ok. It takes several iterations, it requires diversity of participants, and while there will be many moments when the activity feels like churn (and not the kind of churn that yields butter or ice cream), it is worth the investment of time. The “what’s going on” is, arguably, the most important thing in formulating a strategy. If the “what’s going on” is wrong, the opportunity isn’t clear, and as a result the most eloquent guiding policy and the most definitive of actions will not solve the right problem. By way of analogy, directional North Stars are great, but in the field we still largely navigate by compass. A compass is low tech. It works through attraction to a magnetic field that serves as a close enough proxy to true north, which we correct with declination. As Dr. John Kay showed, the most successful companies navigate by muddling through.

Another pattern: whereas the would-be strategic thinkers spend comparatively little time defining the opportunity, they are obsessed with formulating the equivalent of the guiding policies. Some of this is likely a function of professional programming: if, for the totality of your career, the boss has supplied you with the reason why you do the things that you do, it isn’t natural to start a new initiative by asking “why”. Just the opposite. But the biggest reason for focusing on the guiding policies is that the strategic thinkers believe they are being paid to come up with clever statements of what a company should do. No surprise that strategic planning exercises tend to produce a lot of “what to do” options, which they present as a portfolio of strategic opportunities. Yes, the portfolio passes the volume test applied to any strategic planning initiative: too few slides suggests the team just faffed about for several weeks. So what we get is a shotgun blast of strategy: dozens of “what to do” options, only some (not many, let alone all) of which are complementary to one another. Plenty of things to try, but they’re just that: things to try. They don’t converge at cohesive interim states where the company is poised to engage in a next level of exploitation of an opportunity or need, exploitation that is amplified through development of the unique capabilities the company brought to the table in the first place. This is not a strategy as much as it is a task list of very coarsely grained things to maybe do, at some point, and see what happens.

The fear of not having a sufficient quantity of clever “whats” is understandable, but misplaced. ‘Tis better to have a few very powerful statements of “why” that tell the executive leadership team and the board very concrete things they do not know about their company or market, with very strong statements of “what” to do about them.

The third pattern contributing to strategic planning failure is the aversion to defining the concrete actions necessary to operationalize a strategy. As damaging as getting the why wrong is to the validity of the what, glossing over or ignoring the how renders a guiding policy into a fairy tale. Figuring out the how is, for a lot of people, the least attractive part of strategy formulation: it requires coming face to face with the organizational headwinds - the learned helplessness, the dearth of domain knowledge, the resistance to change - that characterize legacy organizations. Operationalization - especially in an environment with decades old legacy systems compounded on top of one another - is where great ideas go to die: we could never do that here, you don’t know the history, it doesn’t work like that, and so forth. Yet a strategy without a clear path of execution is just a theory. No company has the luxury of not starting from where it is today. Strategy has to meet a company where it is at. This isn't big up-front design; it's just the first iteration of the end-to-end to establish that execution is in fact plausible, supplemented with a now / next / later to define a plausible path of evolution.

The aversion to defining execution of a proposed strategy stems from at least two sources. One is the tedium of deep diving into operational systems to figure out what is possible and what is not, and then turning the tables to interrogate in detail the things we can do, changing the question from “why we can’t” into “why we can”. But the more compelling reason I’ve observed that strategic thinkers avoid detailing execution is the fear that a single ground truth could undo the brilliance of a strategy. Strategy is immutably perfect in the vacuum of a windowless conference room. It doesn’t do so well once it makes first contact with reality. And that is the real world problem for the person academically defining strategy in the absence of execution: when given a choice, a company will always choose as Ernst Stavro Blofeld did in the movie version of From Russia With Love: although Kronsteen’s plan may very well have been perfect, it was Klebb who, despite execution failure (engineered through improvisation by James Bond), was the only person who could achieve the intended objective. Strategy doesn't achieve outcomes. Delivery does.

I’ve worked with a number of people who insist they no longer wish to work in execution or delivery roles, only strategy. Living in an abstract world detached from operational constraints is great, but abstract worlds don’t generate cash flow for operating companies. The division of strategy and delivery is a professional paradox: if you do not wish to work in delivery, by definition you cannot work in strategy.

Strategy is genuinely hard. It isn’t hard because it bridges the gap between what a company is today and what it hopes to be in the future (the what). It’s hard because good strategy clearly defines what a company is and is not today (the who), what the opportunities are and are not for it in the future (the why), and the actionable steps it can take to making that future a reality (the how), orchestrated via compelling guiding policies (the what).

Successful business execution is difficult. Successful business strategy is even more difficult. If you want to work in strategy, you better know what it is you're signing up for.