I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Wednesday, April 30, 2025

Consulting is episodic. That works better for consultants than consulting companies.

Consulting is an episodic line of work. That’s great for the individual consultant because of the sheer variety it provides: different industries, companies, people, jobs and problems to solve. You get to see not only how many unique ways companies do similar things, but (if you’re paying attention) you’ll understand why. Plus, every problem calls for different sets of knowledge and skills, meaning you’ll get the chance to learn as much as you already know.

The episodic model has been good for me. It’s given me the opportunity to work in a wide range of industries: commercial insurance; investment and retail banking; wealth management and proprietary trading; commercial leasing; heavy equipment manufacturing; philanthropy. I’ve been able to work on a wide range of problems, from order to cash, shop floor to trading floor, new machines to service parts to used machines and more service parts, raw materials purchasing to finished goods shipping, underwriting to claim filing, credit formation to fraud prevention. Plus, I’ve been able to solve for multiple layers of a single stage of a solution (the experience, the technical architecture, the accounting transactions, the code) as well as the problems unique to different stages (the business case, the development of the solution, the rescue, the rescue of the rescuers, the acceptance testing and certification, the migration and cutover, the cleanup, the crisis room after a bumpy rollout, the celebration for pulling off the implementation everybody said couldn’t be done.)

(Yes, that’s a lot of run-on sentences. It’s been a run-on career.)

At some point in time, with a company you’ve never worked with in an industry you’ve never been in before, you recognize classes of need when nobody else does. You see things differently. I’d like to believe this is the benefit of working with an experienced consultant.

The episodic nature of the work provides meaningful variety for the consultant provided the consultant does not get staffed in the same role to perform the same task over and over again. Variability of experience evolves the individual; specialization stunts career development. While a person can become valuable as the go-to person for a specific need at a particular moment in time, needs, like fashions, come and go. To wit: ISO 9001 compliance consultants, SAP implementation project managers and Agile coaches don’t command the hourly rates they once did. To the extent that those things have value today, the knowledge of the nouns is more valuable than knowledge of the verbs.

There are career risks and downsides to being in an episodic profession. Consultant labor is part of some other company’s secondary workforce: when economic times get tough, it’s easier for the buyers to cull the consultants than badged employees. Even when the economy is humming along, the contract will eventually run its course and it’s time to move on. The consultant can’t talk publicly about the companies they’ve done business with and must be judicious in anything they do share. And for every company with the toxic culture and petty employees you tolerate because you know you won’t have to work with them for long, there’s a company with a healthy culture and great people you wish you could work with for the rest of your career. But that’s not an option, because you’re valuable to any one client for what you learn by working with many, many others.

Still, on the whole, it’s a great line of work.

* * *

The episodic nature of consulting is great for consulting companies when there is more demand for services than there is supply. This is especially true when that demand is ambitious in nature: the proliferation of affordable personal computing in the 1980s, the commercialization of the internet in the 1990s, the mass adoption of mobile computing in the 2000s, the expansion of cloud computing in the 2010s. Big technology changes meant big spending, and wave after wave didn’t just fuel growth of services, it smoothed out what would otherwise have been more volatile boom and bust cycles for technology services firms.

When the spending pullback comes, consulting companies don’t much like the episodic nature of their business model. The income statement lurches and contracts. A surprise pullback by two or three clients leaves consultant labor idle. A surge in proposals expected to close but not yet closed creates uncertainty as to if, when, and how many new people to hire. Uncertainty frustrates investors and stresses out managers.

Recurring revenue, on the other hand, means predictability in cash flows, which means fewer ulcers and less frequent boardroom rants. In consulting, recurring revenue comes from the more mundane aspects of technology, things like long term care and feeding of digital infrastructure and legacy software assets. (Software development consultancies have tried to use the “products not projects” mantra to change buying patterns to no avail: it remains episodic.)

Repeatable revenue comes from repeating tasks. While there are episodic events in the delivery of those repeating tasks - incremental improvements to deployment scripts and production monitoring - there is considerably less variability in the nature of the work itself. Repeatability industrializes the work, and with it the workforce. Where labor and employer share the risks of the episodic nature of project-based consulting, labor carries the bulk of the risk in the recurring-revenue model: the incremental improvements that reduce labor intensity; the codification of tasks that enables the work to shift from a low cost labor market to a lower cost labor market, and to a still lower cost labor market after that.

* * *

The software services industry hopes that AI is the next incarnation of demand for consultant labor. As I wrote last month, AI has not yet proven to be a boon to consulting firms in the same way that cloud, mobile computing, internet and personal computing were.

At the same time, there is a lot of hand wringing over the damage AI has the potential to do to employee development. If AI is ever truly capable of replacing swaths of software developers, the skills development pipeline will thin out considerably. If it is “I do and I learn” as the old saw goes, then there is reason to be concerned with “AI does it for me and I’m none the wiser”. (Again, please do not misunderstand this as a statement of imminent doom for fundamental skill acquisition among junior consultants. Maybe someday, certainly not today.)

But I can’t help but think a preference for recurring over episodic work will have a bigger impact on the development - more accurately, the impairment of the development - of future knowledge workers in consulting firms. Business models built around specialization of knowledge curtail the variety to which people are exposed. As I wrote above, it isn’t just exposure to a variety of different projects, it’s a variety of different solutions in different stages of their evolution to different problems in different businesses. (Jim Collins missed the mark quite badly in that regard. As, of course, did so many among the pantheon of his anointed “greats”.)

* * *

This preference for recurring over episodic income streams is playing out in a lot of industries, including automotive and high tech manufacturing. This portends a curtailment of product innovation and with it, the advancement of capabilities: while a manufacturer may squeeze more functionality out of deployed hardware through services, it will be functionality of convenience more than capability. Simply put, the OEM uses the incumbency it has (through e.g., high switching costs) with captive customers to extract more rent for the solutions it engineered in the past. Loyalty isn’t earned as much as it is exploited.

In consulting, a preference for the recurring over the episodic portends smoother cash flows but a depletion - potentially quite rapid - of competency in its workforce. A consulting firm is a services firm, but a services firm is not necessarily a consulting firm. A services firm may be a supplier of specialized labor, but a specialist and an expert are very different things. A services firm incubates specialists with training and certifications; a consulting firm incubates experts with exposure to diverse clients, projects and responsibilities. The two can cohabit, but a services firm will have a dominant ethos: provider of utility services or provider of value-generative services. The former supplies specialists to fill roles; the latter supplies experts to frame opportunities and develop solutions.

Companies in growth industries grow by gambling on their smarts. Companies in ex-growth industries grow by extracting more rents from captive customers. A consultancy pursuing rents no longer believes in, and perhaps no longer has, its smarts.

Monday, March 31, 2025

Tech services firms are aggressively applying AI in delivery. They aren’t ready for the consequences of cannibalizing their business model.

Technology services firms are going heavy on AI in delivery of services. This is motivated by need: services have been a tough market for a couple of years now, and AI is one of the few things every potential client is interested in. But it’s been difficult for services firms to get a lot of AI gigs. It’s a crowded field with not enough case studies to go around; this makes it difficult for potential customers to justify renting consulting labor when the starting point with their own staff is no different. Not to mention, most companies regard their AI investments as core intellectual property: they want the skills and knowledge that create it indigenous to their payroll.

Tech services firms are instead applying AI to their own offerings to do things like accelerate the reverse-engineering of existing code and expedite forward engineering of new solutions. Those firms are developing proprietary products to do this (e.g., McKinsey last week posted a blog touting their proprietary AI platform for “rejuvenating legacy infrastructure”). The value prop is that by using AI tools, solution development takes less time and therefore costs less money and presents less risk.

This has ramifications for the consulting business model. The “big up front design” phase that lasts for months (if not quarters) is going to be a tough sell when the AI brought to bear is touted as a major time saver: in the minds of the buyers, either the machines speed things up or they don’t. But the real problem here isn’t just erosion of services revenue, but something far more elemental: bulge bracket consulting firms use that big up front design phase to train their employees in the basics of business on the customer’s dime. Not the customer’s business. The basics of business. A lot of workshops during that up front design time cover Principles of Financial Accounting I for the benefit of inexperienced staff.

(Before scoffing at that notion, if you’ve ever worked in order management, think about the number of consultants who had no grasp of order-to-cash; who did not understand the relationship, let alone the difference, between sales orders and invoices; who did not understand the relationship of payments to invoices; who did not understand RMA. This is not “learning the customer’s business.” This is learning business. I could go on.)

And, of course, AI tools are accessible to anybody - not just to people in tech. This means that anybody can compose a prompt and get a response. To get value from the tools requires that the consumer be able to adjudicate what the tools produce. Even the most thoroughly articulated prompt is prone to yielding a GenAI response that is syntactically correct but does not actually work. That statement doesn’t apply just to code. Humans make solution design mistakes all the time; any synthetically produced response will require human validation. Adjudication requires expertise and first hand knowledge, and not just of the tech but of the problem space itself.
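
To make this concrete, consider a minimal, invented illustration (the function names and the order-to-cash rule here are mine, not from any engagement): output that parses cleanly but fails domain scrutiny.

    # Hypothetical sketch: an AI-generated function that is syntactically
    # correct but domain-wrong, and the domain-grounded check a human
    # adjudicator would apply.

    def generated_open_balance(invoice_total, payments):
        # Plausible GenAI output: runs fine, but treats an overpayment as
        # a negative receivable instead of a customer credit.
        return invoice_total - sum(payments)

    def adjudicated_open_balance(invoice_total, payments):
        # What someone who understands invoices and payments expects: a
        # paid-in-full or overpaid invoice has no open balance.
        return max(invoice_total - sum(payments), 0.0)

    # The generated code survives the happy path...
    assert generated_open_balance(100.0, [40.0, 60.0]) == 0.0
    # ...and only domain knowledge prompts the overpayment case that exposes it.
    assert generated_open_balance(100.0, [40.0, 80.0]) == -20.0  # wrong in the domain
    assert adjudicated_open_balance(100.0, [40.0, 80.0]) == 0.0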

AI tools make verbs like solution definition, construction, and deployment less inaccessible to non-technology people. The more that progresses, the less valuable the knowledge in those areas becomes because it reduces knowledge asymmetry between tech person and non-tech person. At the same time, as long as AI generated responses must be verified and validated, there will be a premium on adjudication.

Please understand what I am not saying here. I am not saying AI has or is about to make those verbs highly accessible to non-technology people; I am saying AI - specifically, GenAI - has made those verbs less inaccessible to non-technology people. I am also not saying AI generated responses are instantly production ready; I am saying the opposite.

The net effect is a disruption of the traditional tech-business relationship. Tech labor is at a disadvantage from this disruption.

Giving AI tools to people who are unable to assess fitness for purpose of the output those tools produce will not increase labor productivity. A product manager who is not knowledgeable in the domain cannot exercise the one decision right a product manager is required to exercise: prioritization. Similarly, a developer or QA engineer who doesn’t understand the domain can only certify code as far as “nobody can tell me it isn’t right”. The more AI tools are used to produce something - requirements, technical architecture, code, tests - the more important the human evaluation of that output becomes. Those who cannot evaluate the fitness for purpose of what their AI tools produce will be reduced to implementers and stage managers for those who can.

Stage management does not command a premium rate.

Tech services firms will eventually cannibalize the consulting business model with AI. Well, perhaps it’s more accurate that AI will eventually cannibalize the consulting business model of tech services firms. To avoid being caught out in a market shakeout, services firms have to do two things. One is to no longer rely on knowledge of technology and get much, much more in-depth in their knowledge of business, and in particular the knowledge of their customer’s business. The second is, armed with that knowledge, to define entirely new categories of technology-informed services to sell.

Tech services firms are in an AI arms race in a war for contracts. It’s not obvious they’re preparing to win the peace that follows.

Friday, February 28, 2025

American manufacturers have forgotten Deming's principles. Their software teams never learned them.

American manufacturing struggled with quality problems in the 1970s and 1980s. Manufacturers got their house in order with the help of W. Edwards Deming, applying statistical quality control and total quality management throughout the manufacturing process, from raw materials, to work in process, to finished goods. The quality of American made products improved significantly as a result.

Once again, American manufacturing is struggling with quality problems, from Seattle’s planes to Detroit’s cars. But the products are a little different this time. Not only are manufactured products physically more complex than they were 50 years ago, they consist of both physical and digital components. Making matters worse, quality processes have not kept pace with the increase in complexity. Product testing practices haven’t changed much in thirty years; software testing practices haven’t changed much in twenty.

And then, of course, there are “practices” and “what is practiced.”

The benefit of having specifications, whether for the threading of fasteners or scalability of a method, is that they enable components to be individually tested to see whether they meet expectations. The benefit of automating inspection is that it gives a continuous picture of the current state of quality of the things coming in, the things in-flight, and the things going out. Automated tests provide both of these things in software.

If tests are coded before their corresponding methods are coded, there is a good chance that the tests encode the expectations of the code, and that the code is constructed in such a way that fulfillment of those expectations is visible and quantifiable. Provided the outcome - the desired technical and functional behavior - is achieved, the code is within the expected tolerance, and the properties the code needs to satisfy can be confirmed.
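
A minimal test-first sketch of what that looks like (the discount rule and names are invented for illustration):

    # Test-first: the expectation is written down before the method exists,
    # so the implementation is shaped to make fulfillment of the
    # expectation visible and checkable.
    import unittest

    def volume_discount(order_total):
        # Written to satisfy the spec below: 10% off orders of 1,000.00 or
        # more, otherwise no discount.
        return round(order_total * 0.10, 2) if order_total >= 1000.00 else 0.00

    class VolumeDiscountSpec(unittest.TestCase):
        # These tests came first; they are the tolerance the code must meet.
        def test_no_discount_below_threshold(self):
            self.assertEqual(volume_discount(999.99), 0.00)

        def test_discount_at_threshold(self):
            self.assertEqual(volume_discount(1000.00), 100.00)

    if __name__ == "__main__":
        unittest.main()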

All too often, tests are written after the fact, which leads to “best endeavors” testing of the software as constructed. Yes, those tests will catch some technical errors, particularly as the code changes over time, but (a) tests composed after the fact can only test for specific characteristics to the extent to which the code itself is testable; and (b) it relegates testing to an exactness of implementation (a standard of acceptance that physical products grew out of in the 19th century).

Another way to look at it is, code can satisfy tests written ex post facto, but all that indicates is the code still works in the way it was originally composed to the extent to which the code exposes the relevant properties. This is not the same as indicating how well the code does what it is expected to do. That’s a pretty big gap in test fidelity.

It’s also a pretty big indicator that measurement of quality is valued over executing with quality.

Quality in practice goes downhill quickly from here. Tests that do nothing except increase the number of automated tests. Tests with inherent dependencies that produce Type I and Type II errors. Tests skipped or ignored. It’s just testing theater when it is the tests rather than the code being exercised, as in, “the tests passed (or failed)” rather than “the code passed (or failed)”.
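
An invented example of what that theater looks like in code:

    # Testing theater: the suite count rises, the checkmarks stay green,
    # and no production behavior is exercised. (All examples invented.)
    import unittest
    from unittest import mock

    class TheaterTests(unittest.TestCase):
        def test_tautology(self):
            # Always passes; asserts nothing about the code under test.
            self.assertTrue(True)

        def test_exercises_the_mock_not_the_code(self):
            # The system under test is replaced wholesale, so the assertion
            # verifies the mock's canned answer: the tests pass, not the code.
            pricing = mock.Mock()
            pricing.quote.return_value = 100.00
            self.assertEqual(pricing.quote("SKU-123"), 100.00)

        @unittest.skip("flaky - 'temporarily' disabled long ago")
        def test_end_to_end_quote(self):
            ...

    if __name__ == "__main__":
        unittest.main()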

Automated testing is used as a proxy for the presence of quality. But a large number of automated tests is not an indicator of a culture of quality if what was coded and not why it was coded is what gets exercised in tests. When it is the former, there will always be a large, late-stage, labor intensive QA process to see whether or not the software does what it is supposed to do, in a vain attempt to inspect in just enough quality.

Automated test assets are treated as a measurable indicator of quality when they should be treated as evidence that quality is built in. Software quality will never level up until it figures this out.

Friday, January 31, 2025

We knew what didn’t work in software development a generation ago. Those practices are still common today.

In the not too distant past, software development was notorious for taking too long, costing too much, and having a high rate of cancellation. Companies spent months doing up-front analysis and design before a line of code was written. The entire solution was coded before being deemed ready to test. Users only got a glimpse of the software shortly before the go-live date. No surprise that there were a lot of surprises, all of them unpleasant: a shadow “component integration testing” phase, an extraordinarily lengthy testing phase, and extensive changes to overcome user rejection. Making the activities of software development continuous rather than discrete, collaborative rather than linear, and automated rather than repeated meant more transparency, less time wasted, and less time taken to create useful solutions through code.

It was pretty obvious back then what didn’t work, but it wasn’t so obvious what would. Gradually people throughout the industry figured out what worked less bad and, eventually, what worked better. Just like it says in the first line of the Agile manifesto. But I do sometimes wonder how much of it really stuck when I see software delivery that looks a lot like it did in the bad old days. Such as when:

  • "Big up front design" is replaced with … "big up front design": pre-development phases that rival in duration and cost those of yesteryear.
  • It takes years before we get useful software. The promise during the pitch presentation was that there would be frequent releases of valuable software… but flip down a few slides and you’ll see that only happens once the foundation is built. It’s duplicitous to talk about “frequent releases” when it takes a long time and 8 figures of spend to get the foundation - sorry, the platform - built.
  • We have just as little transparency as we did in the waterfall days. Back then, we knew how much we’d spent but not what we actually had to show for it: requirements were “complete” but inadequate for coding; software was coded but it didn’t integrate; code was complete but was laden with defects; QA was complete but the software wasn’t usable. The project tasks might have been done, but the work was not. Today, when work is defined as tasks assigned to multiple developers we have the same problem, because the tasks might very well be complete but "done" - satisfying a requirement - takes more than the sum of the effort to complete each task. Just as back then, we have work granularity mismatches, shadow phases, deferred testing, and rework cycles. Once again, we know how much we’re spending, but have no visibility into the efficacy of that spend.
  • The team is continuously learning … what they were reasonably expected to have known before they started down this path in the first place. Specifically, what the team is incrementally "learning" is new only to people on the development team. Significant code refactoring that results from the team "discovering" something we already knew is a non-value-generative rework cycle.
  • We have labor-intensive late stage testing despite there being hundreds and hundreds of automated tests, because tests are constructed poorly (e.g., functional tests being passed off as unit tests - see the sketch after this list), bloated, flaky, inert, disabled, and/or ignored. Rather than fixing the problems with the tests and the code, we push QA to the right and staff more people.
  • Deadlines are important but not taken seriously. During the waterfall days, projects slipped deadlines all the time, and management kept shoveling more money at it. Product thinking has turned software development into a perpetual investment, and development teams into a utility service. Utility service priorities are self-referential to the service they provide, not the people who consume the service. The standard of timeliness is therefore "you’ll get it when you get it."
  • Responsibility isn’t shared but orphaned. Vague role definitions, a dearth of domain knowledge, unclear decision rights, a preference for sourcing capacity (people hours) and specialist labor mean process doesn’t empower people as much as process gives people plausible deniability when something bad happens.
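
As promised above, a hypothetical sketch of a functional test passed off as a unit test:

    import unittest
    import urllib.request

    def discount(order_total):
        # Trivial stand-in for the unit under test (illustrative only).
        return order_total * 0.10 if order_total >= 1000.00 else 0.0

    class TestDiscount(unittest.TestCase):
        def test_discount_via_staging(self):
            # Labeled a unit test, but it reaches over HTTP to a live
            # environment (the URL is made up) and seeded data: slow,
            # flaky, and it fails for reasons unrelated to the unit.
            resp = urllib.request.urlopen("https://staging.example.com/orders/42")
            self.assertIn(b'"discount": 100.0', resp.read())

        def test_discount_unit(self):
            # The genuine unit test: pure function in, value out.
            self.assertEqual(discount(1000.00), 100.00)

    if __name__ == "__main__":
        unittest.main()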

Companies and teams will traffic in process as a way to signal something, or several somethings: because of our process we deliver value, our quality is very high, our team is cost effective, we get product to market quickly. If in practice they do the things we learned specifically not to do a generation ago, they are none of the things they claim to be.

Tuesday, December 31, 2024

Yes, the future is digital. More importantly, it's asset light.

The 20th century company acquired capital assets (property, plant and equipment); employed a large, low skilled secondary workforce to produce things with that PPE; and employed a small, high skilled primary workforce to manage both the secondary workforce and administer the PPE. By comparison, the 21st century company rents infrastructure - commercial space, cloud services, computers - and both employs and contracts knowledge workers who collaborate on solving problems.

I want to focus on the capital rather than the labor aspect of this. Unlike its 20th century predecessor, the modern company is capital-light. Companies don’t own the tools their employees use: they rent them (e.g., spreadsheet software), or they buy a tool and then rent access to certain capabilities as they are used (e.g., machine tools with subscribable capabilities). 21st century companies do raise capital, but it is used primarily as a buffer to absorb losses during the startup years.

As I’ve written before, industrial firms - largely 20th century firms - are under investor pressure to look more like their 21st century counterparts. The (post-startup) 21st century firm has both scale and recurring revenue yielding very attractive margins and cash flows that, because they are light on fixed assets, result in higher returns to investors. For a 20th century firm to make this transition requires balance sheet and income statement restructuring. The balance sheet transformation is straightforward: sell buildings and pay rent to use them; contract for logistics services rather than own and operate a fleet of trucks. The income statement is much harder to change: if you make machines, the biggest bulge in the top line still comes from moving machines. Still, OEMs are trying to sell subscriptions to services on the machines they make, promising revenues in the low double digits of machine sales.

For a 20th century firm to change its income statement requires customer cooperation. That cooperation is not immediately forthcoming.

The OEM value prop to the machine owner/operator consists of at least two parts. One is that the owner/operator only pays for machine capabilities when they’re needed. Another is that subscribable capabilities can overcome the difficulties of replacing skilled labor.

The former is obvious, but to a lot of owner/operators of dubious value: the 20th century mindset doesn’t equate a machine tool with a mobile phone, a device on which you pay to install and use apps.

The latter is a structural problem of labor markets. Plenty of firms in construction and manufacturing rely on older machine tools. They rely on skilled labor to use those machines in an expert fashion. These are not knowledge workers in the contemporary sense, but laborers skilled at using the machines, operating them efficiently, effectively, and responsibly. In expert hands, those machines will get the job done quickly, safely, and without damage to the machine itself. As that population of experts ages out, the firms dependent on them are finding it difficult to hire replacement employees. The latter OEM value prop amounts to shifting capability from the operator to the machine itself, which makes it easier for the owner/operator to source (hire and replace) labor. Theoretically, the machine owner/operator should be willing to pay rent for machine capabilities up to the difference between the total compensation of an expert and the total compensation of an employee who is not, and will likely never be, an expert.

For the owner/operator, it means the machine is now both a balance sheet and an income statement phenomenon (beyond MRO costs). Again, theoretically this isn’t a problem as long as the cost of renting machine capabilities is less than the difference in employee compensation between expert and technician. But everybody knows that rents rise, and the owner/operator has less control over OEM fees than it does over negotiating employee comp. When the cost of renting the capability is not offset by the compensation gap, supplementing balance sheet with income statement impact requires that the owner/operator either (a) have the pricing power to pass those costs on to customers or (b) take the hit on its own margins. Neither is a particularly attractive option.

Which points to another unattractive thing about substituting machine capability for employee skill: a greater dependence on the OEM for capability. There’s a fable a former CEO of Nokia told, of a Finnish boy out in the winter cold who discovered he could warm himself by wetting his pants. This, of course, is only effective for a short while and has undesirable consequences. The lack of skill development among employees - in fact, a limitation on ever developing those skills because of the convenience of reaching for the subscribable capability - increases the dependence of the owner/operator on the capability from the OEM. That gifts the OEM a lot of pricing power over the owner/operator.

Consider commercial agriculture. Suppose that the subscription price to use the software to optimize planting and fertilizing is indexed to the cost of seed and fertilizer. Suppose the software makes seed and fertilizer consumption 25% more efficient, so the OEM charges the farmer a metered rate of up to 25% of the volume the farmer would have consumed without the software, times the current market price for seed and fertilizer. Within the narrow definition of that subscription transaction, the economics seem to be fine, but systemically they work against the 20th century farmer. Seed and fertilizer companies will increase prices to compensate for the loss of volume. OEMs can increase the subscription fee as a percentage of savings within a +/- x% or so band to capture more of the farmer’s savings. The bottom line is, any savings will not accrue to the farmer. Worse still, if after all of this, commodity prices are soft, the farmer is screwed. Not out of business, but dependent on economic rescue. (Food security being up there as a national priority with energy security and financial security means governments will pump money into farms just as they often own petroleum companies and always bail out their banking system.)
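
A toy model of those dynamics - every number invented for illustration - shows how little of the headline savings survives:

    # Toy model of the subscription economics above (all numbers invented).
    baseline_volume = 1000.0   # units of seed + fertilizer without the software
    unit_price      = 10.0     # market price per unit
    efficiency_gain = 0.25     # the software cuts input consumption by 25%

    gross_savings    = baseline_volume * efficiency_gain * unit_price  # 2,500.00
    subscription_fee = 0.60 * gross_savings   # assume the OEM captures 60%

    # Suppliers respond to lost volume by raising prices, say 10%.
    repriced_inputs = baseline_volume * (1 - efficiency_gain) * (unit_price * 1.10)

    old_cost   = baseline_volume * unit_price                      # 10,000.00
    farmer_net = old_cost - (repriced_inputs + subscription_fee)   # 250.00

    print(f"gross savings:    {gross_savings:,.2f}")    # 2,500.00
    print(f"OEM subscription: {subscription_fee:,.2f}") # 1,500.00
    print(f"repriced inputs:  {repriced_inputs:,.2f}")  # 8,250.00
    print(f"farmer keeps:     {farmer_net:,.2f}")       # 250.00

In this invented scenario, nine-tenths of the savings leak to the OEM and the input suppliers before the farmer sees a dime.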

For OEM subscriptions to truly take off, the owner/operator has to become a 21st century business. In our ag example, in the absolute form, that means putting seed and fertilizer out to bid with the winner agreeing to sell at cost and taking a cut of the yield; contracting to an ag operations firm that brings their own machines and personnel; and of course, leasing both land and buildings. In this case, the value prop of subscriptions is not to the farmer, but to the service provider with the equipment performing farm operations.

Admittedly this is a bit extreme. But it is not far-fetched. Boeing doesn’t make what it sells… although that hasn’t turned out according to plan and for the time being they’re undoing that by purchasing the manufacturing business they spun out. Ok, bad example. A better example is Apple, which also does not make what it sells, and has been quite successful. So, similar to how Apple reminds every new owner/operator upon unboxing a product, this soybean - the product of the seed, fertilizer, pesticide; of the choice of land on which it was grown; of the cultivation, irrigation, spraying and harvesting; of the storage and transportation - was Designed by Farmer Bob in Kansas.

The point is, there’s a mismatch of 20th century OEMs trying to synthesize and export a 21st century business model onto 20th century owner/operators. At present, it’s not a natural fit for either.

And by the way, just as this applies to manufacturing and agriculture, it also applies to households. We’re seeing the groundswell of a transition from owning to renting. While automobile leases have been around for years, interest rates and excessive property values are forcing would-be first-time buyers into renting, possibly forever. An entire property industry is being built around this possibility. An “ownerless” society is far more inclined - and far better prepared - to manage a myriad of subscriptions than an “ownership” society is. This transition suggests automakers will gradually find it easier to sell subscriptions to things beyond satellite radio.

We’ve come a long way since the 1970s when the courts ruled households didn’t have to rent a phone from Ma Bell but could buy a telephone outright and plug it into the AT&T network. Since landline telephones lasted decades, ownership of the tool freed up income that had previously been committed to a subscription for a phone. Today, of course, AT&T (among others) has effectively found their way back to that model.

Turns out it was back to the future all along. It isn’t so much that the future is digital. It’s that it is asset light.

Saturday, November 30, 2024

Industrial firms are struggling with policy change. They can be designed to respond to change.

News media have been trying to interpret the economic and commercial ramifications that will come about as a result of the US elections earlier this month. How will tariffs be used in policy and what will that mean to consumer goods prices and manufacturing supply chains? What are the risks to industrial contractors of anticipated cuts in federal government spending? How will regulations change in areas like telecommunications and emissions? How will bond markets price 10 year Treasurys?

No doubt, industrial firms are facing highly disruptive policy changes. But if we zoom out for a minute, highly disruptive policy changes are the norm. Emissions, finance, energy, telecommunications, trade, healthcare and lots of other areas have been subject to significant regulatory change in the last two decades. To wit: when adding 70,000 pages to the Federal Register is considered a light year for new regulations, policy change is the norm, not the exception. Add to that non-policy sources of volatility - labor strikes, electrical blackouts, markets that failed to materialize, armed combat - and it is accurate to say that industrial firms have been subject to non-stop, if not increasing, volatility in their operating environments.

* * *

Wall Street rewards consistency in free cash flows above all else. Consistency in cash flows mollifies bond markets, which gives equity investors confidence that there will be ample cash for distributions through buybacks and dividends.

In manufacturing companies, strong operating cash flows are achieved through highly efficient production processes, from supply chain to transportation. Just-in-time inventory management is one of these practices. JIT flatters the balance sheet by minimizing cash tied up in raw materials inventory and in Property, Plant and Equipment (warehouse space) to hold that inventory. As implemented, though, JIT creates tight coupling within a production system: a hiccup in fulfillment from a supplier interferes with the efficiency of the entire production process (e.g., Boeing parking work-in-process in what is actually an employee car park due to a lack of fasteners earlier this year).

In short, industrial firms can throw off copious amounts of cash, but their processes - implemented as tightly integrated, complex systems - are fragile. Nassim Taleb pointed out this same phenomenon in financial markets: interlocking dependencies create systemic fragility. By way of example, the grounding of the Ever Given looked like a black swan event, but it was not: the problem wasn’t global transportation, but a lack of robustness in the end-to-end production processes themselves.

* * *

The more rigid the underlying processes, the more acute the need for external stability. Right now, uncertainty about policy change is creating external instability, rendering internal decisions about supply chain, shop floor, distribution and capital investment difficult to model, let alone make.

If constant volatility from one source or another is the new norm, "optimization" in manufacturing is no longer as simple as securing timely delivery of raw material inputs, squeezing labor productivity, and designing production plans around cheaper energy prices. Nor is optimization easily protected through crude contingency plans like holding excess raw materials as a hedge against supply chain disruption. An optimized production system must be not just tolerant of volatility but accommodative of it.

Contemporary manufacturing operating systems solve for this.

  • Digital twins enable production modeling, simulation of disruptive events, and modeling of production responses to combinations of disruptive events (a toy sketch follows this list).
  • Adaptive manufacturing - software defined production that integrates design with digital printing and robotic assembly - accelerates research and development and reduces the friction created by new product introduction (NPI).
  • Flexline manufacturing allows Porsche to switch from making a combustion vehicle to an electric vehicle to a hybrid vehicle, in any sequence, all on the same line. The line is orchestrated with autonomous guided vehicles and does not require retooling or reconfiguration.
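
A crude, invented illustration of the kind of question a digital twin is built to answer - what does a supply hiccup cost a rigid line versus one that can reroute? - using made-up rates and volumes:

    # Toy what-if simulation (invented numbers): compare a tightly coupled
    # line against a flexible one under random supplier hiccups.
    import random

    def simulate(days, reroute_capacity, seed=7):
        rng = random.Random(seed)
        produced = 0
        for _ in range(days):
            if rng.random() > 0.10:           # 10% chance of a supply hiccup
                produced += 100               # nominal daily output
            else:
                produced += reroute_capacity  # rigid line stalls; flexible one reroutes
        return produced

    rigid    = simulate(365, reroute_capacity=0)   # optimized, tightly coupled
    flexible = simulate(365, reroute_capacity=70)  # carries slack, keeps moving

    print(f"rigid line output:    {rigid:,}")
    print(f"flexible line output: {flexible:,}")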

“Optimization” in a volatile world prioritizes resiliency over efficiency.

* * *

Wall Street gives a pass to companies when operations underperform due to external forces, because external forces are outside the control of the company. CEOs are graded on how well the company reacted to external disruption. But at some point, equity analysts and activist investors will figure out that manufacturing operations are unnecessarily vulnerable to external shocks. Why is the company not sufficiently resilient to take more of these changes in stride? At how many AGMs will we hear the same excuses?

There is need and opportunity to invest, but the climate isn’t conducive to investment. These are tech-heavy investments, and tech is still paying for largess during the immediate pre-COVID years, when CEOs were fired for showing insufficient imagination for how to spend cheap capital to digitally disrupt their industry. Unfortunately, a post-mortem analysis of that era exposes that not only did too many of the investments made during that time come to naught, but the propensity to use contract labor and subsequent employee turnover meant no intangible benefit like institutional learning materialized (even of the "we know what doesn’t work" variety). They were just boondoggles that vaporized cash.

Tech has a bruised reputation and capital is pricier now, just in time for manufacturing to find itself at a crossroads. The intrinsic sclerosis of legacy manufacturing operations forces industrial firms to react to external changes. If they had intrinsic flexibility, they could respond rather than be forced to simply react. With volatility the new norm, tech investments into modern manufacturing processes and technology are a pretty good bet.

A good bet, but with competing gamblers. Tech ("with your money, and our ability to spend your money…") and legacy manufacturing (fixed production) have to figure out how to partner with capital (10 year Treasurys are north of 400 bps) to make it a profitable bet. There’s a visible win, but the CIO, CTO, COO and CFO have to get out of the way.

Thursday, October 31, 2024

Bosses at troubled companies say they want growth through innovation. They prefer growth through girth.

The headlines are heavy with iconic companies that have hit the skids recently, from Starbucks, to Boeing, to Intel. All have relatively new CEOs, each of whom has said that their respective company's path to salvation lies in returning to their roots, to once again be a coffee shop or an engineering firm. The assertion is that by going back to basics, they can regain the crown of leadership in their respective markets.

The problem is, there is no going back. The combination of circumstances that created the conditions for rapid growth and market dominance are long gone. The socio-economic factors have changed. The regulatory environment has changed. The key technologies have changed. The supply chain has changed. The competitive landscape has changed. Don't bother with the flux capacitor.

What those bosses are really saying, of course, is “we need a do-over.” But there is no do-over. Not only are years of financial engineering not easily undone, they created a financial burden that operations have to carry by generating copious free cash flow. Sorry, but no matter how desperate the situation, the new CEO is not the William Cage figure in Edge of Tomorrow.

While the Starbucks and Boeings grab the headlines at the moment, there are many once iconic companies that backed themselves into a corner by starving operations to feed the balance sheet. Time runs out, old management is shown the door, new management is ushered in.

This is the playbook that new management follows.

The first step is to alleviate immediate financial pressures, starting with the balance sheet. This means any or all of: maxing out credit facilities, selling assets, raising cash through equity sales, taking the company private, breaking up the company, and in the extreme filing for bankruptcy protection. Investors may win (a company breakup can work out well), investors may lose (dilution of equity), and investors may get wiped out (common equity is valueless in bankruptcy).

Fixing the balance sheet buys time, but not a lot of time, so the income statement needs to be shored up as well. Quality problems? Losing customers? Margins too thin? The playbook here is well established: simplify operations, promote quality above all other metrics, reduce contractors and staff, cut discretionary costs, slash prices, over-reward customer loyalty, etc. The good news is, customers mostly win. The bad news is, employees mostly lose, as they will face perpetual cost cutting efforts ranging from the structural (opportunistic terminations) to the petty (workplace surveillance).

Which brings us back to the CEO statement that “we need to become who we once were”.

Unless there are high-value trophy assets, or a large portion of the debt can be saddled onto divisions spun off, the financial stabilization effort will leave a balance sheet that is out of proportion with the income statement. That is, the balance sheet is structured for a company with higher sales growth and stronger cash earnings. Before it does anything else, the company must face the fact that financial stabilization isn’t going to return the capital structure to what it was before all of the financial engineering happened.

That has serious repercussions for operations. Recapturing lost competence only solves the growth problem if the core business can once again be a growth business. This is the fundamental hypocrisy of CEOs trafficking in "back to the future" statements: the core business has to be a growth business or returning to core competencies solves nothing. Absolutely nothing. And they know it. Being good at what you used to do well will staunch decline, but it offers no guarantee of a return to growth if there’s not much growth to go round. It is also worth pointing out that the “we need to become what we once were” statement also masks the actual objective: it isn’t to be the pre-eminent firm in the market the company made its name in, as much as it is to resuscitate the income statement to a point that the balance sheet makes sense again.

If the core market is slow- or ex-growth, the go-to strategy is to capture a greater share of total customer spend, a.k.a. “revenue grab”. The opportunities here include extending the brand by selling adjacent products and services (e.g., as GE famously expanded from selling nuclear power plant technology to servicing nuclear power plants in the 1980s). Another is selling against type: where a company once differentiated from its competition by refusing to sell what it derided as low-value offerings, it will cheerfully sell anything and everything because every dollar of revenue is now the same. Yet another opportunity is predatory monetization: charging for things that were previously given away for free (e.g., whereas once upon a time, all economy seats were priced the same, those near bulkheads or the aisle cost more than seats in the middle).

None of these alternatives are revolutionary. All have the advantage of not being “bet the business” pursuits. Which is helpful, because the balance sheet limits the investment the company can make in pursuing any growth opportunities. All of these can be pursued through partnership and small acquisition, as opposed to organic development; this jumpstarts capability, minimizes cash outlay, and promises faster time to return.

Yet none of these are truly growth strategies in that they open new markets or compel buyers to spend where they had not spent before. They are only growth strategies because they assume that customer income statements are growing: growth in wallet share creates exposure to growth in customer income statements. Restated, as aggregate customer topline grows, aggregate customer expenses grow, and a greater capture of customer spend leads to growth. It’s growth by girth, not by invention, innovation or creativity.

The intermediate-term success of this strategy is a return to modest growth and sufficient cash flows to make its interest payments and modest - if only occasional - dividends. These are not steps that will help the company regain its lost edge. Just the opposite: they signal capitulation that it will never regain that edge. Its long-term success is either to be acquired (a possibility because inflation increases revenue and simultaneously reduces the debt burden), or to be in a position to grab more revenue should a competitor stumble or outright fail.

To become what the company once was - a company that made its own growth by obsoleting its own products and services (why keep that 286 PC when you can have a Pentium? Why fly a fleet of 707s when you could fly a fleet of 747s? Why keep using that old phone since it can’t keep a charge anyway?) - requires both a capacity for innovation and a growth market. Yes, I just wrote “the company makes its own growth” because it does so by proxy. Technological advances bring new consumers into the market: personal computer manufacturers expected to sell more PCs once they had more powerful CPUs that could solve more complex problems; airlines expected to fly more passengers once they had more planes and larger planes to drive down costs in the post-deregulation airline industry.

A cash strapped, debt laden business is an unlikely candidate to invent the market-creating technologies of the kind that brought it to prominence in the first place, if for no other reason than a company pursuing revenue grab is focused on yesterday’s growth markets and is therefore poorly attuned to tomorrow’s. That is not to say it is utterly hopeless, but that the path back to growth markets isn’t going to be self-directed. Finding the way back into a growth market requires that the company be quick to recognize, implement and operationalize something new. But that, too, requires balance sheet leeway that, depending on how dire the straits it finds itself in at the start of the retrenching cycle, will limit its ability to spend on the R&D necessary to innovate. This is the conundrum of retrenchment: while fortifying the balance sheet is unavoidable, the resulting fortress imprisons cash flows and, ultimately, the income statement. Add - well, subtract - labor severed during retrenchment and the disengaged labor that remains, and it is highly unlikely that a fallen icon will return to the vim and vigor of its go-go days.

The retrenching company needs balance sheet and operational restructuring. It will then look for easy money anywhere that it can extend its brand and monetize its offerings. It will hope to buy time for inflation to work its magic on the debt burden and the topline, and to be ready for a competitor to stumble. That is the definition of success.

While a nice sentiment, nothing in this playbook benefits from the business returning to what it used to be. Because success of the grafted-on rescue management isn't "to win", it is "not to lose."

Monday, September 30, 2024

It isn’t “return to office.” It’s “malicious destruction of trust.”

Return-to-office mandates continue to trickle in, and every now and again a prominent employer makes a headline-grabbing decision that all employees must be back in office. The news articles focus on the obvious impact of RTO mandates: the threat to dual income families successfully managing school and daycare with career-advancing employment; loss of quality of life to nonproductive commute time. These are real. But RTO mandates also indicate something else: they are a public acknowledgement of an erosion of trust within a company.

Every company is a society unto itself, with values and social norms that determine how people behave and interact. There are high-integrity and low-integrity workplaces, the distinguishing characteristic being the extent to which people are “free from corrupting influences or motives”. Integrity manifests itself in how people interact with one another, in commitment to craft, and in the administration of the business itself.

First is interpersonal integrity, things that define whether the company is a toxic or fulfilling place to work. Do people take credit for the work of others? Do people want to look good to a point they will make others look bad? Is it safe for a person to acknowledge things they do not know, or to accept responsibility for a mistake?

Second is operational integrity, things that define a firm’s commitment to excellence. Does recruiting pursue competent candidates or are they just filling vacancies? Do salespeople inflate the pipeline with low-probability or worthless leads? Do colleagues complete their work without taking shortcuts that could impair results? Does finance send invoices in the hopes the payer will pay without scrutinizing the bill?

Administrative systems indicate whether a workplace is high- or low-integrity because they communicate the extent to which trust is extended to individuals. Are the administrative systems an enabling mechanism for labor or a control mechanism over labor? For example, how highly restricted are employees in how they incur travel expenses? Is performance measurement designed to drive good practice, or confirm adherence to practice? Are annual reviews designed for personal development and advancement, or as a means of gathering structured data to rank order the workforce?

Societies work best when they run on trust. Companies cannot escape the need to spend money to demonstrate or investigate compliance - violations of trust are unfair and can be expensive when they occur - but it is not value-generative expenditure. The more a company invests in controls and surveillance to compensate for a lack of trust, the higher the operating costs to the business. Conversely, the lighter the controls on labor, the lower the administrative burden, and the greater the productive and creative output from labor. A company with few - ideally no - bad actors has little real reason to incur cost to compensate for a trust void.

Which brings us back to the RTO mandates.

One of the primary justifications given for RTO is to increase worker collaboration with an eye toward driving creativity and innovation. That sounds plausible as most of us have had an experience where a creative solution came together quickly because of high-bandwidth, in-person collaboration. But if in-person collaboration is so compelling, having globally sourced teams and departments is an impairment of convenience, not a strategic advantage for sourcing best-in-class capability. Nor would firms severely cut employee travel budgets in the face of declining revenues - isn’t that precisely the time a company needs more innovation? Toss in the free productivity harvested from the individual laborer from flexible working, and justifying RTO because of a “paucity of innovation” is a bit of a stretch.

The more likely explanation is a desire for greater workforce control. Working from home proved that a lot of jobs can be done from anywhere. Knowledge is transferable. It isn’t a big stretch that jobs that can be done from anywhere can be done by anyone from anywhere. Physical supervision does not improve management’s ability to provide higher fidelity performance profiles, but it does allow management to assess performance with less friction. Is this person executing at the highest level of throughput, or are they dogging it? Is that person really an expert with deep knowledge, or are they expert at gaming the system? Spot productivity audits are a lot easier in cubeland than in Teamsland. If - when - the edict comes that we have to contract operational labor spend, middle managers may not have better data than they would have with a distributed workforce, but they'll have no excuse for not having it.

Why a push to increase control now? Because corporate income statements are still being buffeted about. Interest rates have cooled off but remain generationally high, depressing corporate capital spending. Price increases have masked drops in unit sales volume. Cumulative inflation has increased input costs. Management has little control over the topline, so it must exercise what control it can over the bottom line.

Labor is a big input cost, and labor working from home is an invisible workforce. Income statement pressures twined with future economic uncertainty make that invisible workforce an easy target. When asked about the number of people who work at The Vatican, Pope John XXIII is credited - probably erroneously - with having replied “about half”. It’s not hard for a COO to be cynical about labor productivity given supply chain, labor, price and cost roller coasters of the last four years.

Before RTO came shelter-in-place necessitating work from home. A lot of people who had never worked in a distributed fashion figured out how to make it work. They’re certainly not heroes in an altruistic sense as they were motivated by self-interest: preserving the company preserved the job. Still, this cohort kept the internal workings of the business functioning during a period of unprecedented uncertainty. That, in turn, merited an increase in operational trust (they responded with excellence) and interpersonal trust (they will do the right thing). RTO negates all of that earned trust.

Saturday, August 31, 2024

For years, tech firms were fighting a war for talent. Now they are waging war on talent.

In the years immediately following the dot-com meltdown, there was more tech labor than there were tech jobs. That didn’t last long. By 2005, the tech economy had bounced back on its own. After that, the emergence of mobile (a new and lucrative category of tech) plus low interest rate policy by central banks fueled demand for tech. Before the first decade of the century was out, “tech labor scarcity” became an accepted norm.

The tech labor market heated up even more over the course of the second decade of the century. Rising equity valuations armed tech companies with a currency more valuable than cash, a currency those companies could use to secure labor through things like aggressive equity bonuses or acqui-hires. COVID distorted this overheated tech labor market even further, as low interest rates for longer, a massive fiscal expansion, and even more business dependency on tech spurred demand. Growth was afoot, and this once-in-a-lifetime growth opportunity wasn’t going to be won with bog standard ways of working: it was going to be won with creativity, imagination and exploration. The tech labor pool expanded as tech firms actively recruited from outside of tech.

The point of this brief history of the tech labor market in the 21st century is to point out that it went from cold to overheated over the span of many years. Not suddenly, and not in fits and starts. And yes, there were a few setbacks (banks pulled back in the wake of the 2008 financial crisis), but in macro terms the setbacks were short lived. It was a gradual, long-lived, one-way progression from cold to super hot.

Then the music stopped, abruptly. COVID era spending came to an end, inflation got out of hand, and interest rates soared. Almost instantly, tech firms of all kinds went from growth to ex-growth. Unfortunately, they built businesses for a market that no longer exists. With capital markets unwilling to inject cash, tech companies need to generate free cash flow to stay afloat. Tech product businesses and tech services firms - those that haven’t filed for bankruptcy - as well as captive IT organizations all tightened operations and shed costs to juice FCF. (Tech firms and tech captives are also in mad pursuit of anything that has the potential to drive growth - GenAI, anyone? - but until or unless that emerges as The Rising Tide That Lifts All Tech Boats, it will not change the prevailing contractionary macroeconomic conditions tech is facing today.)

The operating environment has changed from a high tolerance for failure (where cheap capital and willing spenders accepted slipped dates and feature lag) to a very low - if not zero - tolerance for failure (fiscal discipline is in vogue again). Gone is the dispensation to spend freely in pursuit of hoped-for market opportunities through tech products; tech must now operate within financial constraints - constraints for which there is very, very little room for negotiation. Everybody’s gotta hit their numbers.

While preventing and containing mistakes staves off shocks to the income statement, it doesn’t fundamentally reduce costs. Years of payroll bloat - aggressive hiring, aggressive comp packages to attract and retain people - make labor the biggest cost in tech. Labor force expansion during the COVID years was wanton: filling the role was more important than hiring the right person. A substantial number of hires were “snowflakes”: people staffed in a role for an intangible reason, whether potential to grow into the role, possession of skills or knowledge adjacent to the position into which they were staffed, or appreciation for years of service - essentially, something other than demonstrable skill derived from direct experience. That means getting labor costs under control isn’t a simple matter of formulaic RIFs and opportunistic reductions with a minor reshuffling of the rank and file. Tech companies must first commoditize roles: define the explicit skills and capabilities an employee must demonstrate, revise the performance management system to capture and measure against structured evaluation data, and stand up a library of digital training to track employee skill development and certification specifically in competencies deemed relevant to the company’s products and services. Standardizing roles, skills and training makes the individual laborer interchangeable. Every employee can be assessed uniformly against a cohort, where the retention calculus is relative performance versus salary. This takes all uncertainty out of future restructuring decisions - and as long as tech firms lurch between episodic cost cutting and bursts of growth, there will in fact be future restructuring decisions. For management, labor standardization eliminates any confusion about who to cut. The decision is simply whether to cut (based on sales forecasts) and when to cut (systemically, or opportunistically to boost FCF for the coming quarter).
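To make that calculus concrete, here is a minimal sketch in Python - every name, salary and score below is invented for illustration, not taken from any real evaluation system - of ranking a cohort by measured performance per salary dollar:

    from dataclasses import dataclass

    @dataclass
    class Employee:
        name: str
        salary: float      # annual fully loaded cost (hypothetical)
        perf_score: float  # composite from structured evaluations, 0-100

    def retention_ranking(cohort):
        # Higher measured performance per salary dollar ranks higher;
        # the bottom of the list is first in line for any future cut.
        return sorted(cohort, key=lambda e: e.perf_score / e.salary, reverse=True)

    cohort = [Employee("A", 180_000, 82), Employee("B", 140_000, 75), Employee("C", 210_000, 85)]
    for e in retention_ranking(cohort):
        print(f"{e.name}: {1000 * e.perf_score / e.salary:.2f} points per $1k of salary")

The uniformity is the point: once every role is standardized, a single sort order answers the question of who goes.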

Of course, companies can reduce their labor force through natural attrition. Other labor policy changes - return-to-office mandates, contraction of fringe benefits, fewer promotions, suspension of bonuses, comp freezes - encourage more people to exit voluntarily. It’s cheaper to let somebody self-select out than it is to lay them off. FCF is a math problem.
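The math problem is simple enough to sketch. Assuming - hypothetically - that a layoff costs severance plus benefits continuation while a voluntary exit costs nothing in cash:

    def layoff_cash_cost(salary, severance_weeks=12, benefits_weeks=12, weekly_benefits=600):
        # Severance pay plus benefits continuation; all terms are assumptions.
        return severance_weeks * (salary / 52) + benefits_weeks * weekly_benefits

    voluntary_exit_cash_cost = 0.0  # no severance, no benefits continuation

    print(f"Layoff:      ${layoff_cash_cost(150_000):,.0f} cash out the door")
    print(f"Self-select: ${voluntary_exit_cash_cost:,.0f}")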

These are clinical steps intended to improve cash generation so that a company can survive. While the company may survive, these steps fundamentally alter the social contract between labor and management in tech.

* * *

A lot of companies in tech used what they called “the war for talent” as marketing fodder, in both sales and recruiting. You should buy Big Consulting because it employs engineers a non-tech firm will never be able to employ on its own. Come to work for Big Software and get the brand on your resume. Every war has profiteers.

Small and mid-sized tech has always had to be clever in how it competes for labor. Because it couldn’t compete with outsized comp packages, small tech relied on intangible factors, such as flexible role definitions and strong, unique corporate cultures.

The former meant the employee would not only learn more because they had the opportunity to do more; they also weren’t constrained by a RACI and an operating model that rewarded the employee for “staying in their lane” over doing what was necessary, best, right. This was a boon to the small tech employer, too, because one employee was doing the job of 2, 3, or even 8 employees at any other company, but not for 2x, 3x or 8x the comp.

The latter meant that by aggressively incubating well-defined corporate norms and values, a smaller tech firm could position itself as a “destination employer” and compete for the strata of people it most wanted to hire. That might be a culture that values, say, engineering over sales. That might be a purpose-driven business prioritizing social imperatives over commercial imperatives. Culture was a material differentiator, and it’s fair to say these values had some footing in reality: tech firms on the smaller end of the business scale had to mostly live their values or they wouldn’t retain their staff for very long, given the increasing competition for tech labor. There was some “there” there to the culture.

Small and mid-sized tech carved out a niche, but even these firms caught the growth bug. Where growth was indexed to labor, small and mid-sized tech also went on a hiring spree. Again, where growth was the imperative, hiring lacked discipline. Bloated payrolls meant new people needed a corporate home; shortly after a hiring binge, the company is staffing twenty people to do the work of ten. In comes the RACI, out goes the self-organizing team. Plus, the erosion of culture - the move away from execution that is representative of core values - was accelerated (if not initiated) by undisciplined hiring twined with natural attrition of long-timers during the go-go years for tech labor. Like it or not, the pursuit of growth is a factor in redefining culture: even if a growth agenda by itself injects no definitive identity, it does have a dilutive effect on established identity. To wit: new employees did not find the strong values-based culture described during the interview process, and long-time employees saw their values-based practices marginalized, because too many new hires with no first-hand experience of cultural touchpoints to lean on were staffed on the same team. Culture devolves into a free-for-all that favors the newbie with the strongest will. The culture is dead, long live the growth agenda.

As mentioned above, the music stopped, and the company has to prioritize FCF. Prioritized over growth, because growth is somewhere between non-existent and just keeping pace with inflation. Prioritized over culture, because the culture prioritized people, and people are now a commodity.

Restated, labor gets the short end of the stick.

Employees recruited in more recent years from outside the ranks of tech were given the expectation that “we’ll teach you what you need to know; we want you to join because we value what you bring to the table.” That expectation no longer applies. Runway for individual growth is very short in zero-tolerance-for-failure operating conditions. Job preservation, at least in the short term for this cohort, comes from completing corporate training and acquiring professional certifications. Training through community or experience is not in the cards.

For all employees, it means that the intangibles a person brings cannot be codified into a quarterly performance analysis, and so are treated as completely irrelevant. The “X factor” a person has that makes their teams better, the instinct a person has for finding and developing small market opportunities, the open source product with the global community of users this person has curated for years: none of these are part of the labor retention calculus. It isn’t even that your first bad quarterly performance will be your last; it’s that your first neutral quarterly performance could very well be your last. The ability to perform competently in multiple roles, the extra-curriculars, the self-directed enrichment, the ex-company leadership - none of it matters. The calculus is what you got paid versus how you performed on objective criteria relative to your cohort. Nothing more. That automated testing conference for practitioners you co-organized sounds really interesting, but it doesn’t align with any of the certifications you should have earned through the commoditized training HR stood up.

Long-time employees - those who joined years ago because they had found their “destination employer” - hope that “restructuring” means a “return to core values”. After all, those core values - strongly held, strongly practiced - are what made the company competitive in a crowded tech landscape in the first place. Unfortunately, restructuring does not mean a return to core values. Restructuring to squeeze out more free cash flow means bloodletting of the most expensive labor, and longer-tenured employees will be among the most expensive if only because of salary bumps during the heady years to keep them from jumping ship.

Here is where the change in the social contract is perhaps the most blatant. In the “destination employer” years, the employee invested in the community and its values, and the employer rewarded the loyalty of its employees through things like runway for growth (stretch roles and sponsored work innovation) and tolerance for error (valuing demonstrable learning over perfection in execution). No longer.

“Culture eats strategy for breakfast” is relevant when labor has the upper hand on management because culture is a social phenomenon: it is in the heads and hearts of humans. When labor is difficult to replace, management is hostage to labor, and culture prevails. But jettisoning the people also jettisons the culture. Deliberately severing the keepers of culture is not a concession that a company can no longer afford to operate by its once-strongly-held values and norms; it is an explicit rejection of those values and norms. By extension, that is tantamount to a professional assault on the people pursuing excellence through those values and norms.

Tech firms large and small once lured labor by values: who you are not what you know makes us a better community; how we work yields outcomes that are better value for our customers; how we live what we believe makes us better global citizens. Today, those same tech firms can’t get rid of the labor that lives those values fast enough.

Wednesday, July 31, 2024

US automakers are struggling with electrification. They won’t have that luxury bringing autonomy to market.

Four years ago today, I blogged about the difficulty automakers faced in transitioning to electric vehicles, specifically that there were consequences to transitioning too soon or too late. Here we are, four years later, and US automakers are in a tight place. Manufacturers invested heavily in the factories, only for sales to stall right when OEMs needed them to soar. EV product discounts are eroding margins. Legacy US automaker losses on EVs have been papered over by strong sales of products in the combustion portfolio. The automakers are deferring investments in PP&E and new models.

It’s not just the legacy automakers that are finding the electric vehicle business difficult. Lordstown filed. Fisker filed (different legal entity, same outcome). Rivian - losing money making vehicles - needs VW’s cash lifeline as much as VW needs Rivian’s software to sort out its own struggles creating an EV platform.

Regulation requires automakers to make more EVs, but it does not obligate consumers to buy them. Range limitations, inadequate charging infrastructure, power loss in cold weather, higher repair costs, higher insurance costs and the occasional fire are turning out to be disincentives that overwhelm Treasury’s tax credit incentive.

Four years on, the electric future is still the future.

My point then was that making a one-way, all-in bet is a risky strategy. While the future may be a legislated certainty, the path to that future is not. The best way to deal with transitional uncertainty is to “muddle through” with policies that enable adaptability, attentiveness, and awareness. Toyota’s preference for hybrids over pure electric, and Porsche’s and Mercedes-Benz’s investments in flexline manufacturing, are examples of maintaining optionality while transitioning product and operations.

With the all-in strategy stalling, automakers are pulling back on electric vehicle production and lobbying Congress to ease the timing of electrification mandates. If they are successful, it will buy automakers more time, but create more market confusion. How committed are regulators to the transition? Will consumers be forced to buy EVs? Will suppliers extend production of parts to keep older-model combustion vehicles roadworthy for longer?

Transitory states do not pander to human impatience: they create the appearance of an extended transition. But transitory states give OEMs and their suppliers, dealers, lenders, insurance companies, consumers and regulators the opportunity to learn and adjust. And in this case, the counterfactual - that bringing EVs to market in large numbers will result in a rapid transition of the fleet - is known to be untrue.

Smart strategy is transitory and adaptive, not all-in. That is just as true today as it was four years ago, as it has been for all of human existence.

* * *

Automobiles are in a multistage transition. Along with electrification, automakers must transition from building human-operated vehicles to building autonomous ones.

The theory supporting aggressive investment in electrification by incumbents and new entrants is that the new regulatory regime will create conditions for a financial windfall through share capture during transition. As pointed out above, disjointed public policy has not created those conditions in the US, and an investment frenzy has yielded an abundance of EVs which, in turn, has depressed returns for OEMs. As mentioned previously, any changes (i.e., relaxation) in public policy will only create more uncertainty that further threatens returns.

Autonomy is a much different opportunity. A bet on autonomy is a bet on the belief that autonomy brings entirely new and different use cases into the transportation sector. E.g., airlines stand to lose passenger volume on short-haul flights to autonomous vehicles available through transportation-as-a-service. The payoff for autonomy is much, much larger than for electrification.

The prize is bigger; the price is bigger. Electrification has gobbled up billions of dollars; autonomy will gobble up even more. The technology is more complex, the liability (for passenger, pedestrian and property) is greater, and the business models that exploit it are as yet unknown and unproven. Not to mention, autonomy will become even more complex once there is a critical mass of autonomous vehicles on the road, because the fleet can be made to behave collectively, not just individually.

Automakers hope the regulatory clock slows down to give them time to sort out electrification. Meanwhile, the race - a higher-stakes race - is on for autonomy. There turned out to be first-mover advantage in electrification in the US, as Tesla is still the sales leader by a wide margin. There will be first-mover advantage in autonomy if only because being first to offer “free time for all the humans” will capture a lot of unit sales.

But the financial windfall from autonomy won’t come from vehicle sales: it will come from the services built around autonomous transportation. Those transformative services - everything from design to offer to pricing to availability - will emerge through discovery and thoughtful experimentation, organizational learning, adaptability and attentiveness. They will emerge by muddling through, not grand design.

The race to provide autonomy-based services starts once comprehensive autonomy is in-market. Coming in a distant runner-up in the race for autonomy will be very costly indeed.

Sunday, June 30, 2024

The yield curve is inverted. Tech's problem is asset price inflation.

The business of custom software development is, at its core, an asset business. Software development is the business of converting cash to intangible assets by way of human effort. Plenty of people opine about how important human labor is to software, and of course it is. Good development practices reduce time to delivery and create low-maintenance, easy-to-evolve software. What labor does and does not do is extremely important to the viability of software investments.

But software is an asset, not an operating expense. If there is no yield on a software asset, investing in software is a bad use of capital. No yield, no capital, no cash for salaries for people developing software. Money matters, whether or not we like to admit it.

This is a stark reversal for tech. When money was cheap and abundant as it was for over a decade, tech had the opposite problem: no yield, no problem! When capital wasn’t a constraint, the investment qualification wasn’t “what is this asset going to do for us” but “what are we denying ourselves if we don’t try to do something in this area.” Trying was more important than succeeding.

There are those who want to believe that financial markets are unemotional, but they are not. Momentum is a crucial factor in finance. Momentum is what gets investors to pile into the same position. Momentum turns a $100k plot of land into a $2m real estate “investment”. Momentum is an emotional justification in that the rationalization is hope, not fundamentals.

Tech rode momentum for a long, long time. Before COVID, the story that built momentum for tech was disruption. During COVID, the story was tech as a commercial coping mechanism. Momentum put abundant amounts of cash into the tech sector. Abundant cash inflated more than just salaries: it also inflated technical architectures and solution complexity. Money distorts.

That momentum has run its course. Tech is reaching - grasping - for any growth story. To wit: GenAI here, there, everywhere.

There are two winning hands in momentum trades: “hold to maturity” and “greater fool theory”. The former requires a lot of intestinal - not to mention free cash flow - fortitude. The latter requires finding somebody foolish enough to spend as much (and ideally more). Nearly two years of contraction in the tech sector indicates a shortage of greater fools. Yes, some subsets of tech still command premium pricing; suffice to say there is no rising tide lifting all boats, and there has not been one for quite some time.

Tech rode the wave of price inflation. The yield curve indicates that the wave has crested.

Friday, May 31, 2024

I can explain it to you, but I can't comprehend it for you

I’ve given my share of presentations over the years. I am under no illusions that I am anything more than a marginal presenter. My presentations are information-dense, a function of how I learn. Many years ago, I realized that I learn when I’m drinking from the fire hose, not when content is spoon-fed to me. I am focused and engaged with the former; I become uninterested and disengaged with the latter. Of all the recommended presentation styles I’ve been exposed to over the years, I find the “tell them what you’re going to tell them / tell them / tell them what you just told them” pattern intellectually insulting. I prefer to treat my audience with respect and assume they are intelligent, so I err on the side of content density.

For this style to be effective, the audience has to also want to drink from the fire hose. If they do not, you won’t get past the first couple of paragraphs. But in over 30 years in the tech business, I find tech audiences generally respond to high-content-density presentations.

As the person leading a briefing or presentation, it is your responsibility to connect with the audience. However, there are limitations. The content as prepared is only as good as the guidance you’ve received to shape the subject and depth of detail. A presenter with subject matter expertise isn’t (or at any rate, should not be) wed to the content and can generally shift gears to adjust when there is a fidelity mismatch between content and audience. But being asked - even commanded - to explain a moderately advanced concept in a limited amount of time to an audience lacking the subject matter basics is going to fall flat every single time.

* * *

People buy things - large capital things - for which they have few or no qualifications other than the fact that they have money to spend. Few people who buy houses are carpenters, plumbers or electricians. Few people who buy used cars are mechanical engineers.

This expertise disconnect plagues the software business. There are, unfortunately, times when contracts for custom software development are awarded by individuals or committees who have (at best) limited understanding of the mechanics of software delivery. And there are times when contracts for software product licenses are awarded by individuals or committees who have (at best) limited understanding of the complexity of the domain into which the licensed product must work.

An egregious, although not atypical, example is an 8-figure custom software development contract with payouts indexed to “story points developed”. Not “delivered”, not “in production.” The delivery vendor tapped the contract for cash by aggressively moving high-point story cards to “dev complete”. Never mind that nothing had reached production; never mind that nothing had reached UAT. By the time I got a look at it (they were looking - hoping - for process improvements that would yield deployable software rather than deplorable software), every story had an average of 7 open defects with a nearly 100% reopen rate. And yes, smart reader, the apparent currency was “story points,” but the real currency was the cash value of the contract; story points were simply a proxy for extracting that cash. Ironically, the buyer thought the arrangement was shrewd because it tied cash to work. Sadly, it failed to tie cash to outcomes. In the event, the vendor held the buyer hostage: there were no clawbacks, so the buyer would either have to abandon the investment or sign extension after extension in the hopes of making good on it.
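The incentive failure is easy enough to model. A hypothetical sketch - the rate and the stories are invented, the mechanism is the point - of why “points developed” pays out regardless of whether anything works:

    # Payout indexed to "story points developed", not delivered.
    RATE_PER_POINT = 5_000  # assumed contract rate per story point

    stories = [
        # (points, dev_complete, in_production, open_defects)
        (13, True, False, 9),
        (8,  True, False, 6),
        (5,  True, False, 7),
    ]

    payout = sum(pts * RATE_PER_POINT for pts, dev, _, _ in stories if dev)
    points_in_production = sum(pts for pts, _, prod, _ in stories if prod)

    print(f"Vendor payout:        ${payout:,}")             # $130,000
    print(f"Points in production: {points_in_production}")  # 0

The defect counts never enter the payout calculation - which is exactly how the vendor liked it.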

Licensed software products are no different. I’ve seen too many occasions where a buyer entered into a license agreement for some product without first mapping out how to integrate that product into their back office processes. When the buyer doesn’t come to the table prepared with a detailed understanding of their as-is state, they default to allowing the vendor to take the lead in designing solution architecture for the to-be state based entirely on generic and simplistic use cases, with disastrous outcomes for the buyer. Licensed products tend not to be 100% metered cost, and the vendor sales rep has a quota to meet and a commission to earn, so the buyer commits to some minimum term-based subscription spend with metered usage piled on top of that. In practice this means the clock is ticking on the buyer to integrate the licensed product the second the ink dries on the contract. Finding out after the contract is signed that the intrinsic complexity of the buyer’s environment is many orders of magnitude beyond the vendor-supplied architecture is the buyer’s problem, not the vendor’s.
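A sketch of those cost mechanics, with all figures assumed: the committed floor accrues from signature whether or not the product is integrated, so every month of integration slippage is pure burn.

    MONTHLY_FLOOR = 40_000  # illustrative minimum committed subscription spend

    def unproductive_floor(months_to_integrate):
        # The floor paid before go-live buys the buyer nothing.
        return months_to_integrate * MONTHLY_FLOOR

    print(f"Integration in 3 months: ${unproductive_floor(3):,} of dead spend")
    print(f"Integration in 9 months: ${unproductive_floor(9):,} of dead spend")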

To level this information asymmetry between buyer and seller, buyers have independent experts they can call on to give an opinion of the contract or product or vendor or process. But of course there are experts and there are people with certifications. An expert in construction can look beyond things like surface damage to drywall and trim and determine whether or not a building is structurally sound. Then there are the “certified building inspectors” who look closely at PVC pipe covered in black paint and call it “cast iron plumbing.” All the certification verifies is that once upon a time, the certificate bearer passed a test. What is true in building construction is equally true in software construction. Buyers have access to experts but that doesn’t do them a bit of good if they don’t know how to qualify their experts.

Of course there’s a little more to it than that. Buyers have to be able to qualify their experts, want their expertise, and be willing and able to act on it. I’ve advised on a number of acquisitions. No person mooting an acquisition wants to hear “it’s a bad acquisition at any price”, especially if their job is to identify and close acquisitions. Years ago, I was asked to evaluate a company that claimed to have a messaging technology that could be used to efficiently match buyers and sellers of digital advertising space. They had created a messaging technology that was different from JMS only in that (a) theirs was functionally inferior and (b) it was not free. Instead of expressing relief at avoiding a disastrous deployment of capital, the would-be investor was desperate for justification that would overshadow these… inconveniences. As the saying goes, “you cannot explain something to somebody whose job depends on not understanding it.”

* * *

I have been fortunate to have worked overwhelmingly with experts and professional decision makers over the years, people who have been thoughtful, insightful, willing to learn, and who in turn have stretched me as well. I sincerely hope I have done the same for them.

Unfortunately, I have had a few brushes with those who fell irreconcilably short. The CTO of a capital markets firm who requested an advanced briefing on the mechanics of how distributed ledger technology could change settlement of existing complex financial products and open the door for new ones, but had done nothing before the briefing to learn the basics of what “this blockchain thing” is. The mid-level VP leading an RFP process who derailed a vendor presentation because she simply could not fathom how value stream mapping of business operations exposes inefficiencies that have income statement ramifications.

When we fail to connect with an audience, we have to first internalize the failure and look for what we might have done differently: what did we hear but not process at the time? What question should we have asked to clarify the perspective of the person asking? What is spoken is less important than what is heard.

At a certain point, though, responsibility for understanding lies with the listener. The audience member adamantly demanding further explanation may be doing so for any number of reasons, ranging from simple neglect (a failure to have done homework on the basics) to a deliberate unwillingness to understand (i.e., cognitive dissonance).

Which is where the title of this blog comes in. It’s a comment Ed Koch, the 105th mayor of New York, made to a constituent who demanded to know why the mayor’s office was introducing policy to lighten taxes at a time when New York, having financially imploded, was still hemorrhaging high-income earners and businesses. “I can explain it to you,” he told this constituent, “but I can’t comprehend it for you.”

Tuesday, April 30, 2024

The era of ultra low interest rates is over. Tech has painful adjustments to make.

Interest rates have been climbing for two years now. The Wall Street Journal ran an article yesterday declaring that the days of ultra-low interest rates are over. Tech will have to adjust. It’s going to be painful.

When capital is expensive, we measure investments against the hurdle rate: the rate of return an investment must meet or exceed to be a demonstrably good use of capital. When capital is ridiculously cheap, we no longer measure investment success against the hurdle rate. In practice, cheap capital makes financial returns no more and no less valuable than other forms of gain.
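In code, hurdle-rate discipline looks something like this minimal sketch (the cash flows and rates are invented, not from any real business case): discount projected cash flows at the hurdle rate and fund only if net present value is positive.

    def npv(rate, cashflows):
        # cashflows[0] is the upfront investment (negative), at t = 0
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    project = [-1_000_000, 250_000, 300_000, 300_000, 300_000]

    for hurdle in (0.02, 0.12):  # cheap-capital era vs. expensive capital
        verdict = "fund" if npv(hurdle, project) > 0 else "reject"
        print(f"hurdle {hurdle:.0%}: NPV ${npv(hurdle, project):,.0f} -> {verdict}")

The same projected cash flows that clear a 2% hurdle fail a 12% one; nothing about the project changed except the price of capital.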

There are ramifications to this. As fiduciary measures lapse, so does investment performance. We go in pursuit of non-financial goals like "customer engagement rate". We get negligent in expenditure: payrolls bloated with tech employees, vendors stuffing contracts with junior staff. We get lax in our standard of excellence as employees are aggressively promoted without requisite experience. We get sloppy in execution: delivery as a function of time is simply not a thing, because the business is going to get whatever software we get done when we get it done.

Capital may not be 22% Jimmy Carter-era expensive, but it ain’t cheap right now. Tech has to earn its keep. That means a return to once-familiar practices, as well as change that orchestrates a purge of tech largesse. Business cases with financial returns first, non-financial returns second. Contraction of labor spend: restructuring to offload the overpromoted, and consolidation of roles or lower compensation for specialization. Transparency of what we will deliver when, for what cost, and what the mitigation is should we not. An end to vanity tech investments, because the income statement, much less the balance sheet, can no longer support them.

Some areas of the tech economy will be immune to this for as long as they are thematically relevant. AI and GenAI are TINA (there is no alternative) investments: a lot of firms have no choice but to spend on exploratory investments in AI, because Wall Street rewards imagination and will reward the remotest indication of successful conversion of that imagination that much more. Yet despite revolutionary implications, AI enthusiasm is tempered compared to frothy valuations for tech pursuits of previous generations, a function of investor preference for, as James Mackintosh put it, profits over moonshots. Similarly, in businesses where a tech arms race is on because innovation offers competitive advantage, such as in-car software, it will be business as usual. But these arms races will end, so it will be tech business as usual until it isn’t. (In fact, in North America, this specific arms race may not materialize for a long, long time as EV demand has plateaued, but that’s another blog for another day.)

Tech has had the luxury of not being economically anchored for a long time now. If interest rates settle around 400 bps as the WSJ speculated yesterday, those days are over. The adjustment to a new reality will be long and painful because there’s a generation of people in tech who have not been exposed to economic constraints.

This is the Agile Manager blog, as it has been since I started it in 2006. Good news: this change doesn’t mean a return to the failed policies of waterfall. Agile had figured out how to cope with these economic conditions. Tech may not remember how to use those Agile tools, but it has them in the toolkit. Somewhere.

That said, I also blog about economics and tech. If the Fed funds rate lands in the 400 bps range, tech is in for still more difficult adjustments. More specifically, the longer tech clings to hopes for a return to ultralow interest rates, the longer the adjustment will last, and the more painful it will be.

The ultralow rate party is over. It’s long past time for tech to sober up.

Sunday, March 31, 2024

Don’t queue for the ski jump if you don’t know how to ski

I’ve mentioned before that one of my hobbies is lapidary work. I hunt for stone, cut it, shape it, sand it, polish it, and turn it into artistic things. I enjoy doing this work for a lot of reasons, not least of which is that I approach it every day not with an expectation of “what am I going to complete” but “what am I going to learn.”

As a learning exercise, it is fantastic. I keep a record of what I do on individual stones, on how I configure machines and the maintenance I perform on them, and for the totality of activities I do in the workshop each day. I do this as a means of cataloging what I did (writing it down reinforces the experience) and reflecting on why I chose to do the things that I did. Sometimes it goes fantastically well. Sometimes it goes very poorly, often because I made a decision in the moment that misread a stone, misinterpreted how a tool was functioning, or misunderstood how a substance was reacting to the machining.

My mistakes can be helpful because, of course, we learn from mistakes. I learn to recognize patterns in stone, to recognize when there is insufficient coolant on a saw blade, to keep the torch a few more inches back to regulate the temperature of a metal surface.

But mistakes are expensive. That chunk of amethyst is unique, once-in-a-lifetime; cut it wrong and it’s never-in-a-lifetime. If there isn’t coolant splashing over a stone you’re cutting, you’re melting an expensive diamond-encrusted saw blade. Overheat that stamping to the point where it warps, or cut that half-hard wire to the wrong length, and you’ve just wasted a precious metal that costs (as of this writing) $25+ per ounce for silver, $2,240+ for gold.

Learning from a video or website or a good old-fashioned book is wonderful, but that’s theory. We learn through experience. Whether we like to admit it or not, a lot of experiential learning results in, “don’t do it that way.”

Learning is the human experience. Nobody is omniscient.

But learning can be expensive.

* * *

A cash-gushing company that has been run on autopilot for decades gets a new CEO who determines they employ thousands doing the work of dozens, and since most of these people can’t explain why they do what they do, the CEO concludes there is no reason why, and spots an opportunity to squeeze operations to yield even better cash flows. Backoffice finance is one of those functions, and that’s just accounting, right? That seems like a great place to start. Deploy some fintech and get these people off the payroll already.

Only, nobody really understands why things are the way they are; they simply are. Decades of incremental accommodation and adjustment have rendered backoffice operations extremely complicated, with edge cases to edge cases. Call in the experts. Their arguments are compelling. Surely we can get rid of 17 price discounting mechanisms and have only 2? Surely we can have a dozen sales tax categories instead of 220? Surely we can get customers to pay with a tender other than cash or check? All plausible, but nobody really knows (least of all Shirley). Nobody on the payroll can explain why the expert recommendations won’t work, so the only way to really find out is to try.

Out comes a new, streamlined customer experience with simplified terms, tax and payments. Only, we lose quite a lot of customers to the revised terms, either because (a) two discounting mechanisms don’t really cover 9x% of scenarios like we thought or (b) we’re really lousy at communicating how those two discounts work. We lose transactions beyond that because customers have trust issues sharing bank account information with us. And don’t get me started on the sales tax remittance Hell we’re in now because we thought we could simplify indirect tax.

Ok, we tried change, and change didn’t quite work out as we anticipated. It took us tens of millions of dollars of labor and infrastructure costs to figure out whether these changes would actually work in the first place. Bad news is, they didn’t. Good news is, we know what doesn’t work. Hollow victory, that: a lot of money spent to figure out what won’t work, which by itself doesn’t get us close to what will work. Oh, and by the way: we spent all the money; can we please have more?

Let’s zoom out for a minute. How did we get here? Since the employees don’t really know why they do what they do, and since all this activity is so tightly coupled, what is v(iable) makes the m(inimum) pretty large, leaving us no choice but to run very coarse-grained tests to figure out how to change the business with regard to customer-facing operations that translate into back office efficiencies. Those tests have limited information value: they either work or they do not work. Without a lot of post-test study, we don’t necessarily know why.
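One way to see “limited information value”: a coarse pass/fail test yields at most one bit, no matter how much it cost to run. A small sketch, with illustrative probabilities:

    import math

    def bits(outcome_probabilities):
        # Shannon entropy: the expected information in one test result.
        return -sum(p * math.log2(p) for p in outcome_probabilities if p > 0)

    print(f"Works / doesn't-work test:     {bits([0.5, 0.5]):.1f} bit")
    print(f"Test with 8 distinct outcomes: {bits([1/8] * 8):.1f} bits")

An 8-figure experiment that can only say yes or no buys one bit; learning why it failed takes finer-grained instrumentation.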

This is not to say these coarse tests are without information value. With more investment of labor hours, we learn that there are really four discounting mechanisms, with a side order of optionality for three of them that we need to offer because of nuances in the accounting treatment our customers have to deal with. That’s not two, but still better than the seventeen we started with. And it turns out that with two-factor authentication we can build enough trust for customers to share their banking details with us, so we can get out of the physical cash business. Indirect tax? Well, that was a red herring: the 220 categories previously supported are more accurately 1,943 under the various provincial and state tax codes. Good news is, we have a plan to solve for scaling up (scenarios) and scaling down (we’ll not lose too much money on a sales tax scenario of one).

Of course, we’ll need more money to solve for these things, now that we know what “these things” are.

That isn’t a snarky comment. These are lessons learned after multiple rounds of experiments, each costing 7 or 8 figures, and most of them commercially disappointing. We built it and they didn’t come; they flat-out rejected it. We got it less wrong the second, third, fourth, fifth time around, and eventually we unwound decades of accidental complexity that had become the operating model of both backoffice and customer experience, but that nobody could explain. Given unlimited time and money, we can successfully steer the back office and customers through episodic bouts of change.

Given unlimited time and money. Maybe it took five times, or seven times, or only three. None was free, and each experiment cost seven to eight figures.

* * *

There are a few stones I’ve had on the shelf for many, many years. They are special stones with a lot of potential. Before I attempt to realize that potential, I want to achieve sufficient mastery, to develop the right hypothesis for what blade to use and what planes to cut, for what shape to pursue, for what natural characteristics to leave unaltered and what surfaces to machine. Inquisitiveness (beginner’s mind) twined with experience on similar if more ordinary stones has led me to start shaping some of those special ones, and I’m pleased with the results. But I didn’t start with those.

Knowledge is power as the saying goes, and “learn” is the verb associated with acquiring knowledge. But not all learning is the same. The business that doesn’t know why it does what it does is in a crisis requiring remedial education. There is no shame in admitting this, but of course there is: that middle manager unable to explain why they do the things they do will feel vulnerable because their career has peaked as the “king of the how in the here and now.” Lessons learned from being enrolled in the master class - e.g., being one of the leads in changing the business - will be lost on this person. And when the surrogate for expertise is experimentation, those lessons are expensive indeed.

Leading change requires mastery and inquisitiveness. The former without the latter is dogma. The latter without the former is a dog looking at a chalkboard with quantum physics equations: it’s cute, as Gary Larson pointed out in The Far Side, but that’s the best that can be said for it. When setting out to do something different, map out the learning agenda that will put you in the position of “freely exercising authority”. But first, run some evaluations to ascertain how much “(re-)acquisition of tribal knowledge” needs to be done. There is nothing to prevent you from enrolling in the master class without fluency in the basics, but it is a waste of time and money to do so.

Thursday, February 29, 2024

Patterns of Poor Governance

As I mentioned last month, many years ago I was toying around with a governance maturity model. Hold your groans, please. Turns out there are such things. I’m sure they’re valuable. I’m equally sure we don’t need another. But as I wrote last month, there seemed to be something in my scribbles. Over time, I’ve come to recognize it not as maturity, but as different patterns of bad governance.

The worst case is wanton neglect, where people function without any governance whatsoever. The organizational priority is on results (the what) rather than the means (the how). This condition can exist for a number of reasons: because management assumes the competency and integrity of employees and contractors; because results are exceedingly good and management does not wish to question them; because management does not know the first thing to look for. Bad things aren’t guaranteed to happen in the absence of governance, but very bad things can indeed (Spygate at McLaren F1; rogue traders at Société Générale and UBS). Worse still, the absence of governance opens the door to moral hazard, where individuals gain from risk borne by others. We see this in IT when a manager receives a quid pro quo - anything from a conference pass to a promise of future employment - from a vendor for having signed or influenced the signing of a contract.

Wanton neglect may not be entirely a function of a lack of will, of course: turning a blind eye equals complicity in bad actions when the prevailing culture is “don’t get caught.”

Distinct from wanton neglect is misplaced faith in models, be they plans or rules or guidelines. While the presence of things like plans and guidelines may communicate expectations, they offer no guarantee that reality is consistent with those expectations. By way of example, IT managers across all industries have a terrible habit of reporting performance consistent with plans: the “everything is green for months until suddenly it’s a very deep shade of red” phenomenon. Governance in the form of guidelines is often treated as “recommendations” rather than “expectations” (e.g., “we didn’t do it that way because it seemed like too much work”). A colleague of mine, on reading the previous post in this series, offered up that there is a well-established definition of data governance (DAMA). Yes, there is. The point is that governance is both a noun and a verb; governance “as defined” and “as practiced” are not guaranteed to be the same thing. Pointing to a model and pointing to the implementation of that model in situ are entirely different things. The key defining characteristic here is that governance goes little beyond having a model communicating expectations for how things get done.

Still another pattern of bad governance is governance theater, where there are governance models and people engaged in oversight, but those people do not know how to effectively interrogate what is actually taking place. In governance theater, some governing body convenes and either has the wool pulled over its eyes or simply lacks the will to thoroughly investigate. In regulated industries, we see this when regulators lack the will to investigate despite strong evidence that something is amiss (Madoff). In corporate governance, this happens when a board relies almost exclusively on data supplied by management (Hollinger International). In technology, we see this when a “steering committee” fails to obtain data of its own or lacks the experience to ask pertinent questions of management. Governance theater opens the door to regulatory capture, where the regulated (those subject to governance) dictate the terms and conditions of regulation to the regulators. When governance is co-opted, governance is at best a false positive: it signals that controls are exercised effectively when they are not.

I’m sure there are more patterns of bad governance, and even these patterns can be further decomposed, but these cover the most common cases of bad governance I’ve seen.

Back to the question of governance “maturity”: while there is an implied maturity to these - no controls, aspirational controls, pretend controls - the point is NOT to suggest that there is a progression: i.e., aspirational controls are not a precursor to pretend controls. The point is to identify the characteristics of governance as practiced to get some indication of the path to good governance. Where there is governance theater, closing the gap means reform of existing institutions and practices. Misplaced faith requires creation of institutions and practices: entirely new muscle memories for the organization. Each represents a different class of problem.

The actions required to get into a state of good governance are not, however, an indication of the degree of resistance to change. Headstrong management may put up a lot of resistance to reform of existing institutions, while inexperienced management may welcome creation of governance institutions as filling a leadership void. Just because the governance gap is wide does not inherently mean the resistance to change will be as well.

If you’re serious about governance and you’re aware it’s lacking as practiced today, it is useful to know where you’re starting from and what needs to be done. If you do go down that path, always remember that it’s a lot easier for everybody in an organization - from the most senior executive management to the most junior member of the rank and file - to reject governance reform than to come face to face with how bad things might actually be.