I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Thursday, July 31, 2025

It isn’t whether AI will make developers redundant, productive or distracted; it’s that AI will make code disposable

There’s a lot of hand-wringing going on over the impact of AI in software development. Depending on who you ask, AI (a) will eliminate the need for junior developers; (b) will make highly skilled developers an order of magnitude more productive; or (c) will never level up to a human developer, at best capable of providing suggestions, at worst putting a developer into a premature debug cycle (of the AI’s making) that extends rather than compresses the time to build software.

Most of the arguments to do with AI in software development focus on the process of development, specifically the combination of knowledge (what to do and how to do it), experience (what not to do and how not to do it), and empathy (what is relevant and what is a distraction in a specific problem space). For the skeptics, no AI model can replace these things because the knowledge is incomplete, the experience is actually processed as knowledge, and the empathy is not real. Ergo, AI can augment but cannot replace humans in software development.

This misses the point. The thing about AI writing software is not whether AI writes code as technically sound and functionally fit as a human would. It’s that AI that writes software makes software disposable.

* * *

Prior to WWII, consumer purchases were overwhelmingly financed with cash, and those acquisitions were intended to last for a long time. A consumer saved and spent frugally; an acquired item - from clothing to car - was repaired (usually by the owner) rather than replaced. Similarly, manufacturers raised large sums of capital to build plants and fill them with machines, pledging a portion of future cash flows to pay for it all. The plants and the machines and the things they made were expected to have very long life spans.

The “disposable society” began in the 1950s, when post-WWII excess led to discarding manufactured items - even durable goods - for non-functional reasons (e.g., styles or fashions). This phenomenon was celebrated rather than castigated: no longer did consumers have to make long-term commitments within tight constraints and limited choices. They could live in the moment through spontaneous decisions with few consequences.

Come the 1970s, manufacturers reduced the cost of goods sold by turning to lower-cost labor markets and introducing robots. Materials went cheap too, as plastics replaced metals in component parts. Plastics also weighed less, which reduced transportation costs. In real terms, the cost of manufactured things went down.

Cheaper materials meant weaker material strength meant lower product durability. But durability didn’t matter like it used to: if it broke, it was cheaper to purchase a replacement machine than to service the broken one. Over time, businesses institutionalized this reality: manufacturers and their dealers sold more machines, had less cash and square footage tied up in service parts inventory, and had less need for service technicians on their payroll. It can be argued that today, a machine no longer covered by buyer-paid warranty coverage is a machine that, in the eyes of the manufacturer, is at EOL.

* * *

Custom software has always been a labor-intensive activity, most often financed as a capital cost. From a purely economic perspective - that is, ignoring the non-economic costs of damaged careers, ruptured trust, and all-nighters - software development bears a striking resemblance to low-tech-density economic activity of yore.

For the economic reasons (and likely those non-economic ones), custom software has been treated as a durable good similar to the machines of the past: it has a high cost of acquisition; it’s capitalized over many years; it is maintained and occasionally upgraded; and every ounce of productive use is squeezed out of it, kept in production as bits of functionality are implemented in new solutions that don’t entirely replace the workhorse of old.

The cost of custom software includes the cost of perpetuating, for the life of the asset, a team that holds the contextual knowledge of what, why and how the software does things. This is an insurance policy for the company that pays it and an annuity for the people who receive it. It is a long tail of labor costs.

If there is AI that can produce code - perhaps today only a small method or class - that same AI can reproduce that code. If there is AI that can not only produce code but also has in its library all the previous iterations of the prompts, code, scripts, configuration, data structures and data - captured by the software (transactional) and thrown off by it (observability) - from which to learn, the cost of replacing that software in toto is going to be lower than the cost of maintaining it in situ.
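A minimal sketch of that cost comparison, using entirely hypothetical figures (team size, rebuild cost and rebuild cadence are illustrative assumptions, not data from any real program):

```python
# Illustrative only: hypothetical figures comparing the long tail of maintaining
# custom software in situ against periodically regenerating it with AI, assuming
# the prompts, code, configuration and data are preserved as the durable asset.

def maintain_in_situ(team_annual_cost: float, years: int) -> float:
    """Cumulative cost of keeping a context-holding team for the life of the asset."""
    return team_annual_cost * years

def regenerate_in_toto(rebuild_cost: float, rebuilds: int,
                       data_stewardship_annual: float, years: int) -> float:
    """Cumulative cost of periodically regenerating the software from its history."""
    return rebuild_cost * rebuilds + data_stewardship_annual * years

years = 10
maintain = maintain_in_situ(team_annual_cost=1_200_000, years=years)         # small tail team
regenerate = regenerate_in_toto(rebuild_cost=400_000, rebuilds=3,            # rebuild every ~3 years
                                data_stewardship_annual=150_000, years=years)

print(f"Maintain in situ over {years} years:   ${maintain:,.0f}")
print(f"Regenerate in toto over {years} years: ${regenerate:,.0f}")
```

The specific numbers don’t matter; the structural point is that the long tail of labor is an annual cost, while regeneration is an occasional one anchored to preserved prompts, code and data.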

As long as the data is preserved and the interdependencies with other software are not broken, the software is disposable.

* * *

Before anybody gets arm-wavy that software development is right-brain creative problem solving more than a left-brain deterministic task, consider that a lot of custom software consists of code that is not net-new problem solving (just because it is new to the people on a particular team doesn’t mean it is new to the world); that a lot of the creative aspects are responses to self-made software fashions (e.g., experiences or programming languages) rather than function; and that a lot of the defects - technical and functional - are products of limited human knowledge and imperfect recall of that knowledge.

The question is not how AI will change how we code. The ones over-indexing on “the process” are those dependent on there being longevity to “process”. The shift is in how AI changes the economic nature of software itself: from a durable asset with a high replacement cost to a disposable tool with a low replacement cost.

* * *

Dear reader: as you may know, I am a lapidary artist. Recently, the interest in my artwork has increased to the point that I am going to take a break from the blog for a few months. Whether you agree or disagree with the content, I hope these blog posts have given you things to think about.

Wednesday, April 30, 2025

Consulting is episodic. That works better for consultants than consulting companies.

Consulting is an episodic line of work. That’s great for the individual consultant because of the sheer variety it provides: different industries, companies, people, jobs and problems to solve. You get to see not only how many unique ways companies do similar things, but (if you’re paying attention) you’ll understand why. Plus, every problem calls for different sets of knowledge and skills, meaning you’ll get the chance to learn as much as you already know.

The episodic model has been good for me. It’s given me the opportunity to work in a wide range of industries: commercial insurance; investment and retail banking; wealth management and proprietary trading; commercial leasing; heavy equipment manufacturing; philanthropy. I’ve been able to work on a wide range of problems, from order to cash, shop floor to trading floor, new machines to service parts to used machines and more service parts, raw materials purchasing to finished goods shipping, underwriting to claim filing, credit formation to fraud prevention. Plus, I’ve been able to solve for multiple layers of a single stage of a solution (the experience, the technical architecture, the accounting transactions, the code) as well as the problems unique to different stages (the business case, the development of the solution, the rescue, the rescue of the rescuers, the acceptance testing and certification, the migration and cutover, the cleanup, the crisis room after a bumpy rollout, the celebration for pulling off the implementation everybody said couldn’t be done.)

(Yes, that’s a lot of run-on sentences. It’s been a run-on career.)

At some point, with a company you’ve never worked with in an industry you’ve never been in before, you recognize classes of need that nobody else does. You see things differently. I’d like to believe this is the benefit of working with an experienced consultant.

The episodic nature of the work provides meaningful variety for the consultant provided the consultant does not get staffed in the same role to perform the same task over and over again. Variability of experience evolves the individual; specialization stunts career development. While a person can become valuable as the go-to person for a specific need at a particular moment in time, needs, like fashions, come and go. To wit: ISO 9001 compliance consultants, SAP implementation project managers and Agile coaches don’t command the hourly rates they once did. To the extent that those things have value today, the knowledge of the nouns is more valuable than knowledge of the verbs.

There are career risks and downsides to being in an episodic profession. Consultant labor is part of some other company’s secondary workforce: when economic times get tough, it’s easier for the buyers to cull the consultants than badged employees. Even when the economy is humming along, the contract will eventually run its course and it’s time to move on. The consultant can’t talk publicly about the companies they’ve done business with and must be judicious in anything they do share. And for every company with a toxic culture and petty employees you tolerate because you know you won’t have to work with them for long, there’s a company with a healthy culture and great people you wish you could work with for the rest of your career. But that’s not an option, because you’re valuable to any one client for what you learn by working with many, many others.

Still, on the whole, it’s a great line of work.

* * *

The episodic nature of consulting is great for consulting companies when there is more demand for services than there is supply. This is especially true when that demand is ambitious in nature: the proliferation of affordable personal computing in the 1980s, the commercialization of the internet in the 1990s, the mass adoption of mobile computing in the 2000s, the expansion of cloud computing in the 2010s. Big technology changes meant big spending, and wave after wave didn’t just fuel growth of services; it smoothed out what would otherwise have been more volatile boom and bust cycles for technology services firms.

When the spending pullback comes, consulting companies don’t much like the episodic nature of their business model. The income statement lurches and contracts. A surprise pullback by two or three clients leaves consultant labor idle. A surge in proposals expected to close but not yet closed creates uncertainty as to whether, when, and how many new people to hire. Uncertainty frustrates investors and stresses out managers.

Recurring revenue, on the other hand, means predictability in cash flows, which means fewer ulcers and less frequent boardroom rants. In consulting, recurring revenue comes from the more mundane aspects of technology, things like long term care and feeding of digital infrastructure and legacy software assets. (Software development consultancies have tried to use the “products not projects” mantra to change buying patterns to no avail: it remains episodic.)

Repeatable revenue comes from repeating tasks. While there are episodic events in the delivery of those repeating tasks - incremental improvements to deployment scripts and production monitoring - there is considerably less variability in the nature of the work itself. Repeatability industrializes the work, and with it the workforce. Where labor and employer share the risks of the episodic nature of project-based consulting, labor carries the bulk of the risk of the repeating revenue: the incremental improvements that reduce labor intensity; the codification of tasks that enables the work to shift from a low-cost labor market to a lower-cost labor market, and to a still lower-cost labor market after that.

* * *

The software services industry hopes that AI is the next incarnation of demand for consultant labor. As I wrote last month, AI has not yet proven to be a boon to consulting firms in the same way that cloud, mobile computing, internet and personal computing were.

At the same time, there is a lot of hand wringing over the damage AI has the potential to do to employee development. If AI is ever truly capable of replacing swaths of software developers, the skills development pipeline will thin out considerably. If it is “I do and I learn” as the old saw goes, then there is reason to be concerned with “AI does it for me and I’m none the wiser”. (Again, please do not misunderstand this as a statement of imminent doom for fundamental skill acquisition among junior consultants. Maybe someday, certainly not today.)

But I can’t help but think a preference for recurring over episodic work will have a bigger impact on the development - more accurately, the impairment of the development - of future knowledge workers in consulting firms. Business models built around specialization of knowledge curtail the variety of experiences to which people are exposed. As I wrote above, it isn’t just exposure to a variety of different projects, it’s a variety of different solutions in different stages of their evolution to different problems in different businesses. (Jimmy Collins missed the mark quite badly in that regard. As, of course, did so many among the pantheon of his anointed “greats”.)

* * *

This preference for recurring over episodic income streams is playing out in a lot of industries, including automotive and high tech manufacturing. This portends a curtailment of product innovation and with it, the advancement of capabilities: while a manufacturer may squeeze more functionality out of deployed hardware through services, it will be functionality of convenience more than capability. Simply put, the OEM uses the incumbency it has (through e.g., high switching costs) with captive customers to extract more rent for the solutions it engineered in the past. Loyalty isn’t earned as much as it is exploited.

In consulting, a preference for the recurring over the episodic portends smoother cash flows but a depletion - potentially quite rapid - of competency in its workforce. A consulting firm is a services firm, but a services firm is not necessarily a consulting firm. A services firm may be a supplier of specialized labor, but a specialist and an expert are very different things. A services firm incubates specialists with training and certifications; a consulting firm incubates experts with exposure to diverse clients, projects and responsibilities. The two can cohabitate, but a services firm will have a dominant ethos: provider of utility services or provider of value-generative services. The former supplies specialists to fill roles; the latter supplies experts to frame opportunities and develop solutions.

Companies in growth industries grow by gambling on their smarts. Companies in ex-growth industries grow by extracting more rents from captive customers. A consultancy pursuing rents no longer believes in, and perhaps no longer has, its smarts.

Tuesday, December 31, 2024

Yes, the future is digital. More importantly, it's asset light.

The 20th century company acquired capital assets (property, plant and equipment); employed a large, low-skilled secondary workforce to produce things with that PPE; and employed a small, high-skilled primary workforce to both manage the secondary workforce and administer the PPE. By comparison, the 21st century company rents infrastructure - commercial space, cloud services, computers - and both employs and contracts knowledge workers who collaborate on solving problems.

I want to focus on the capital rather than the labor aspect of this. Unlike its 20th century predecessor, the modern company is capital-light. Companies don’t own the tools their employees use: they rent them (e.g., spreadsheet software), or they buy a tool and then rent access to certain capabilities as those capabilities are used (e.g., machine tools with subscribable capabilities). 21st century companies do raise capital, but it is used primarily as a buffer to absorb losses during the startup years.

As I’ve written before, industrial firms - largely 20th century firms - are under investor pressure to look more like their 21st century counterparts. The (post-startup) 21st century firm has both scale and recurring revenue yielding very attractive margins and cash flows that, because they are light on fixed assets, result in higher returns to investors. For a 20th century firm to make this transition requires balance sheet and income statement restructuring. The balance sheet transformation is straightforward: sell buildings and pay rent to use them; contract for logistics services rather than own and operate a fleet of trucks. The income statement is much harder to change: if you make machines, the biggest bulge in the top line still comes from moving machines. Still, OEMs are trying to sell subscriptions to services on the machines they make, promising revenues in the low double digits of machine sales.

For a 20th century firm to change its income statement requires customer cooperation. That cooperation is not immediately forthcoming.

The OEM value prop to the machine owner/operator consists of at least two parts. One is that the owner/operator only pays for machine capabilities when they’re needed. Another is that subscribable capabilities can overcome the difficulties of replacing skilled labor.

The former is obvious, but to a lot of owner/operators it is of dubious value: the 20th century mindset doesn’t equate a machine tool with a mobile phone, a device on which you pay to install and use apps.

The latter is a structural problem of labor markets. Plenty of firms in construction and manufacturing rely on older machine tools. They rely on skilled labor to use those machines in an expert fashion. These are not knowledge workers in the contemporary sense, but laborers skilled at using the machines, operating them efficiently, effectively, and responsibly. In expert hands, those machines will get the job done quickly, safely, and without damage to the machine itself. As that population of experts ages out, the firms dependent on them are finding it difficult to hire replacement employees. The latter OEM value prop amounts to shifting capability from the operator to the machine itself, which makes it easier for the owner/operator to source (hire and replace) labor. Theoretically, the machine owner/operator should be willing to pay to rent machine capabilities up to the difference between the total compensation for an expert and the total compensation for an employee who is not, and will likely never be, an expert.

For the owner/operator, it means the machine is now both a balance sheet and an income statement phenomenon (beyond MRO costs). Again, theoretically this isn’t a problem as long as the cost of renting machine capabilities is less than the difference in employee compensation between expert and technician. But everybody knows that rents rise, and the owner/operator has less control over OEM fees than it does over negotiating employee comp. When the cost of renting the capability is not sanitized by the compensation gap, supplementing balance sheet impact with income statement impact requires that the owner/operator either (a) have the pricing power to pass those costs on to customers or (b) take the hit on its own margins. Neither is a particularly attractive option.
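A minimal sketch of that break-even, with purely hypothetical compensation figures and fee escalation (none of these numbers come from any actual OEM or labor market):

```python
# Illustrative only: hypothetical numbers for the owner/operator's break-even.
# The subscription pencils out only while the annual fee stays below the gap
# between expert and technician compensation.

expert_comp = 120_000        # total annual compensation for a scarce expert operator
technician_comp = 70_000     # total annual compensation for a non-expert operator
compensation_gap = expert_comp - technician_comp

subscription_fee = 40_000    # year-one cost of renting the machine capability
annual_escalation = 0.08     # OEM raises the rent; comp is renegotiated far more slowly

for year in range(1, 6):
    fee = subscription_fee * (1 + annual_escalation) ** (year - 1)
    verdict = "sanitized by comp gap" if fee <= compensation_gap else "hits margins or prices"
    print(f"Year {year}: fee ${fee:,.0f} vs gap ${compensation_gap:,.0f} -> {verdict}")
```

Under these made-up assumptions the fee crosses the compensation gap within a few years, which is the point: the gap is fixed by labor markets, while the rent is set by the OEM.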

Which points to another unattractive thing about substituting machine capability for employee skill: a greater dependence on the OEM for capability. There’s a fable that a former CEO of Nokia told of the Finnish boy who was outdoors in winter and cold, and discovered he could warm himself by wetting his pants. This, of course, is only effective for a short while and has undesirable consequences. The lack of skill development among employees - and in fact, a limitation on ever developing those skills because of the convenience of reaching for the subscribable capability - increases dependence of the owner/operator on the capability from the OEM. That gifts the OEM a lot of pricing power over the owner/operator.

Consider commercial agriculture. Suppose the subscription price to use the software that optimizes planting and fertilizing is indexed to the cost of seed and fertilizer. Suppose the software makes seed and fertilizer consumption 25% more efficient, so the OEM charges the farmer a metered rate of up to 25% of the volume the farmer would have consumed without the software, times the current market price for seed and fertilizer. Within the narrow definition of that subscription transaction, the economics seem to be fine, but systemically they work against the 20th century farmer. Seed and fertilizer companies will increase prices to compensate for the loss of volume. OEMs can increase the subscription fee as a percentage of savings within a +/- x% or so band to capture more of the farmer’s savings. The bottom line is, any savings will not accrue to the farmer. Worse still, if after all of this, commodity prices are soft, the farmer is screwed. Not out of business, but dependent on economic rescue. (Food security being up there as a national priority with energy security and financial security means governments will pump money into farms just as they often own petroleum companies and always bail out their banking system.)
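To see how little is left over, here is a minimal worked example with entirely hypothetical prices, volumes and capture rates (the 25% efficiency gain is the only number carried over from the scenario above):

```python
# Illustrative only: hypothetical prices and volumes showing how the farmer's
# 25% input savings can be competed away by the OEM fee and input repricing.

baseline_volume = 1_000          # units of seed/fertilizer without the software
unit_price = 100.0               # current market price per unit
efficiency_gain = 0.25           # software reduces consumption by 25%

saved_units = baseline_volume * efficiency_gain
gross_savings = saved_units * unit_price

oem_capture = 0.60               # share of the savings metered back as subscription fee
input_repricing = 0.10           # suppliers raise prices to recover lost volume

subscription_fee = gross_savings * oem_capture
extra_input_cost = baseline_volume * (1 - efficiency_gain) * unit_price * input_repricing

net_to_farmer = gross_savings - subscription_fee - extra_input_cost
print(f"Gross savings:      ${gross_savings:,.0f}")
print(f"OEM subscription:   ${subscription_fee:,.0f}")
print(f"Input repricing:    ${extra_input_cost:,.0f}")
print(f"Net to the farmer:  ${net_to_farmer:,.0f}")
```

With these assumed rates, a $25,000 gross saving shrinks to $2,500 for the farmer; nudge the OEM’s capture band or the suppliers’ repricing and it disappears entirely.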

For OEM subscriptions to truly take off, the owner/operator has to become a 21st century business. In our ag example, in its absolute form, that means putting seed and fertilizer out to bid with the winner agreeing to sell at cost and take a cut of the yield; contracting with an ag operations firm that brings its own machines and personnel; and of course, leasing both land and buildings. In this case, the value prop of subscriptions is not to the farmer, but to the service provider with the equipment performing farm operations.

Admittedly this is a bit extreme. But it is not far fetched. Boeing doesn’t make what it sells… although that hasn’t turned out according to plan and for the time being they’re undoing that by purchasing the manufacturing business they spun out. Ok, bad example. A better example is Apple, which also does not make what it sells, and that has been quite successful. So, similar to how Apple reminds every new owner/operator upon unboxing a product that it was “Designed by Apple in California,” this soybean - the product of the seed, fertilizer, pesticide; of the choice of land on which it was grown; of the cultivation, irrigation, spraying and harvesting; of the storage and transportation - was “Designed by Farmer Bob in Kansas.”

The point is, there’s a mismatch of 20th century OEMs trying to synthesize and export a 21st century business model onto 20th century owner/operators. At present, it’s not a natural fit for either.

And by the way, just as this applies to manufacturing and agriculture, it also applies to households. We’re seeing the groundswell of a transition from owning to renting. While automobile leases have been around for years, interest rates and excessive property values are forcing would-be first-time buyers into renting, possibly forever. An entire property industry is being built around this possibility. An “ownerless” society is far more inclined - and far better prepared - to manage a myriad of subscriptions than an “ownership” society is. This transition suggests automakers will gradually find it easier to sell subscriptions to things beyond satellite radio.

We’ve come a long way since the 1970s, when the courts ruled households didn’t have to rent a phone from Ma Bell but could buy a telephone outright and plug it into the AT&T network. Since landline telephones lasted decades, ownership of the tool freed up income that had previously been committed to a subscription for a phone. Today, of course, AT&T (among others) has effectively found its way back to that model.

Turns out it was back to the future all along. It isn’t so much that the future is digital. It’s that it is asset light.

Saturday, November 30, 2024

Industrial firms are struggling with policy change. They can be designed to respond to change.

News media have been trying to interpret the economic and commercial ramifications that will come about as a result of the US elections earlier this month. How will tariffs be used in policy and what will that mean to consumer goods prices and manufacturing supply chains? What are the risks to industrial contractors of anticipated cuts in federal government spending? How will regulations change in areas like telecommunications and emissions? How will bond markets price 10 year Treasurys?

No doubt, industrial firms are facing highly disruptive policy changes. But if we zoom out for a minute, highly disruptive policy changes are the norm. Emissions, finance, energy, telecommunications, trade, healthcare and lots of other areas have been subject to significant regulatory change in the last two decades. To wit: when adding 70,000 pages to the Federal Register is considered a light year for new regulations, policy change is the norm, not the exception. Add to that non-policy sources of volatility - labor strikes, electrical blackouts, markets that failed to materialize, armed combat - and it is accurate to say that industrial firms have been subject to non-stop, if not increasing, volatility in their operating environments.

* * *

Wall Street rewards consistency in free cash flows above all else. Consistency in cash flows mollifies bond markets, which gives equity investors confidence that there will be ample cash for distributions through buybacks and dividends.

In manufacturing companies, strong operating cash flows are achieved through highly efficient production processes, from supply chain to transportation. Just-in-time inventory management is one of these practices. JIT flatters the balance sheet by minimizing cash tied up in raw materials inventory and in Property, Plant and Equipment (warehouse space) to hold that inventory. As implemented, though, JIT creates tight coupling within a production system: a hiccup in fulfillment from a supplier interferes with the efficiency of the entire production process (e.g., Boeing parking work-in-process in what is actually an employee car park due to a lack of fasteners earlier this year).

In short, industrial firms can throw off copious amounts of cash, but their processes - implemented as tightly integrated, complex systems - are fragile. Nassim Taleb pointed out this same phenomenon in financial markets: interlocking dependencies create systemic fragility. By way of example, the grounding of the Ever Given looked like a black swan event, but it was not: the problem wasn’t a global transportation problem, but a lack of robustness in end-to-end production processes themselves.

* * *

The more rigid the underlying processes, the more acute the need for external stability. Right now, uncertainty about policy change is creating external instability, rendering internal decisions about supply chain, shop floor, distribution and capital investment difficult to model, let alone make.

If constant volatility from one source or another is the new norm, "optimization" in manufacturing is no longer as simple as securing timely delivery of raw material inputs, squeezing labor productivity, and designing production plans around cheaper energy prices. Nor is optimization easily protected through crude contingency plans like holding excess raw materials as a hedge against supply chain disruption. An optimized production system must be not just tolerant to but accommodative of volatility.

Contemporary manufacturing operating systems solve for this.

  • Digital twins enable production modeling, simulation of disruptive events, and modeling of production responses to combinations of disruptive events (a toy sketch follows this list).
  • Adaptive manufacturing - software-defined production that integrates design with digital printing and robotic assembly - accelerates research and development and reduces the friction created by new product introduction (NPI).
  • Flexline manufacturing allows Porsche to switch from making a combustion vehicle to an electric vehicle to a hybrid vehicle, in any sequence, all on the same line. The line is orchestrated with autonomous guided vehicles and does not require retooling or reconfiguration.
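To make the digital twin point concrete, here is a minimal, purely illustrative sketch - a toy production line with made-up supply, demand and disruption rates, not any vendor’s twin platform - that simulates disruptive supply events and compares production responses (buffer sizes):

```python
# Illustrative only: a toy "digital twin" of a supply-fed production line, used to
# simulate random supplier disruptions and compare production responses (buffer sizes).
import random

def simulate(buffer_capacity: int, days: int = 250, disruption_rate: float = 0.10,
             daily_supply: int = 105, daily_demand: int = 100, seed: int = 42) -> int:
    """Return total units produced over `days`, given a probability that a day's
    inbound supply fails to arrive and an on-site buffer to absorb the shock."""
    rng = random.Random(seed)
    buffer = buffer_capacity                  # start with a full buffer of parts
    produced = 0
    for _ in range(days):
        inbound = 0 if rng.random() < disruption_rate else daily_supply
        available = buffer + inbound
        built = min(available, daily_demand)  # can't build more than demand
        produced += built
        buffer = min(available - built, buffer_capacity)  # carry over, capped at buffer size
    return produced

for buffer_capacity in (0, 100, 300):         # pure JIT vs modest vs generous buffers
    output = simulate(buffer_capacity)
    print(f"Buffer {buffer_capacity:>3} units -> {output:,} units produced "
          f"({output / (250 * 100):.1%} of demand)")
```

Even a toy like this makes the trade visible: the zero-buffer (pure JIT) configuration flatters the balance sheet but forfeits output to every disruption, while modest buffers give up a little efficiency for a lot of resilience.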

“Optimization” in a volatile world prioritizes resiliency over efficiency.

* * *

Wall Street gives a pass to companies when operations underperform due to external forces, because external forces are outside the control of the company. CEOs are graded on how well the company reacted to external disruption. But at some point, equity analysts and activist investors will figure out that manufacturing operations are unnecessarily vulnerable to external shocks. Why is the company not sufficiently resilient to take more of these changes in stride? At how many AGMs will we hear the same excuses?

There is need and opportunity to invest, but the climate isn’t conducive to investment. These are tech-heavy investments, and tech is still paying for largess during the immediate pre-COVID years, when CEOs were fired for showing insufficient imagination for how to spend cheap capital to digitally disrupt their industry. Unfortunately, a post-mortem analysis on that era exposes that not only did too many of the investments made during that time come to naught, the propensity to use contract labor and subsequent employee turnover meant no intangible benefit like institutional learning materialized (even of the "we know what doesn’t work" variety). They were just boondoggles that vaporized cash.

Tech has a bruised reputation and capital is more pricey now, just in time for manufacturing to find itself at a crossroads. The intrinsic sclerosis of legacy manufacturing operations forces industrial firms to react to external changes. If they had intrinsic flexibility, they could respond rather than be forced to simply react. With volatility the new norm, tech investments into modern manufacturing processes and technology are a pretty good bet.

A good bet, but with competing gamblers. Tech ("with your money, and our ability to spend your money…") and legacy manufacturing (fixed production) have to figure out how to partner with capital (10 year Treasurys are north of 400 bps) to make it a profitable bet. There’s a visible win, but the CIO, CTO, COO and CFO have to get out of the way.

Sunday, March 31, 2024

Don’t queue for the ski jump if you don’t know how to ski

I’ve mentioned before that one of my hobbies is lapidary work. I hunt for stone, cut it, shape it, sand it, polish it, and turn it into artistic things. I enjoy doing this work for a lot of reasons, not least of which being I approach it every day not with an expectation of “what am I going to complete” but “what am I going to learn.”

As a learning exercise, it is fantastic. I keep a record of what I do on individual stones, on how I configure machines and the maintenance I perform on them, and for the totality of activities I do in the workshop each day. I do this as a means of cataloging what I did (writing it down reinforces the experience) and reflecting on why I chose to do the things that I did. Sometimes it goes fantastically well. Sometimes it goes very poorly, often because I made a decision in the moment that misread a stone, misinterpreted how a tool was functioning, or misunderstood how a substance was reacting to the machining.

My mistakes can be helpful because, of course, we learn from mistakes. I learn to recognize patterns in stone, to recognize when there is insufficient coolant on a saw blade, to keep the torch a few more inches back to regulate the temperature of a metal surface.

But mistakes are expensive. That chunk of amethyst is unique, once-in-a-lifetime; cut it wrong and it’s never-in-a-lifetime. If there isn’t coolant splash over a stone you’re cutting, you’re melting an expensive diamond-encrusted saw blade. Overheat that stamping to a point where it warps, or cut that half hard wire to the wrong length, and you’ve just wasted a precious metal that costs (as of today’s writing) $25+ per ounce for silver, $2,240+ for gold.

Learning from a video or website or a good old-fashioned book is wonderful, but that’s theory. We learn through experience. Whether we like to admit it or not, a lot of experiential learning results in, “don’t do it that way.”

Learning is the human experience. Nobody is omniscient.

But learning can be expensive.

* * *

A cash-gushing company that has been run on autopilot for decades gets a new CEO who determines it employs thousands doing the work of dozens, and since most of these people can’t explain why they do what they do, the CEO concludes there is no reason why, and spots an opportunity to squeeze operations to yield even better cash flows. Backoffice finance is one of those functions, and that’s just accounting, right? That seems like a great place to start. Deploy some fintech and get these people off the payroll already.

Only, nobody really understands why things are the way they are; they simply are. Decades of incremental accommodation and adjustment have rendered backoffice operations extremely complicated, with edge cases to edge cases. Call in the experts. Their arguments are compelling. Surely, we can get rid of 17 price discounting mechanisms and only have 2? Surely, we can have a dozen sales tax categories instead of 220? Surely, we can get customers to pay with a tender other than cash or check? All plausible, but nobody really knows (least of all Shirley). Nobody on the payroll can explain why the expert recommendations won’t work, so the only way to really find out is to try.

Out comes a new, streamlined customer experience with simplified terms, tax and payments. Only, we lose quite a lot of customers to the revised terms, either because (a) two discounting mechanisms don’t really cover 9x% of scenarios like we thought or (b) we’re really lousy at communicating how those two discounts work. We lose transactions beyond that because customers have trust issues sharing bank account information with us. And don’t get started on the sales tax remittance Hell we’re in now because we thought we could simplify indirect tax.

Ok, we tried change, and change didn’t quite work out as we anticipated. It took us tens of millions of dollars of labor and infrastructure costs to figure out if these changes would actually work in the first place. Bad news is, they didn’t. Good news is, we know what doesn’t work. Hollow victory, that. That’s a lot of money spent to figure out what won’t work. By itself, that doesn’t get us close to what will work. Oh and by the way, we spent all the money, can we please have more?

Let’s zoom out for a minute. How did we get here? Since the employees don’t really know why they do what they do, and since all this activity is so tightly coupled, what is v(iable) makes the m(inimum) pretty large, leaving us no choice but to run very coarse-grained tests to figure out how to change the business with regard to customer-facing operations that translate into back office efficiencies. Those tests have limited information value: they either work or they do not work. Without a lot of post-test study, we don’t necessarily know why.

This is not to say these coarse tests are without information value. With more investment of labor hours, we learn that there are really four discounting mechanisms, with a side order of optionality we need to offer for three of them because of nuances in the accounting treatment our customers have to deal with. That’s not two, but it’s still better than the nineteen we started with. And it turns out that with two-factor authentication we can build the trust with customers to share their banking details so we can get out of the physical cash business. Indirect tax? Well, that was a red herring: the 220 categories previously supported are more accurately 1,943 under the various provincial and state tax codes. Good news is, we have a plan to solve for scaling up (scenarios) and scaling down (we’ll not lose too much money on a sales tax scenario of one).

Of course, we’ll need more money to solve for these things, now that we know what “these things” are.

That isn’t a snarky comment. These are lessons learned after multiple rounds of experiments, each costing 7 or 8 figures, and most of them commercially disappointing. We built it and they didn’t come; they flat out rejected it. We got it less wrong the second, third, fourth, fifth time around, and eventually we unwound decades of accidental complexity that had become the operating model of both backoffice and customer experience, but that nobody could explain. Given unlimited time and money, we can successfully steer the back office and customers through episodic bouts of change.

Given unlimited time and money. Maybe it took five times, or seven times, or only three. None was free, and each experiment cost seven to eight figures.

* * *

There are a few stones I’ve had on the shelf for many, many years. They are special stones with a lot of potential. Before I attempt to realize that potential, I want to achieve sufficient mastery, to develop the right hypothesis for what blade to use and what planes to cut, for what shape to pursue, for what natural characteristics to leave unaltered and what surfaces to machine. Inquisitiveness (beginner’s mind) twined with experience on similar if more ordinary stones have led me to start shaping some of those special ones, and I’m pleased with the results. But I didn’t start with those.

Knowledge is power as the saying goes, and “learn” is the verb associated with acquiring knowledge. But not all learning is the same. The business that doesn’t know why it does what it does is in a crisis requiring remedial education. There is no shame in admitting this, but of course there is: that middle manager unable to explain why they do the things they do will feel vulnerable because their career has peaked as the “king of the how in the here and now.” Lessons learned from being enrolled in the master class - e.g., being one of the leads in changing the business - will be lost on this person. And when the surrogate for expertise is experimentation, those lessons are expensive indeed.

Leading change requires mastery and inquisitiveness. The former without the latter is dogma. The latter without the former is a dog looking at a chalkboard with quantum physics equations: it’s cute, as Gary Larson pointed out in The Far Side, but that’s the best that can be said for it. When setting out to do something different, map out the learning agenda that will put you in the position of “freely exercising authority”. But first, run some evaluations to ascertain how much “(re-)acquisition of tribal knowledge” needs to be done. There is nothing to prevent you from enrolling in the master class without fluency in the basics, but it is a waste of time and money to do so.

Monday, July 31, 2023

Resistance

Organizational change, whether digital transformation or simple process improvement, spawns resistance; this is a natural human reaction. Middle managers are the agents of change, the people through whom change is operationalized. The larger the organization, the larger the ranks of middle management. It has become commonplace among management consultants to target middle management as the cradle of resistance to change. The popular term is “the frozen middle”.

There is no single definition of what a frozen middle is, and in fact there is quite a lot of variation among those definitions. Depending on the source, the frozen middle is:

  • an entrenched bureaucracy of post-technical people with no marketable skills who only engage in bossing, negotiating, and manipulating organizational politics - change is impossible with the middle managers in situ today
  • an incentives and / or skills deficiency among middle managers - middle managers can be effective change agents, but their management techniques are out of date and their compensation and performance targets are out of alignment with transformation goals
  • a corporate culture problem - it’s safer for middle managers to do nothing than to take risks, so working groups of middle managers respond to change with “why this can’t be done” rather than “how we can do this”
  • not a middle management problem at all, but a leadership problem: poor communication, unrealistic timelines, thin plans - any resistance to change is a direct result of executive action, not middle management

The frozen middle is one of these, or several of these, or just to cover all the bases, a little bit of each. Of course, in any given enterprise they’re all true to one extent or another.

Plenty of people have spent plenty of photons on this subject, specifically articulating various techniques for (how clever) “thawing” the frozen middle. Suggestions like “upskilling”, “empowerment”, “champion influencers of change”, “communicate constantly”, and “align incentives” are all great, if more than a little bit naive. Their collective shortcoming is that they deal with the frozen middle as a problem of the mechanics of change. They ignore the organizational dynamics that create resistance to change among middle management in the first place.

Sometimes resistance is a top-down social phenomenon. Consider what happens when an executive management team is grafted onto an organization. That transplanted executive team has an agenda to change, to modernize, to shake up a sleepy business and make it into an industry leader. It isn’t difficult to see this creates tensions between newcomers and long-timers, who see one another as interlopers and underperformers. Nor is it difficult to see how this quickly spirals out of control: executive management that is out of touch with ground truths; middle management that fights the wrong battles. No amount of “upskilling” and “communication” with a side order of “empowerment” is going to fix a dystopian social dynamic like this.

It is interesting that the advice of the management consultant is to align middle management’s performance metrics and compensation with achievement of the to-be state goals. What the consultants never draw attention to is executive management receiving outsized compensation for as-is state performance; compensation isn’t deferred until the to-be state goals are demonstrably realized. Plenty of management consultants admonish executives for not “leading by example”; I’ve yet to read any member of the chattering classes admonish executives to be “compensated by example”.

There are also bottom-up organizational dynamics at work. “Change fatigue” - apathy resulting from a constant barrage of corporate change initiatives - is treated as a problem created by management that management can solve through listening, engagement, patience and adjustments to plans. “Change skepticism” - doubts expressed by the rank-and-file - is treated as an attitude problem among the rank-and-file that is best dealt with by management through co-opting or crowding out the space for it. That is unfortunate, because it ignores the fact that change skepticism is a practical response: the long-timers have seen the change programs come and seen the change programs go. The latest change program is just another that, if history is any guide, isn’t going to be any different than the last. Or the dozen that came and went before the last.

The problematic bottom up dynamic to be concerned with isn’t skepticism, but passivity. The leader stands in front of a town hall and announces a program of change. Perhaps 25% will say, this is the best thing we’ve ever done. Perhaps another 25% will say, this is the worst thing we’ve ever done. The rest - 50% plus - will ask, “how can I not do this and still get paid?” The skeptic takes the time and trouble to voice their doubts; management can meet them somewhere specific. It is the passengers - the ones who don’t speak up - who represent the threat to change. The management consultants don’t have a lot to say on this subject either, perhaps because there is no clever platitude to cure the apathy that forms what amounts to a frozen foundation.

Is middle management a source of friction in organizational change? Yes, of course it can be. But before addressing that friction as a mechanical problem, think first about the social dynamics that create it. Start with those.

Friday, March 31, 2023

Competency Lost

The captive corporate IT department was a relatively early adopter of Agile management practices, largely out of desperation. Years of expensive overshoots, canceled projects, and poor quality solutions gave IT not just a bad reputation, but a confrontational relationship with its host business. The bet on Agile was successful and, within a few years, the IT organization had transformed itself into a strong, reliable partner: transparency into spend, visibility into delivery, high-quality software, value for money.

Somewhere along the way, the “products not projects” mantra took root and, seeing this as a logical evolution, the captive IT function decided to transform itself again. The applications on the tech estate were redefined as products and assigned delivery teams responsible for them, with Product Owners in the pivotal position of defining requirements and setting priorities. Product Owners were recruited from the ranks of the existing Business Analysts and Project Managers. Less senior BAs became Product Managers, while those Project Managers who did not become part of the Product organization were either staffed outside of IT or coached out of the company. The Program Management Office was disbanded in favor of a Product Portfolio Management Office with a Chief Product Officer (reporting to the CIO) recruited from the business. Iterations were abandoned in favor of Kanban and continuous deployment. Delivery management was devolved, with teams given the freedom to choose their own product and requirements management practices and tools. With capital cheap and cashflows strong, there was little pressure for cost containment across the business, although there was a large appetite for experimentation and exploration.

As job titles with “Product” became increasingly popular, people with work experience in the role became attractive hires - and deep-pocketed companies were willing to pay up for that experience. The first wave of Product Owners and Managers were lured away within a couple of years. Their replacements weren’t quite as capable: what they possessed in knowledge of the mechanical process of product management they lacked in the fundamentals of Agile requirements definition. These new recruits also had an aversion to getting deeply intimate with the domain, preferring to work on “product strategy” rather than the details of product requirements. In practice, product teams were “long lived” in structure only, not in the institutional memory and capability that matter most.

It wasn't just the product team that suffered from depletion.

During the project management years of iterative delivery, something was delivered every two weeks by every team. In the product era, the assertion that "we deploy any time and all the time" masked the fact that little of substance ever got deployed. The logs indicated software was getting pushed, but more features remained toggled off than on. Products evolved, but only slowly.

Engineering discipline also waned. In the project management era, technical and functional quality were reported alongside burn-up charts. In the product regime, these all but disappeared. The assumption was that they had solved their quality problems with Agile development practices, that quality was an internal concern of the team, and that it was primarily the responsibility of developers.

The hard-learned software delivery management practices simply evaporated. Backlog management, burn-up charts, financial (software investment) analysis and Agile governance practices had all been abandoned. Again, with money not being a limiting factor, research and learning were prioritized over financial returns.

There were other changes taking place. The host business had settled into a comfortable, slow-growth phase: provided it threw off enough cash flow to mollify investors, the executive team was under no real pressure. IT had gone from justifying every dollar of spend based on returns to being a provider of development capacity at an annual rate of spend. The definition of IT success had become self-referential: the number and frequency of product deployments and features developed, with occasional verbatim anecdotes that highlighted positive customer experiences. IT’s self-directed OKRs were indicators of activity - increased engagement, less customer friction - but not rooted in business outcomes or business results.

The day came when an ambitious new President / COO won board approval to rationalize the family of legacy products into a single platform to fuel growth and squeeze out inefficiency. The board signed up provided they stayed within a capital budget, could be in market in less than 18 months, and could fully retire legacy products within 24 months, with bonuses indexed to every month they were early.

About a year in, it became clear delivery was well short of where it needed to be. Assurances that everything was on track were not backed up by facts. Lightweight analysis led to analysis work being borne by developers; lax engineering standards resulted in a codebase that required frequent, near-complete refactoring to respond to change; inconsistency in requirements management meant there was no way to measure progress, or change in scope, or total spend versus results; self-defined measures of success meant teams narrowed the definition of "complete", prioritizing the M at the expense of the V to meet a delivery date.

* * *

The sharp rise of interest rates has made capital scarce again. Capital intensive activities like IT are under increased scrutiny. There is less appetite for IT engaging in research and discovery and a much greater emphasis on spend efficiency, delivery consistency, operating transparency and economic outcomes.

The tech organization that was once purpose-built for these operating conditions may or may not be prepared to respond to these challenges again. The Agile practices geared for discovery and experimentation are not necessarily the Agile practices geared for consistency and financial management. Pursuing proficiency in new practices may also have come at the cost of proficiency in those previously mastered. Engineering excellence evaporates when it is deemed the exclusive purview of developers. Quality lapses when it is taken for granted. Delivery management skills disappear when tech’s feet aren’t held to the fire of cost, time and, above all, value. Domain knowledge disappears when it walks out the door; rebuilding it is next to impossible when requirements analysis skills are deprioritized or outright devalued.

The financial crisis of 2008 exposed a lot of companies as structurally misaligned for the new economic reality. As companies restructured in the wake of recession, so did their IT departments. Costly capital has tech in recession today. The longer this condition prevails, the more tech captives and tech companies will need to restructure to align to this new reality.

As most tech organizations have been down this path in recent memory, restructuring should be less of a challenge this time. In 2008, the tech playbook for the new reality was emerging and incomplete. The tech organization not only had to master unfamiliar fundamentals like continuous build, unit testing, cloud infrastructure and requirements expressed as Stories, but improvise to fill in the gaps the fundamentals of the time didn’t cover, things like vendor management and large program management. Fifteen years on, tech finds itself in similar circumstances. Mastering the playbook this time round is regaining competency lost.

Thursday, June 30, 2022

The New New New Normal

My blogs in recent months have focused on macroeconomic factors affecting tech, primarily inflation and interest rates and the things driving them: increased labor power, supply shortages, expansion of M2, and unabated demand. The gist of my arguments has been that although the long-term trend still favors tech (tech can reduce energy intensity as a hedge against energy inflation, and reduce labor intensity as a hedge against labor inflation, and so forth), there is no compelling investment thesis at this time, because we’re in a state of both global and local socio-economic transition and there is simply too much uncertainty. Five year return horizons are academic exercises in wishful thinking. Do you know any business leader who, five years ago, predicted with any degree of accuracy the economic conditions we face today and the conditions we experienced on the way to where we are today?

It is interesting how the nature of expected lasting economic change has itself changed in the last 2+ years.

A little over two years ago, there was the initial COVID-induced shock: what does a global pandemic mean to market economies? That was answered quickly, as the wild frenzy of adaptation made clear that supply in most parts of the economy would find a way to adapt, and demand wasn’t abating. Tech especially benefited as it was the enabler of this adaptation. Valuations ran wild as demand and supply quickly recovered from their initial seizures. Tech investments quickly became clear-cut winners.

As events of the pandemic unfolded, the question then became, "how will economies be permanently changed as a result of changes in business, consumer, labor, capital and government behavior?" The longer COVID policies remained in place, the more permanent the adaptations in response to them would become. For example, why live in geographic proximity to a career when one can pursue a career while living in geographic proximity to higher quality of life? Many asked this and similar questions, but not all did; among those that did, not all answered in the same way. This created an inevitable friction in the workforce. Not a year into the pandemic and the battle lines over labor policies were already being drawn between those with an economic interest in the status quo ante calling for a return to office (e.g., large banks) and those looking to benefit from improved access to labor and lower cost base embracing a permanent state of location independence (e.g., AirBNB). Similar fault lines appeared in all sorts of economic activity: how people shop (brick-and-mortar versus online), how people consume first-run entertainment (theaters versus streaming), how people vacation, and on and on. Tech stood to benefit from both lasting pandemic-initiated change (as the enabler of the new) and the friction between the new and reversion to pre-pandemic norms (as the enabler of compromise - that is, hybrid - solutions). Tech investments again were winners, even if the landscape was a bit more polarized and muddled.

Just as the battles to define the soon-to-be-post-COVID normal were gearing up for consumers and businesses and investors, they were eclipsed by more significant changes that make economic calculus impossible.

First, inflation is running amok in the US for the first time in decades. While it is tame by historic US and global standards, US voters have become accustomed to low inflation, and high inflation creates political impetus to respond. Policy responses to inflation have not historically been benign: by way of example, the US only brought runaway 1970s inflation (in fact, it was stagflation - high unemployment and high inflation) under control with a hard economic landing in the form of a series of recessions in the late 1970s and early 1980s. With the most recent interest rate hike, recession expectations have increased among economists and business leaders. Mild or severe is beside the point: twelve months ago, while much of the economy recovered and some sectors even prospered, recession was not seen as a near-term threat. It is now. Go-go tech companies have particularly felt the brunt of this, as their investors’ mantra has done an abrupt volte-face from “grow” to “conserve cash”. Tech went from unquestioned winner to loser just on the merits of policy responses to inflation alone.

Second, war is raging in Europe, and that war has global economic consequences. Both Ukraine and Russia are major exporters of raw materials such as agricultural products and energy. A number of nations across the globe have prospered in no small part because of their ability to import cheap energy and cheap food, allowing them to concentrate on building export industries in expensive engineering services and expensive manufactured products. Those nations have also had the luxury of time to chart a public policy course for evolving their economies toward things like renewable energy sources without disrupting major sectors of the population with things like unemployment, while domestic social policy has benefited from a "peace dividend" of needing to spend only minimally on defense. The prosperity of many of those countries is now under threat as war forces a re-sourcing of food and energy supplies and threatens a deprioritization of social policies. Worse still, input cost changes threaten the competitiveness of their industrial champions, particularly vis-a-vis companies in nations that can continue to do business with an aggressor state in Europe. The bottom line is, the economic parameters we've taken for granted for decades can no longer factor into return-on-investment models. Tech as an optimizer and enabler of a better future is of secondary importance when countries are scrambling to figure out how to make sure there are abundant, cheap resources for people and production.

Tech went from darling to dreadful rather quickly.

It’s worth bearing in mind that these recent macro pressures could abate quite suddenly. Recovery from a recession in the real economy tends to be far faster than recovery from a recession in the financial economy. Such a recovery - notwithstanding the possibility of secular stagnation - would bring the economic conversation back to growth in short order. Additionally, regardless of the outcome, should the war in Europe end abruptly, realpolitik dictates a return to business-as-usual, which would mean a quick rehabilitation of Russia from pariah state to global citizen among Western nations. However, the longer these macro conditions last, the more they fog the investment horizon for any business.

Which brings us back to the investing challenge we have today. In the current environment, an investment in tech is not a bet on how well it will perform under a relatively stable set of parameters, such as pursuing steady growth or reducing costs against stable demand. A tech investment today is a bet on how well its means (the mechanisms of delivering that investment) and its ends (the outcomes it will achieve) anticipate the state of the world during its delivery and its operation. That's not simple when so many things are in flux. We’re on our third “new normal” in two years. There is no reason to think a stable new normal is in the offing any time soon.

Saturday, April 30, 2022

Has Labor Peaked?

I wrote some time ago that labor is enjoying a moment. New working habits, developed out of necessity during the pandemic, in many ways increased quality of life for knowledge workers. Meanwhile, an expansion of job openings and a contraction in the labor participation rate created a supply-demand imbalance that favored labor.

There appears to be confusion of late as to how to read labor market dynamics. With fresh unionization wins and increased corporate commitment to location-independent working, is labor power increasing? Or, with a declining economy and more people returning to the workplace (as evidenced by increases in the labor participation rate), is labor power near its peak?

The question, has labor peaked?, intimates a return to the mean, specifically that labor power will revert to where it was pre-pandemic (i.e., “workers won’t continue to enjoy so much bargaining power”). The argument goes that fewer people have left the workforce than have quit jobs for better ones; that hiring rates have increased along with exits; that the labor participation rate has ticked up slightly; that labor productivity has increased (thus lessening the need for labor); and that demand is cooling (per Q1 GDP numbers). Toss in 1970s-sized inflation compelling retirees to return to the workforce and there’s an argument to be made that labor’s advantages will be short-lived.

But this argument is purely economic, focusing on scarcity in the labor market that has created wage pressure. For one thing, it ignores potential structural economic changes yet to play out, such as the decoupling of supply chains in the wake of new geopolitical realities. For another, it ignores real structural changes in the labor market itself, things like labor demographics (migrations from high-tax to low-tax states), increased workplace control by the individual laborer (less direct supervision when working from home), and improvements in work/life balance.

The question, has labor peaked?, becomes relevant only when there is an outright contraction in the job market. For now, the better question to ask is: how durable are the changes in the relationship between employers and employees? It isn’t so much that labor has the upper hand as that labor has more negotiating levers than it did just a few years ago. The fact that there hasn’t been a mad rush to return to pre-pandemic labor patterns suggests employers are responding to structural changes in labor market dynamics.

Trying to call a peak in labor power is an exercise wide of the mark. And for now, the more important question - how durable these changes are - still seems some way off from being settled.

Tuesday, November 30, 2021

Do we need IT Departments?

The WSJ carried a guest analysis piece on Monday proclaiming the need to eliminate the IT department. While the headline was meant to grab attention, the proposition is not new.

Twenty years ago, the argument for eliminating the IT function went like this: while IT was once a differentiator that drove internal efficiency, it was clearly evolving into utility services that could be easily contracted. And certainly, even in the early 2000s, the evidence of this trend was already clear: a great many functions (think eMail and instant messaging solutions) and a great many services (think software development and helpdesk roles) could be fully outsourced. Expansive IT organizations are unnecessary if tech is codified, standardized and operationalized to a point of being easily metered, priced and purchased by hourly unit of usage.

While the proponents of disbanding IT got it right that today’s differentiating tech is destined to become tomorrow’s utility, they missed the fact that tomorrow will bring another differentiating tech that must be mastered and internalized before it matures and is utilified. Proponents of eliminating the IT function also ignored the fact that metered services - particularly human services - have to be kept on a short leash lest spend get out of hand. That requires hands-on familiarity with the function or the service being consumed, not just familiarity with contract administration.

The belief that enterprise IT departments should be disbanded is back again. This time around, the core of the argument is that a siloed IT organization is an anachronism in an era when all businesses are not just consumers of tech but must become digital businesses. There is merit in this. Enterprise IT is an organization-within-an-organization that imperfectly mirrors its host business. IT adds bureaucracy and overhead; hires for jobs devoid of the host business’ context; and by definition foments an arms-length relationship between “the business” and “IT” that stymies collaboration and cooperation and, subsequently, solution cohesiveness. Not a strong value prop by today’s standards.

Today, [insert-your-favorite-service-name-here]-aaS has accelerated the utilification of IT even further than most could imagine two decades ago. And, or so the argument goes, modern no-code / low-code programming environments obviate the need for corporate IT functions to hire or contract traditional software developers. Higher-level languages with which non-software engineers can create solutions reduce the traditional friction between people in the traditional roles of “business” and “IT”.

Best of all, there is a reference implementation for disbanding centralized IT: the modern digital-first firm. While a digital-first firm may have a centralized techops function to set policies and to procure and administer utility services, it is the product teams - hybrids of business and tech knowledge workers - that create the digital solutions that run the business.

If you had the luxury of starting a large enterprise from scratch in Q4 2021, you would have small centralized teams to create and evolve platform capabilities and standards, from cloud infrastructure to UX design, while independent product teams staffed with hybrid business and technology knowledge workers build solutions on top of that platform. The no-code / low-code tech notwithstanding (it tends to yield more organizational sclerosis and less sustainable innovation, but that’s a post for another day), this is a destination state many of us in the tech industry have advocated for years.

So why not model legacy enterprise IT this way?

Why not? Because enterprise IT isn’t the problem. I wrote above that enterprise IT is an imperfect mirror of its host organization. However, the converse is not also true: the host business is not an imperfect mirror of its enterprise IT function. In the same way, enterprise IT is a reflection of an enterprise problem; the enterprise problem is not a reflection of an IT problem.

Companies large and small have been reducing equity financing in favor of debt for over a decade-and-a-half now. A company with a highly-leveraged capital structure runs operations to maximize cash flow. That makes the debt easily serviceable (high debt rating == low coupon), which, in turn, creates cash that can be returned to equity holders in the form of buybacks and dividends. Maximizing cash flows from operations is not the goal of an organization designed for continuous learning, one that moves quickly, makes mistakes quickly, and adapts quickly. Maximizing cash flow is the goal of an organization designed for highly efficient, repetitive execution.

The "product operating model" of comingled business and tech knowledge workers requires devolved authority. Devolved authority is contrary to the decades-long corporate trend of increased monitoring and centralized control to create predictability, and consolidated ownership to concentrate returns. Devolved decision-making is anathema to just about every large corporate.

Framing this as an “IT phenomenon” is the tail wagging the dog. As I wrote above, enterprise IT is an imperfect reflection of its host organization. Enterprise IT is a matrix-within-a-matrix, with some parts roughly aligned with business functions (teams that support specific P&Ls, others that support business shared services such as marketing), while other IT teams are themselves shared services across the enterprise (in effect, shared services across shared services). Leading enterprise change through the IT organization is futile. Even if you can overcome the IT headwinds - staffing lowest-cost commodity labor rather than sourcing highest-value capability, keeping utility and differentiating tech under the same hierarchy - you still have to overcome the business headwinds: heavy-handed corporate cultures ("we never make mistakes"); a refusal to study mistakes and errors for market signals indicating change, treating them instead as exceptions to be repressed; and capital structures that stifle rather than finance innovation. Changing IT is not inherently a spark of change for its host business, if for no other reason than that no matter how much arm waving IT does, IT in the contemporary enterprise is a tax on the business, a commitment of cash flows that the CEO would prefer not to have to make.

To portray enterprise IT as an anachronism is accurate, if not a brilliant or unique insight. To portray enterprise IT as the root of the problem is naive.

Wednesday, June 30, 2021

Labor's New Deal

The pandemic has created a lot of interesting labor market dynamics, hasn’t it? Week after week brings a new wave of employee survey results that make it clear a lot of workers want to retain a great deal of the location independence they have experienced over the past year. Multiple studies report roughly the same results among knowledge workers, globally: 75% want flexibility in where they work, 30% don’t want to return to an office, and 1 in 3 won’t work for an employer that requires them to be on site full time. In addition, 1 in 5 workers expect to be with a different company in the next year, as many as 40% are thinking about quitting and over half are willing to listen to offers.

This isn’t just sentiment: employees are voting with their feet. The Wall Street Journal reported a few weeks ago that the share of the workforce leaving their jobs is the highest it has been in over twenty years.

Labor wants a new pact.

The post-COVID recovery is a once-in-a-decade economic recovery. To the extent that a company’s growth is indexed to the growth of its labor force (where near-term automation is not an option), a company has to hire. If it doesn’t, it’s going to sit out this recovery. That means businesses are motivated buyers of labor.

The American economy is surging, but employers are struggling to fill skilled and unskilled positions alike. One factor is the absence of slack in the labor market. Curiously, the labor participation rate is plumbing levels not seen since the 1970s. The share of 18-to-65-year-olds actively working has been in steady decline since the mid-2000s, a few years before the 2008 financial crisis. It dropped significantly again with the pandemic, and has not yet recovered to pre-pandemic levels. Statistically, there should be labor market slack, but there is none, because quite a few working-age people are electing not to rejoin the workforce. Another factor is that with every company hiring, it’s hard for any one employer to achieve visibility among job seekers. A simple search for “product manager” positions in Chicago yields over 6,300 openings; in New York, over 6,800; and in Dallas, over 5,800. Social media banners announcing “we’re hiring” are useless when every company is hiring.

Labor market tightness and difficulty differentiating are forcing companies to raise wages. Large, deep-pocketed employers of unskilled labor including Walmart, McDonald's and Amazon have raised their entry-level wages. Mid-tier and mom-and-pop competitors will be forced to do the same. And many employers are responding to their own captive surveys yielding results like those mentioned above, offering greater workplace and working-hour flexibility to existing staff and recruits. Average wages are going up, and workplace policies are changing to be more accommodative to labor.

With labor tight and economic expansion all around, employers will become increasingly competitive for labor. They will have to be aggressive just to stay in place. Imagine a company with, say, 100 experienced software engineers, project managers, QA engineers and the like that expects to add a dozen more people to the team in the next year. If they lose 20% of this knowledge workforce per the survey results, and assuming 10% of the people they put on the payroll are dud hires, they’ll have to hire upwards of 35 people to achieve a net gain of just 12.
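To make the arithmetic behind that example explicit, here is a minimal back-of-the-envelope sketch. The team size, attrition rate and "dud hire" rate are simply the assumptions from the paragraph above, not data from any particular employer.

```python
# Back-of-the-envelope hiring math for the hypothetical team described above.
# All inputs are assumptions taken from that example, not measured figures.
current_staff = 100      # experienced engineers, PMs, QA and the like
target_net_growth = 12   # desired net additions over the next year
attrition_rate = 0.20    # 1 in 5 expected to leave, per the survey data
dud_hire_rate = 0.10     # assumed share of new hires that don't work out

departures = current_staff * attrition_rate            # 20 people walk out the door
effective_hires_needed = departures + target_net_growth  # 32 hires must actually stick
gross_hires_needed = effective_hires_needed / (1 - dud_hire_rate)

print(round(gross_hires_needed))  # ~36 - "upwards of 35" gross hires to net just 12
```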

All of this means that labor is having a once-in-a-generation moment.

Labor's power in America arguably peaked in the 1960s and has been on the wane since, with the firing of the striking air traffic controllers in the early 1980s often held out as a seminal moment in labor's multi-decade decline. But some of you may recall that in the late 1990s, labor briefly had a moment. Those were not only the go-go days of the dot-com era: domestic US call centers were going up in all kinds of American cities, big box retailers wanted customers to know they were "always open" and kept stores open 24 hours a day (somebody just might be itching to buy a circular saw at 2 a.m.), and fast food drive-thrus stayed open two hours longer than the dining rooms (conveniently, 'til after the pubs closed). For a brief period, "Sales Associate" positions came with medical and retirement benefits. Well, labor is back. The WSJ made the point last week that labor has power today that it has not enjoyed in decades. And, per the aforementioned statistics, labor is exercising that power.

With so much agitation among workers and demand for labor high, conditions are ripe for labor market “disruptors”. Some employers will simply become very aggressive recruiters of other firms' employees. If disruptive recruiting, employment and retention practices prove successful, we will see winners and losers emerge in “the war for talent.” And it isn’t start-up or fringe firms taking aggressive postures. According to the WSJ, Allstate has determined that 75% of the positions it employs can be done remotely, while another 24% can be done in a hybrid fashion. That’s 99% of a traditional employer’s workforce with location flexibility. This means location independence may not be a worker bonus as much as it is simply the new norm. It also means that a company may not simply struggle to hire; a failure to adequately adjust to the future of work will make a company vulnerable to disruption because its workforce is an easy target for other employers.

History tells us that labor’s moment may not last for very long. But the longer that labor shortages last, and particularly with so much competition for knowledge workers, labor won’t come away empty handed.

Sunday, January 31, 2021

Distribution

In the past year, I’ve written several times about changes taking place and likely to be long lasting in the wake of the COVID-19 pandemic. One area I’ve touched on, but haven’t delved into much, is distribution.

Producers of all kinds have had to create new ways of connecting with their customers. Restaurants had to contend with severely curtailed access to their primary distribution channel, the dining room. Countless manufacturers lost their primary distribution channel - small retail and department stores - outright. Digital products like movies had to contend with severely curtailed access to their primary distribution channel, theaters.

Producers have responded by finding, creating and reprioritizing alternative means of distribution. All restaurants (well, those still in business) increased their distribution through take-away meals. eCommerce retail sales of everything from groceries to gadgets skyrocketed last year. In December, WarnerMedia announced it would distribute movies simultaneously to theaters and on its HBO Max streaming service in 2021.

And the opportunities to innovate in distribution are far from over. To wit: in December, the FAA relaxed guidelines for drone delivery, expanding the potential for commercial drone use in the US (and of course, drone delivery has expanded in the rest of the world).

Distribution will be among the primary ways the chattering classes characterize the post-pandemic world. To what extent do people return to old - largely physical - forms of distribution? Are the new forms of distribution long-lasting, or are they just phenomena of their times, like sherry served at Christmastime in Britain?

Change is like a regenerative fire. Fire needs three things: fuel, spark, and accelerant.

The fuel is policies in response to COVID-19 and, in turn, the individual and business reaction to them. Companies need to sell and people want to buy. The longer those policies remain in place, the more significant these new forms of distribution become to producers. Those policies also depress the prices for assets tightly coupled to past forms of distribution, things like commercial airplanes and shopping malls.

The spark is the realization that businesses don’t need to operate the way they have for decades: expenses don't need to be so high, a company doesn’t need as much square footage, or it can use its physical space differently. The evidence exiting 2020 supports this: even though revenues declined in 2020, earnings per share for the S&P 500 for the year look great.

The accelerant is cheap capital: Fed policy will maintain cheap capital for the foreseeable future. This creates liquidity that has to go somewhere, and will find every nook and cranny.

Distribution has changed. Policy is entrenching those changes. The businesses that didn’t collapse survived in large part because they found new distribution channels. Innovation in distribution is accelerating. Cheap capital will finance more innovation in distribution. It’s a reinforcing cycle. And it’s just beginning.

Monday, November 30, 2020

Listen

For two decades now, we’ve heard about the threat of tech disruption to established industries and incumbent firms. Yet it isn’t the tech that disrupts, it’s socio-economic change that creates conditions that a technology can exploit. Tech isn’t the catalyst, but it can be the beneficiary.

COVID may turn out to be the greatest global catalyst of socio-economic change since the middle of the 20th century. As the pandemic has continued and the numbers have risen, the chattering classes are now asking what the lasting changes will be. These can be useful exercises, certainly for the business leaders who’ve got to find their customers or compete against rivals with slimmed-down cost structures. Not to mention, the acceleration of innovation - a WSJ article recently cited a McKinsey study suggesting 10 years of innovation was compressed into a 3-month window - has created opportunities that were not practical just a year ago.

No surprise that the analyses range from the very narrow to the very broad. The narrow ones are easy to comprehend and useful for specific industries. For example, I’ve read projections that anywhere from 15% to 50% of all business travel isn’t going to return. Although a wide range, it suggests that airlines and hotels will have to appeal to leisure travelers to fill seats and beds. Leisure travelers are more price sensitive and less brand loyal than business travelers, so even if volume recovers, revenue will lag, which portends more cost cutting or in-travel sales or on-demand activations (if you have to swipe a credit card to get the TV to work in coach on the airline, why not require a customer to swipe a credit card to get the TV to work in the discounted-rate hotel room?). It also suggests that a startup airline with a clean balance sheet, a fleet of fresh planes requiring little maintenance (there’s a desert parking lot loaded with low-mileage 737 MAX jets), and the ability to draw on a large, experienced labor force of laid-off travel workers could create significant heartache for incumbents.

At the other end of the extreme are the macro analyses asking The Big Questions. Are cities dead? Is cash dead? Is brick-and-mortar retail dead? These are less useful. The Big Questions are too big. They require far more variables and data than can be acquired, let alone thoughtfully considered, in a coherent analysis. The authors traffic in interesting data, but either lack the courage to draw any conclusion beyond Things Might Change But Nobody Knows (thanks for that, so helpful), or use the data selectively to defend their preferred version of the future.

In the middle are Big Question headlines with narrow questions posed, even if not answered. Analyses on “the future of work” cite specific employer examples to posit what is now possible (e.g., specific roles that gain nothing from being in an office and lose nothing by being distributed) and broad employee survey data to suggest their potential scale (e.g., 25% of employees in such-and-such industry want working from home to be a permanent option on a part- or full-time basis). These are useful analyses when they highlight the challenges ahead in management and supervision, collaboration and communication. Economically, employer and employee alike win when a person chooses to relocate to a lower cost-of-living area for quality-of-life purposes. But that only works if the physical separation causes minimal, if any, impact to career growth, skill acquisition, productivity and participation, and corporate culture. A company believing it can espouse even moderately aggressive distributed workforce policies must be aware that these are specific problems to be solved.

What I’ve yet to see is an analysis of how the institutions that are benefitting and the institutions that are suffering will influence the micro-level trends and, by extension, influence the answers to The Big Questions.

Consider a large universal bank that employs hundreds of thousands of people in cities around the world. One way its retail bank makes money is by converting deposits into loans. One way its commercial bank makes money is by making mortgage loans to businesses. One way its investment bank makes money is by underwriting debt issued by municipalities. It may look as if the bank can reduce its operating costs by institutionalizing a work-from-home policy for a large portion of its workforce. But doing so is self-destructive to its business model. Fewer employees in office towers means fewer people to patronize the businesses to which the bank lends, fewer public transport and sales tax receipts for the municipalities whose debt it underwrites, and less demand for construction and renovation of mixed-use commercial properties. The bank stands to lose a lot more in revenue than it would gain in reduced costs, so as a matter of policy, a universal bank will want its employees back in their offices in full numbers. The bank will set the same expectation of its vendors, particularly those supplying labor.

But other companies are benefiting from this change and will want permanency of these new patterns. Oracle provides the cloud infrastructure that Zoom operates on. More Zoom meetings not only mean more revenue for Oracle’s cloud business (investors will pay a premium for a growth story in cloud services); they also give Oracle’s cloud infrastructure business a powerful reference case as it pursues new clients. It comes as no surprise that Oracle’s executive chairman Larry Ellison is a vocal proponent of lasting change.

And, of course, nobody knows what public policy will look like, which will play a huge role in what changes are permanent and what reverts to the previous definition of normal. State and municipal governments are facing significant tax receipt shortfalls as a result of COVID policies. Many have also suffered a depletion of residents and small businesses. They may offer aggressive tax incentives to encourage new business formation or expansion as well as commercial property development. At the same time, there are states that have received an influx of population and cities that have seen residential property price increases. They will be reluctant to see their newly arrived neighbors leave, so they, too, will offer incentives for them to stay.

It isn’t difficult to imagine there will be aggressive new forms of competition. Suppose firm A is adamant about employees returning to the office. If the employee survey data is to be believed, it’s possible that as much as 25% of firm A’s labor force prefers to work from home a majority of the time. Firm B can aggressively use that as a recruiting wedge to not only lure away firm A’s talent, but offer them relocation packages to lower cost-of-living areas, expanding and potentially upgrading their talent pool at a lower price.

Or, suppose that city C imposes punitive taxes on companies employing a distributed workforce. It’s not unprecedented. Several cities already charge a “commuter tax” (also known as a “head tax”) on employers with workers who travel into the city. This would instead be a “can’t-be-bothered-to-commute tax” levied on employers in a city whose workers do not travel into the city. Meanwhile, near-west suburb D of city C entices a WeWork-like firm to develop a property that can house several businesses with partially distributed workforces, offering smaller physical office space with fully secure physical and digital premises. This would lure midsized employers whose labor force lives largely in the western suburbs, not only reducing their rents but avoiding the “headless tax” imposed by city C.

Analyses of what will or will not change, and why, are only going to increase in the coming months. And, because some stand to lose significantly from change while others stand to benefit handsomely, the debate will only intensify. For those without the balance sheet and political clout to write the future, a firmly held opinion about the future isn’t worth very much. But the ability to study, process, absorb, investigate and prove ways of exploiting heretofore unrealizable opportunities is priceless.

Saturday, October 31, 2020

Playing the Cards You're Dealt

Some years ago, I was working with a company automating its customer contract renewal process. It had licensed a workflow technology and contracted a large number of people to code and configure a custom solution around it. This was no small task given the mismatch between a fine granularity of rules on the one hand and a coarse granularity of test cases on the other. The rules were implemented as IFTTT statements in a low-code language that did not allow them to be tested in isolation. The test cases consisted of clients renewing anywhere from one to four different types of contracts, each of which had highly variable terms and interdependencies on one another.

At the nexus of this mismatch was the QA team, which consisted almost entirely of staff from an outsourcing firm. The vendor had sold the company on QA capacity at a volume of 7 test scripts executed per person per day. The vendor had staffed 50 people on the program team, while the company had staffed four QA leads (one for each contract team). The outsourcing vendor was reporting no less than 350 test scripts executed by its staff every day, yet the QA leads were reporting very low test case acceptance and the development team was reporting that the test case failures could not be replicated.

A little bit of investigation into one of the four teams exposed the mismatch. The outsourcing staff on this one team consisted of 10 people, contractually obligated to execute 70 test scripts per day. The day I joined, the team reported 70 test scripts executed, of which 5 passed and 6 failed.

Eleven being a little short of seventy, I wanted to understand the discrepancy. The answer from the contracted testers was, "we have questions about the remaining 59." The lead QA analyst - an employee, not a contractor - spent the entire day plus overtime investigating and responding to the questions pertaining to the 59. And then the cycle would start all over again. The next day it was 70 executed with 3 passed and 4 failed. The day after it was 70 executed with 1 passed and 9 failed. And the lead QA would spend the day - always an overtime day - responding to the questions from the outsourced team.

Evidently, this cycle had been going on for some time before I arrived.

We investigated the test cases that had been declared passed and failed. Turns out, those tests reported as having passed hadn't really passed: the tester had misinterpreted the results and reported a false positive. And those reported as failed hadn't actually failed for the reason stated: the tester had misinterpreted those results as well. On some occasions, the wrong data had been used to test the scenario; in others, the test had failed, but because a different rule should have executed. In just about every circumstance, the results were false. The outsourced testers were expending effort but yielding no results whatsoever. A brief discussion with the QA lead on each of the other three teams confirmed that they were experiencing exactly the same phenomenon.

After observing this for a week and concluding that no amount of interaction between the QA lead and the outsourced staff was going to improve either the volume of completions or fidelity of the results, I asked the one lead QA to stop working with the outsourced team, and instead to see how many test cases she could disposition herself. The first day, she conclusively dispositioned 40 test scripts (that is, they had a conclusive pass or fail, and if they failed it was for reasons of code and not of data or environment). The second day, she was up to 50. The third, she was just over 50. She was able to produce higher fidelity and higher throughput at lower labor intensity and for lower cost. And she wasn't working overtime to do so.

The outsourced testing capacity was net negative to team productivity. That model employed eleven people to do less than the work of one person.
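For what it's worth, the throughput gap is easy to quantify. Here is a minimal sketch using the approximate numbers from the anecdote above (ten contractors plus one lead, seventy scripts reported per day of which essentially none survived scrutiny, versus the 40-50 conclusive dispositions the lead produced working alone); every figure is an assumption drawn from that story, not a measurement.

```python
# Reported vs. verified daily test throughput, using the approximate figures from
# the anecdote above. Purely illustrative; all numbers are assumptions.
outsourced_people = 10
outsourced_reported = 70   # scripts "executed" per day, per the contract
outsourced_verified = 0    # dispositions that held up once the results were audited

lead_qa_people = 1
lead_qa_verified = 45      # midpoint of the 40-50 scripts the lead dispositioned alone

print(outsourced_verified / outsourced_people)  # 0.0 verified scripts per person per day
print(lead_qa_verified / lead_qa_people)        # 45.0 verified scripts per person per day

# Eleven people (10 contractors plus the lead consumed answering their questions)
# produced less verified output than the one lead working by herself.
```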

This wasn't the answer that either the outsourcing vendor or the program office wanted. The vendor was selling snake oil - the appearance of testing capacity that simply did not exist in practice - and was about to lose a revenue stream. The program office was embarrassed for having managed to maximize staff utilization rather than outcomes (that is, for relying on effort as a proxy for results).

The reactions of both vendor and program office weren't much of a surprise. What was a surprise was the fact that nobody had called bullshit up to that point. Experimenting with change wasn't a big gamble. The program had nothing to lose except another day of frustration rewarded by completely useless outputs from the testing team. So why hadn't anybody audited the verifiable results? Or baselined testing labor productivity without the participation of the outsourcing team?

This wasn't a case of learned helplessness. The QA leads knew they were on the hook for meaningful testing throughput. The program office believed it had a lot of testing capacity that was executing. The vendor believed the capacity it had sold was not being properly engaged. Nobody was just going through the motions, and everybody believed it would work. The trouble was, they were playing the cards they'd been dealt.

Some years later, I was working with a corporate IT department trying to contain increasing annual spend on ERP support. Although the company had implemented SAP at the corporate level and within a number of its subordinate operating companies, some operating companies still used a legacy homespun ERP, and all business units still relied on decades of downstream data warehouses and reporting systems. Needless to say, there were transaction reconciliation and data synchronization problems. The corporate IT function had entered into a contract with a vendor to resolve these problems. In the years following the SAP implementation, vendor support costs had not gone down but had gone up, proportional to the increase in transaction volume. The question the company was asking was why the support labor couldn't respond to more discrepancies, given their many years of experience resolving them.

It didn't take a stroke of genius to realize that the vendor stood to gain from its customer's pain: the greater the volume of discrepancies, the more billing opportunities there were for resolution. Worse still, the vendor benefited from the same type of failure recurring again and again and again. The buyer had unwittingly locked themselves into a one-way contract: their choices were to live with discrepancies or pay the vendor more money for more labor capacity to correct them. The obvious fix was to change the terms of the contract, rewarding the vendor for resolving discrepancies at their root cause rather than for solving the same problem over and over and over. This they did, and the net result was a massive reduction in recurring errors and a concomitant reduction in the contract labor necessary to resolve them.

This was, once again, a problem of playing the cards that had been dealt. For years, management defined the problem as containing spend on defect / discrepancy resolution. They hadn't seen it as a continuous improvement problem in which their vendor was a key partner rather than a cost center to be contained.

There are tools that can help liberate us from constraints, such as asking the Five Whys. But such tools are only as effective as the intellectual freedom we're allowed in pursuing them. If the root question is "why is test throughput so low given the high volume of test capacity and the high rate of test execution", or "how can the support staff resolve defects more quickly to create more capacity", the exercise begins with confirmation bias - in this case, that the operating model (the test team composition, the defect containment team mission) is correct. The Five Whys are less likely to lead to an answer that existentially challenges the paradigm in place if the primary question is too narrowly phrased. When that happens, the exercise tends to yield no better than "less bad."

It's all well and good for me to write about how I saw through a QA problem or a support problem, but the fact of the matter is we all fall victim to playing the cards we're dealt at one time or another. A vendor paradigm, a corporate directive, a program constraint, a funding model, an operating condition - each limits our understanding of the actual problem to be solved.

But reflecting on it is a reminder that we must always be looking for different cards to play. Perhaps now more than ever, as low contact and automated interactions permanently replace high contact and manual ones in all forms of business, we need to be less intellectually constrained before we can be more imaginative.