I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Tuesday, January 31, 2023

Relics

I recently came across a box of very old technology tucked away in my basement: PDAs, mobile phones, digital cameras and even a couple of old laptops, all over two decades old. It was an interesting find, if slightly disturbing to think this stuff has moved house a couple of times. Before disposing of something, I try to repurpose it if I can. That's hard to do with electronics once they're orphaned by their manufacturers. Still, electronics recycling wasn't as easy to do twenty years ago, so it's perhaps just as well that I held onto them until long after it was.

In addition to bringing back fond memories, finding this trove got me thinking about how rapidly mobile computing evolved. In the box from the basement were a couple of PDAs, one each by HP and Compaq; phones by Motorola, Nokia (including a 9210 Communicator) and Ericsson; and a digital video camcorder by Canon. The Compaq brand has all but disappeared; the makers of two of the three phones exited the mobile phone business years ago; the Mini-DV technology of the camcorder was obsolete within a few years of its manufacture.

There were also a couple of laptops in the box, one each made by Compaq and Sony. The interesting thing about the laptops is how little the form factor has changed. My first laptop was a Zenith SuperSport 286. The basic design of the laptop computer hasn't changed much since the late 1980s (although mercifully they weigh less than 17 lbs). The Compaq and Sony laptops in that box from the basement are not radically different from the laptops of today: the Sony had a square screen and lots of different ports, where a modern laptop has a rectangular screen and a few USB ports.

The laptop, of course, replaced the luggable computer of the 1970s and early 1980s made by the likes of Osborne and Kaypro and Compaq. The luggable was a statement for the era: what compels a person to haul around disk drives, CPU, keyboard and a small CRT? Maybe it was the free upper-body workout. The laptop was a quantum improvement in mobile computing.

But once that quantum improvement happened, the laptop became decidedly less exciting. As the rate of change of capabilities in the laptop slowed, getting a new laptop became less of an event and more of a pain in the ass. Not to mention that, just like the PDA and phone manufacturers mentioned above, the pioneers and early innovators didn’t survive long enough to reap the full benefits of the space maturing.

And the same phenomenon happened in the PDA/Phone/camera space. The quantum leap was when these converged with the original iPhone. Since then, a new phone has become less and less of an event. Yes, just like laptops, they get incrementally better. Fortunately, migration via cloud makes upgrading less of a pain in the ass.

The transition from exciting to ordinary correlates to the utility value of technology in our lives: in personal productivity, entertainment, and increasingly as the primary (if not only) channel for doing things. There are, of course, several transformative technologies in their nascent stages. Somehow, I don't think any are spawning the Zenith Data Systems and Compaqs of tomorrow, makers of a future relic that somebody someday will be slightly amused to find in a box in their basement.

Saturday, December 31, 2022

Reinvention Risk Trade

Southwest Airlines has made headlines in recent days for all the wrong reasons: bad weather impacted air travel, which required Southwest to adjust plane and crew schedules. Those adjusted schedules were often logistically flawed because the planes and crews matched at a specific place and time didn't make sense in the real world. Making matters worse, those adjusted schedules had to be re-(and re- and re-)adjusted every time either the weather changed or operations changed (i.e., more flight cancellations), and both the weather and operations were changing throughout Southwest's route network. The culprit, according to people at Southwest quoted by the Wall Street Journal, was scheduling technology that could not sufficiently scale and is nearing end-of-life. Whether a problem of rapid growth or neglected investment, everybody seems to agree that Southwest has been living on borrowed time.

The neglect of core technology is an all too common practice at virtually every company: the technology becomes more complex than its foundational architecture was ever intended to support, the team familiar with the technology erodes through layoff and attrition, and as a result the technology becomes more vulnerable to failure. But it still works day in and day out, so there is no incentive to invest in repair or replacement.

Unfortunately, vulnerability of an aging technology isn't a financial statement phenomenon; it is at best one risk mentioned among many in the 10-K. However, money spent on the labor to reduce that vulnerability is a financial statement phenomenon. Add to that the opportunity cost: every dollar spent on risk mitigation is a dollar that doesn't go toward a net new investment in the business, or a dollar that can't be returned to investors. While it doesn’t cost anything for a technology to fall into a state of disrepair, it sure costs a lot to rehabilitate it. Conversely, neglect is not only free, it’s cash flow positive: i.e., the company can claim victory for streamlining tech spend.

But as mentioned above, neglect creates business risk. And risk is a peculiar thing.

There have been dozens of massive macroeconomic risks realized in the past 25 years - acts of terror, acts of war, financial crises, environmental disasters, viral pandemics - that have made a mockery of the most sophisticated of corporate risk models. Yet risk is still no better an investment proposition than it was a quarter of a century ago: investing to be prepared for "black swan" events (i.e., robustness) is still an uncommon practice (n.b. perhaps inventory build-up and multiple sourcing practices in response to supply chain disruption in recent years will change this, but it remains to be seen how durable this turns out to be). And anyway, dilapidated internal systems are self-inflicted exposures: even if they can talk about such risks publicly, CEOs aren't paid for their acumen at developing and executing remediation strategies. Plus, just about every company will accept exposure to technology risk as business as usual. Business is risk. If a company spent to mitigate every last risk, it would be wildly unprofitable. There's an amount the company budgets annually for maintaining the status quo, and every now and again the company will try tasking some up-and-coming manager or hiring some hotshot consultants to figure out a way to make things a little less bad. This is great, but it amounts to pennies spent mitigating very large dollar amounts of exposure. In other words, hope is all too often the insurance policy against having a huge hole blown in the income statement by the failure of a high-risk technology.

While risk is generally not an investible proposition for technology (unless business operations are being wildly disrupted because of it, such as is happening to Southwest this week), sometimes there is a golden ticket that promises to make the risk simply go away, such as when a company has a legitimate case to make that it can reposition itself as an ecosystem if only it were built on a cloud-based platform. With consistent cash flows and an existing - and under-leveraged - network of partners, the right leader can motivate investors to pony up to make a wholesale replacement of existing technology. It's a growth story with a side order of risk mitigation through modernization. And with the appropriate supporting data, this is an attractive proposition to risk capital.

Investible, yes, especially since it is more than just an investment that makes the business less bad than it would otherwise be. But the headline doesn't tell the whole story. Switching from one technology to another is not a trade of one set of business parameters (the company's current business and operating model) for another (the company's future business and operating model). It is more accurately a trade of risk profiles: exposure to current technology (the tech and operations supporting current cash flows) versus exposure to aspirational technology (the tech and operations supporting aspirational cash flows).

The magnitude of the technology risk between the two is really no different. It is, optimistically, an exchange of current system sustainability risk for the combination of development risk and future system sustainability risk. System fragility and key person risk may make the status quo highly unattractive, but software development has a long track record of cost overruns and failure. In practice, of course, development risk and current system sustainability risk are carried at the same time, and current system risk may be carried for a very long time if it proves difficult to fully retire some legacy components. The true exposure is therefore far more complex than current versus future technology. In practical terms, this means that just because “reinventing the business” makes legacy modernization more palatable to investors doesn't mean it offers the business a safe way out of technology risk.

It bears mentioning that a business electing to mitigate existing technology risk through reinvention is taking on a new set of challenges, especially if that company has not made such an investment in recent years. It must be ready to deal with contemporary software delivery patterns and practices that are much different from those of even a decade ago. It must know how to avoid the common mistakes that plague replatforming initiatives. It must be prepared to deal with knowledge asymmetry vis-a-vis vendor partners. It must know how to set the expectation for transparency in the form of working software, not progress against plans. And it must be prepared to practice an activist form of governance - not the bullshit spewed by vendors passed off as governance - to make those investments a success.

Reinvention promises freedom from the shackles of the status quo, but while going about that reinvention, exposure to technology risk vastly increases and stays at an elevated level for a long period of time. The future awaits the replatformed business, but do be careful what you get investors to agree to let you sign up to deliver.

Wednesday, November 30, 2022

You should...

Our favorite craft brewer has a tap room. They never have more than a dozen beers on tap. They only serve their own brewed beers, never anything sourced from another producer. They have only marginal amounts of product distribution; for all intents and purposes, they sell only through their tap room. While they’ll fill a growler or crowler, they do not keep inventory in cans, only kegs. They turn over 2/3rds - maybe it’s 3/4ths, maybe it’s 7/8ths - of their taps seasonally, where a season might be as short as a month or as long as half a year, depending on the beer. They have a flat screen but never broadcast sports or politics, only streamed images of nature or trains or the like. They stream their own custom audio playlist to provide ambient noise.

They run the business this way because this is the business they want to run. They have direct access to 99.9% of their customers (not 100%: once it leaves the premises, the contents of a crowler could end up in anybody's stomach…). They're not committed to providing beer to other businesses on any kind of a product mix by volume, let alone a date delivery schedule. They get to experiment with product, constantly. They don't make what they sell, they sell what they make.

On any given day in the taproom, a customer will give them advice, a sentence that always begins with the words “you should.” Such as, “you should distribute this and that beer to these bars in Madison and Milwaukee - you’d sell 20x as much as you do in a single tap room.” Or, “you should have a small electric oven and sell food.” Or “you should have dozens of TVs with football and this place would be packed on weekends.”

They are every bit as good at customer interaction as they are with making and serving beer. They listen patiently, smile, and reply with “thank you, we’ll think about that.”

* * *

The software business has long been intertwined with management consulting to one degree or another. Decades ago, tech automated tasks that changed long-standing business processes; management was fascinated as this made businesses more efficient. The dot-com era (followed by mobile, and shortly thereafter by social media) ushered in changes in corporate-to-customer and corporate-to-employee interactions. The contemporary tech landscape (cloud, AI, distributed ledger tech) - and not for the first time in the history of tech - promises to “reinvent the business.” ‘Twas ever thus: tech has long been, and has long been sought out as, a source of business advice.

On the whole, tech is not a source of bad advice. When tech gets close to a problem space, it brings a different and generally value generative solution. Why do that work manually when we can easily automate and orchestrate that? Why have this customer talk to that salesperson when the customer can do that for themselves 99% of the time? Why have people churn through that data when a machine can learn the patterns?

But sometimes, advice from tech is truly value destructive.

I wrote about this some years ago: standing next to me in the queue for a flight out of Dallas were a couple of logistics consultants lamenting the fact that a client had taken a tech consultancy's advice and prioritized flexibility over volume in their distribution strategy. It sounds great in a windowless conference room: why let restaurants (who are 80% of the clientele) run out of branzino before the night is over? You should run a fleet of small delivery trucks to top up their stock of branzino for the night in near real time. Except, the distribution cost for a few branzino to that restaurant - even if we put it in a small truck with a few packages of great northern beans for the restaurant down the street and some basmati for a restaurant a few blocks away - is bloody expensive. The economics of distribution are based on volume, not flexibility. That restaurant will have to put a lot of adjectives in front of the noun to justify the cost of limited-supply branzino on a Tuesday in November. ’Tis far more economically efficient for the waitstaff to push the red snapper when the branzino runs out.

Another time, I was working with a manufacturer of very large equipment. The manufacturer sold through a dealer network. Dealers are given guidance from the manufacturer's sales forecasting division as to the volume of each type of machine they should expect to sell in the next two years, by quarter. Dealers order machines with that guidance as an input (their balance sheet being another input), and over the course of time dealer orders get routed to a manufacturing plant and finished machines to the dealer sales lot. The tech people couldn't grok this. Manufacturing something without a specific end customer? You should have just-in-time manufacturing, so a customer order goes directly to a manufacturing facility. That way there is no finished goods inventory collecting dust on a dealer lot and the component supply chain can be somehow further optimized. Except, that exposes the manufacturer to demand swings. As it is, the manufacturer has hundreds of dealer P&Ls to which it can export its own P&L. They'll build give or take 250,000 units of this model, and give or take 160,000 units of that model, and give or take 90,000 units of that other model, and 000,000s of all those other models, year in and year out, with minor modifications in major product cycles in an industry regulated by, among other things, emission standards. That's a lot of machine volume, especially when there are dozens and dozens of models of tens of thousands of unit volume. The manufacturer has a captive dealer network that will buy 100% of what the manufacturer produces. The dealer network acts as a buffer on the manufacturer's P&L: while the good years may not be maximally great for the manufacturer, the bad years aren't too terribly bad, let alone event horizons on the income statement. That, in turn, creates consistency of cash flows for the manufacturer, which investors reward with a high credit rating, which makes debt more easily serviceable, which leaves money to reward equity holders. Just-in-time manufacturing exposes the manufacturer to end-customer market volatility, which would require a substantial change in capital structure, which would penalize both equity and debt holders. Markets go up, but markets also go down: minimizing the downside was of more value than maximizing the upside. Tech has known these swings (anyone remember the home computer revolution?), but the commercial landscape is so destructive that there is a lack of institutional memory.

There was the insurance company implementing a workflow management system for automating policy renewal. Although insurance data is highly structured, there are a lot of rules and conditions on the rules governing renewal, spanning the micro (e.g., geographic location in a city and number of employees) and macro (discounting and payout rules in the event a customer has a property & casualty policy as well as an umbrella policy, as opposed to just a property & casualty policy). There are a lot of policy renewal rules that go very deep into the very edge cases of the edge cases (e.g., a policy that renews on February 29). Well, the boss wants this policy automation thing done quickly, because we have a great story to share with investors that we’ve reduced the labor intensity of policy renewal. Along comes a tech vendor with a compelling suggestion: insurance company, you should incentivize your process automation vendor by rewarding them for the shortest time to development of each codified rule. (The operative word here is development, which is not the same as production delivery: delivery was deemed out of the control of the development partner.) Except, the contract the insurance company signed indexed cash payable to the vendor for development complete of each rule. Within three months, the vendor had tapped out over 80% of the cash for software development, yet each rule that was dev complete had on average over five severity-1 defects associated with it and was therefore unsuitable for deployment. Worse still, one third of those defects were blocking, meaning there were countless other defects to discover once the blocking defect was removed.

Then there is the purely speculative pontification. I wrote three years ago that management consultants love to advise customers to get into the disruption game. Consider what was happening in home meals and transportation and the like 5 years ago: this is coming for your industry, so you better get in the game. To wit: hey financial services firm, you should invest in developing your own line of disruptive fintech. Except, in practice it turned out to be far more prudent for incumbents to colonize startup firms by placing people on startup firms' boards and then co-opt them to the credit cycle through greenmail policies. The latter strategy was a hell of a lot cheaper than the former. And those home-meal- and food-delivery tech firms who were the reference implementation for disruption? They ended up disrupting one another, more than they disrupted the incumbents. Come to think of it, the winning strategy was that of the wise fighting fish in the movie From Russia With Love: the stupid ones fight; the exhausted winner of that fight is easy prey for the smart fighting fish who sat out the fight and waited patiently. (Note to self: this is two consecutive months that I've used FRWL as an analogy, I really need to diversify my analogies. That said, Eric Pohlmann's voiceover is truly underrated in cinematic history.)

This is, arguably, playing out today as auto manufacturers pull back from autonomous vehicle investments. Hey automotive firm, you should invest in autonomous vehicle delivery because it will totally disrupt the industry. Except, it’s proving to be much further away from reality than thought. It was great as long as delivery expectations were low and valuations were high. It isn’t so lucrative now.

Obviously, all advice has to meet a company where it’s at. Generic assertions of impending tech disruption in a well established industry crater instantly (even faster than crypto during a bank run) when they meet incumbent economic dynamics. People (especially long term employees) resist operational change; debt cycles outright crush those changes. Not meeting a company where it’s at renders the advisor a curious (and at best mildly amusing) pontificator.

At the same time, advice also has to meet the industry that the consumer of the advice is in where it's at. That's not so easy when the advisor can only think transactionally. “Digital disruption” and “omnichannel” are, thankfully, out of favor now. They were ignorant of the industry dynamics at play, as mentioned earlier: co-opt the disruptor to the prevailing industry trends and the aspirant tech cycle is subservient to the credit cycle. It is (if ironically in evolutionary terms) well captured by Opus the penguin's response to the allegedly inevitable.

* * *

One thing about being in the advisory space is that at a micro level, just about every firm has something - many somethings - unique to offer. (The caveat “just about” is intentional: it's just about, but not all; as Mojo Nixon pointed out, Elvis is everywhere, but not in everyone.) “You should” advice that does not reflect that uniqueness - the expression of the company itself - is bound to fall flat. Yes, macro trends matter, but start with the business itself. If the people in that business know who they are and who they are not, you've got a great place to start. And if they don't, even the most Hyde Park Corner prophet of “disruption” isn't going to hold an audience for long.

* * *

In the interest of full disclosure, we have, as you might well expect, been sources of what we deem brilliant “you should” advice to the aforementioned craft brewer. You should:

  1. Have a beer that incorporates cough syrup as an ingredient, a beer version of a Flaming Moe.
  2. Let me put my head underneath the taps like Barney Gumble when Moe isn’t around.
  3. Have drone delivery of your beer. Because drones.
  4. Have a trap door you can open that drops egregious “you should” pontificators into a pool of hungry alligators.

We’ve been assured that the proprietors are giving serious thought to every one of these.

Monday, October 31, 2022

Strategy

A few months ago I was asked to review a product strategy a team had put together. I had to give them the unfortunate feedback that what they had created was a document with a lot of words, but those words did not articulate a strategy.

There is a formula for articulating strategy. In his book Good Strategy/Bad Strategy, Richard Rumelt puts forward the three essential elements of a strategy. It must:

  1. Identify a need or opportunity (the why)
  2. Offer a guiding policy for responding to the need or opportunity (the what)
  3. Define concrete actions for executing the policy (the how)

There’s more to it, of course. The need or opportunity has to be well structured and specific. The guiding policy must be focused on the leverage that a company can uniquely bring to bear (this is effectively the who that a company is) as well as anticipate the reaction of other market participants. The actions must be, well, actionable.

What we see too often passed off as strategy are goals (“grow the business by xx% in the next y years” is a goal, not a strategy); vision statements (“we want to be the premier provider of aquatic taxidermy products” is a lofty if vain ambition); or statements that are effectively guiding policies (“to be the one stop shop for all of our customer’s aquatic taxidermy needs”) without the need (why) articulated or actions (how) defined.

I’ve seen the aftermath of a number of failed strategic planning initiatives. Each time, the initiatives failed to articulate at least two, and sometimes all three, of the aforementioned elements that compose a strategy. The postmortems to understand why these initiatives failed exposed a few consistent patterns.

One pattern is that the people involved in the strategic thinking did not truly come to grips with what is actually going on in a company's environment. To understand “what's going on” requires collating the relevant facts (internal and external) into a cohesive analysis. That, in turn, requires a great deal of situational awareness: an honest assessment of a company's capabilities, a high degree of customer empathy, and a fair bit of macroeconomic understanding. It also requires a sense of timeliness: not so immediate as to be just a tactical assessment (your competitors are easier to do business with through digital channels than you are), not so far in the future as to be purely speculative (ambient computing). All too often, the definition of the opportunity is derived - in many cases, copied verbatim - from some other source, such as an analyst report, somebody else's PowerPoint making the rounds inside the company, or the company's most recent annual report. Or it is a truism (the world of aquatic taxidermy is going digital). Or worse still, it is a tautology (customers will buy aquatic taxidermy products through digital channels and from physical store locations from specialist retailers and general merchandisers).

Defining the opportunity through a thorough understanding of what's going on is hard. It's also awkward, an exercise of blindfolded people describing the pachyderm. And that's ok. It takes several iterations, it requires diversity of participants, and while there will be many moments when the activity feels like churn (and not the kind of churn that yields butter or ice cream), it is worth the investment of time. The “what's going on” is, arguably, the most important thing in formulating a strategy. If the “what's going on” is wrong, the opportunity isn't clear, and as a result the most eloquent guiding policy and the most definitive of actions will not solve the right problem. By way of analogy, directional North stars are great, but in the field we still largely navigate by compass. A compass is low tech. It works through attraction to a magnetic field that serves as a close enough proxy to true north, which we correct with declination. As Dr. John Kay showed, the most successful companies navigate by muddling through.

Another pattern: whereas the would-be strategic thinkers spend comparatively little time defining the opportunity, they are obsessed with formulating the equivalent of the guiding policies. Some of this is likely a function of professional programming: if, for the totality of your career, the boss has supplied you with the reason why you do the things that you do, it isn't natural to start a new initiative by asking “why”. Just the opposite. But the biggest reason for focusing on the guiding policies is that the strategic thinkers believe they are being paid to come up with clever statements of what a company should do. No surprise that strategic planning exercises tend to produce a lot of “what to do” options, which they present as a portfolio of strategic opportunities. Yes, the portfolio passes the volume test applied to any strategic planning initiative: too few slides suggests the team just faffed about for several weeks. So what we get is a shotgun blast of strategy: dozens of “what to do” options, only some (not many, let alone all) of which are complementary to one another. Plenty of things to try, but they're just that: things to try. They don't converge at cohesive interim states where the company is poised to engage in a next level of exploitation of an opportunity or need, exploitation that is amplified through development of the unique capabilities the company brought to the table in the first place. This is not a strategy as much as it is a task list of very coarsely grained things to maybe do, at some point, and see what happens.

The fear of not having a sufficient quantity of clever “whats” is understandable, but misplaced. ‘Tis better to have a few very powerful statements of “why” that tell the executive leadership team and the board very concrete things they do not know about their company or market, with very strong statements of “what” to do about them.

The third pattern contributing to strategic planning failure is the aversion to defining the concrete actions necessary to operationalize a strategy. As damaging as getting the why wrong is to the validity of the what, glossing over or ignoring the how turns a guiding policy into a fairy tale. Figuring out the how is, for a lot of people, the least attractive part of strategy formulation: it requires coming face to face with organizational headwinds such as the learned helplessness, dearth of domain knowledge, and resistance to change that characterize legacy organizations. Operationalization - especially in an environment with decades old legacy systems compounded on top of one another - is where great ideas go to die: we could never do that here, you don't know the history, it doesn't work like that, and so forth. Yet a strategy without a clear path of execution is just a theory. No company has the luxury of not starting from where it is today. Strategy has to meet a company where it is at. This isn't big up-front design; it's just the first end-to-end iteration to establish that execution is in fact plausible, supplemented with a now / next / later roadmap to define a plausible path of evolution.

The aversion to defining execution of a proposed strategy stems from at least two sources. One is the tedium of deep diving into operational systems to figure out what is possible and what is not, and then turning the tables to interrogate in detail the things we can do, changing the question from "why we can't" into "why we can". But the more compelling reason I've observed that strategic thinkers avoid detailing execution is the fear that a single ground truth could undo the brilliance of a strategy. Strategy is immutably perfect in the vacuum of a windowless conference room. It doesn't do so well once it makes first contact with reality. And that is the real-world problem for the person academically defining strategy in the absence of execution: when given a choice, a company will always choose as Ernst Stavro Blofeld did in the movie version of From Russia With Love: although Kronsteen's plan may very well have been perfect, it was Klebb who, despite execution failure (engineered through improvisation by James Bond), was the only person who could achieve the intended objective. Strategy doesn't achieve outcomes. Delivery does.

I’ve worked with a number of people who insist they no longer wish to work in execution or delivery roles, only strategy. Living in an abstract world detached from operational constraints is great, but abstract worlds don’t generate cash flow for operating companies. The division of strategy and delivery is a professional paradox: if you do not wish to work in delivery, by definition you cannot work in strategy.

Strategy is genuinely hard. It isn’t hard because it bridges the gap between what a company is today and what it hopes to be in the future (the what). It’s hard because good strategy clearly defines what a company is and is not today (the who), what the opportunities are and are not for it in the future (the why), and the actionable steps it can take to make that future a reality (the how), orchestrated via compelling guiding policies (the what).

Successful business execution is difficult. Successful business strategy is even more difficult. If you want to work in strategy, you had better know what you're signing up for.

Friday, September 30, 2022

Management is Getting Things Done Through People

Last year, I wrote that one of the core capabilities of an Agile manager is to "create and adapt the mechanical processes and social systems through which the team gets things done." I went on to describe a little bit of what allows the Agile manager to succeed at this; it merits a bit more commentary.

The mechanical processes a team performs matter because orchestrating the right activities in the right sequence at the right times is essential to exploring a problem space and evolving solutions. The Agile toolkit is well established and doesn't merit detailing here, but it is worth pointing out there are a lot of different techniques, activities, and ceremonies (and a multitude of variations therein) that the Agile manager can reach for; the Agile manager must be sufficiently fluent in the mechanics to know which are appropriate, and which are not, for the circumstances. Yes, process matters.

But the mechanical processes themselves aren't enough: the manager has to create the right social system in which people can participate effectively. That's much harder than executing to a script prescribed by a mechanical process because the manager has to understand the skills and aptitudes, strengths and weaknesses, competencies and limitations of the people within the team.

Sometimes people are exactly as advertised: a subject matter expert, a department director, a knowledge worker. And sometimes they're not. That supply chain expert isn't actually fluent in the abstract patterns and business processes of supply chain management, with first-hand experience across a variety of implementations; they only have experience with how this one company manages its supply chain, and only with how it operates, not how it was designed in the first place. That department director is a director in title only, because they've delegated all responsibility to subordinates and require the decisions they must make to be framed as single-alternative choices. And that knowledge worker is really only fluent in which buttons to press at which part of a process and how to fix common exceptions, but has no idea why they do what they do.

To create functional social systems, the Agile manager must be able to meet the people in their team where those people are at. That means some degree of fluency in the subject matter for which an expert has been staffed, some degree of familiarity with the types of decisions the department director makes, some recognition of the wisdom a seasoned knowledge worker possesses. In last year's post, I suggested this is a function of both EQ and professional experience. EQ is key to awareness, but not technical understanding. Professional experience can be the source of technical understanding, but is limited because no manager has experience with every domain and every context they're asked to manage.

What the manager does not know through experience the manager must try to learn through theory by conducting independent research and investigation into the respective domains to understand the context of the various participants. The important thing for the manager isn't to become a domain expert, but to be sufficiently fluent in the terminology to use and the questions to ask to assess, engage and manage people of various backgrounds and various capability levels.

Finding the right level of fidelity with which to engage members of the team is a critical component of social system formation. To wit: asking people without first-hand knowledge of “what good looks like” in supply chain management to design a next-generation process will yield a "faster horses" solution that is, at best, less bad than what the company has today. Suppose the manager is able to recognize this deficiency; that recognition is fantastic, but it doesn't give the manager license to tell the team to down tools until the SME who isn't quite a SME is replaced. The fact is, no team is perfect, and the manager has to work with the people they’re staffed with. If a genuine SME can’t be sourced full time, it is incumbent on the manager to (a) create a social system within which the not-quite-a-SME is able to contribute to the best that their knowledge allows, so that the team can make meaningful progress; and (b) in recognition of the not-quite-a-SME's limitations, see that the team validates proposed solutions and models against established industry models (which the manager may very well have to self-source), potentially supplemented with slivers of time from experts at analyst organizations or from specialists (who may have to be sourced from the manager's network).

How the manager deals with circumstances like this is the difference between a person mindlessly executing a mechanical process and a person steering a team toward an outcome. The former is an executor; the latter is a manager.

Sometimes, of course, it just isn't there to be done. About a decade ago, I was part of a technology team partnering with a financial services data business to replatform its operations and core systems. Before the first day was out, it was patently obvious the SMEs and knowledge workers could regurgitate the keys they pressed in the monthly process they followed, but had no understanding of why they did the things they did, nor could they articulate the value their data provided to the services their customers used it for. Unsurprisingly, the management - none of whom had first-hand knowledge of the business itself, having been installed following the acquisition of the company by private equity - refused to accept our day one conclusion. So on day two we performed workshops that laid bare the knowledge deficit. We abandoned the inception because the people they had brought simply lacked the wisdom that comes with knowledge, and no amount of sourced content and slivers of expert time could compensate: we concluded we weren’t even going to get a faster horses solution out of that group. Can't get blood from a stone, so there's no point continuing to squeeze.

Trust the process? Sure. But any process is only as effective as the people participating in it, and participation is a function of the underlying social system of the team. Creating an effective social system is at the heart of the definition of what management is: getting things done through people.

Taking people at face value based on their title is an abdication of management's responsibility. There are no free rides in management, be it project, program, product, department, division, or executive. You have to know your people; to know your people you have to meet them where they’re at; to meet them where they’re at you have to understand their context; to understand their context you have to have some familiarity with their domain. The manager who fails to do these things is not a manager, they’re an individual contributor with a highfalutin' title.

Wednesday, August 31, 2022

What Does God Need With a Starship?

Andy Kessler wrote in the WSJ this month about the value of being a contrarian. Contrarians have a reputation for being cynics or curmudgeons because they’re out of step with mainstream thinking. And it’s true that being contrarian solely for the purpose of resisting or denying change is generally not helpful. But contrarian thinking can bring a lot of constructive insight.

For quite a few years now, I’ve written about the value of activist investing, which at its best challenges institutional thinking - and, when necessary, institutional reporting - for the benefit of those invested in the business outcome. Activist investors are contrarian thinkers. An effective activist investor sources their own ground truths, creates their own hypotheses from the data, and advocates for those alternative hypotheses. This is true for public company investors and captive IT investors alike. The activist investor in a public company visits company installations, talks to customers, analyzes the footnotes of SEC filings, and develops hypotheses that management may not see or may be choosing not to report. The activist investor in a captive IT investment does the same things: interviews members of the team, reviews code, and analyzes status reports to develop alternative interpretations about the actual progress of and threats to a program or product. The formula is the same: scrutinize the data you’re provided, get some of your own, recontextualize it, and draw your own conclusions. This is the critical thinking technique we were all taught in high school.

But every silver lining has a cloud. The activist isn’t always right. David Einhorn raised questions derived from ground truths and got it right about Allied Capital, St. Joe Company, and Lehman Brothers, but he got it wrong about Keurig. The value of Mr. Einhorn’s contrarian thinking wasn’t in its accuracy as much as it was in offering a fact-based challenge to management’s narrative. The activist articulates a narrative that reframes a situation and draws attention to a risk or deficiency that others don’t see or that management may be obfuscating. The thought exercise helps all investors re-evaluate what they believe their real risk exposure in that specific position to be.

And it’s important to note that, like anything else, investor activism can be a charade. The public company investor may simply be generating doubt about a company to bolster a short position that can be quickly liquidated. Similarly, the captive IT steering committee member who is also a vendor rep may simply be fostering fear, uncertainty and doubt to drive more services revenue from an existing customer.

Perhaps most important of all, the activist investor isn’t very popular. Contrarian thinking takes us out of our comfort zone, makes us consider difficult possibilities, forces us to have data to support the thing that we desperately want to be true, and reminds us that we’re not as smart as we want to believe that we are. But more banally, challenging the board and management meeting-in and meeting-out wears on people. Contrarian thinkers are irritating. 'tis best that you enjoy dining alone.

Being a contrarian is not the easiest path to take. John Kay once wrote that regulators, if they are not to be co-opted by the regulated, require "...both an abrasive personality and considerable intellectual curiosity to do the job." Contrarian thinking at its best.

Because sometimes, when everybody is too mesmerized or beat down or overwhelmed, or simply can't be bothered, for the sake of everybody concerned, somebody has to ask: “what does God need with a starship?”

Sunday, July 31, 2022

Shadows

One of the benefits of being an agile organization is the elimination of IT shadows: the functions and activities that crop up in response to the inadequacy of the plans, competency and capacity of captive IT.

IT shadows appear in a lot of different forms. There are shadow IT teams of developers or data engineers that spring up in areas like operations or marketing because the captive IT function is slow to respond to internal customer demand, if not outright incapable of it. There are also shadow activities within large software delivery programs: the phases that get added long after delivery starts and well before code enters production, because integrating the code produced by dependent teams working independently proves far more problematic than anticipated; the extended testing phases - or more accurately, testing phases that extend far longer than anticipated - because of poor functional and technical quality that goes undiscovered during development; the scope taken out of the 1.0 release, resulting in additional (and originally unplanned) releases to deliver the initially promised scope - releases that only offer the promise to deliver in the future what was promised in the past, at the cost of innovation in the present.

None of these functions and activities are planned and accounted for before the fact; they manifest as expense bloat on the income statement, rationalized as no-alternative, business-as-usual management decisions.

The historical response of captive IT to these problems was to pursue greater control: double down on big up-front design to better anticipate what might go wrong so as to prevent problems from emerging in the first place, supplemented with oppressive QA regimes to contain the problems if they did. Unfortunately, all the planning in the world can’t compensate for poor inter-team collaboration, just as all the testing in the world can’t inspect quality into the product.

Agile practices addressed these failures through teams able to solve for end-to-end user needs. The results, as measured and reported by Standish, Forrester, and others, were as consistent as they were compelling: Agile development resulted in far fewer delays, cost overruns, quality problems and investment cancellations than its waterfall counterpart. With enough success stories and experienced practitioners to go round, it’s no surprise that so many captive IT functions embraced Agile.

But scale posed a challenge. The Agile practices that worked so well in small to midsize programs now needed to support very large programs and large enterprise functions. How scale is addressed marks the critical distinction between the truly agile and those that are just trying to be Agile.

Many in the agile community solved for scale by applying the implicit agile value system, incorporating things like autonomous organizations (devolved authority), platforms (extending the product concept into internally-facing product capabilities) and weak ownership of code (removing barriers of code ownership). Unfortunately, all too many went down the path of fusing Agile with Waterfall, assimilating constructive Agile practices like unit testing and continuous build while simultaneously corrupting other practices like Stories (which become technical tasks under another name) and Iterations (which become increments of delivery, not iterations of evolving capability), ultimately subordinating everything under an oppressive regime pursuing adherence to a plan. Yes, oppressive: there are all too many self-proclaimed "Agile product organizations" where the communication flows point in one direction - left to right. These structures don’t just ignore feedback loops, they are designed to repress feedback.

To anyone who has ever worked in or even just advocated for the agile organization, this compromise is unconscionable, as agile is fundamentally the pursuit of excellence - in engineering, analysis, quality, and management. Once Agile is hybridized into waterfall, the expectation for Agile isn’t excellence in engineering and management and the like; it is instead a means of increasing the allegiance of both manager and engineer to the plan. Iteration plans are commitments; unit tests are guarantees of quality.

Thus compromised, the outcomes are largely the same as they ever were: shadow activities and functions sprout up to compensate for IT’s shortcomings. The captive IT might be Agile, but it isn’t agile, as evidenced by the length of the shadows it casts throughout the organization.