I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Sunday, June 30, 2024

The yield curve is inverted. Tech's problem is asset price inflation.

The business of custom software development is, at its core, an asset business. Software development is the business of converting cash to intangible assets by way of human effort. Plenty of people opine about how important human labor is to software, and of course it is. Good development practices reduce time to delivery and create low-maintenance, easy-to-evolve software. What labor does and does not do is extremely important to the viability of software investments.

But software is an asset, not an operating expense. If there is no yield on a software asset, investing in software is a bad use of capital. No yield, no capital, no cash for salaries for people developing software. Money matters, whether or not we like to admit it.

This is a stark reversal for tech. When money was cheap and abundant as it was for over a decade, tech had the opposite problem: no yield, no problem! When capital wasn’t a constraint, the investment qualification wasn’t “what is this asset going to do for us” but “what are we denying ourselves if we don’t try to do something in this area.” Trying was more important than succeeding.

There are those who want to believe that financial markets are unemotional, but they are not. Momentum is a crucial factor in finance. Momentum is what gets investors to pile into the same position. Momentum turns a $100k plot of land into a $2m real estate “investment”. Momentum is an emotional justification in that the rationalization is hope, not fundamentals.

Tech rode momentum for a long, long time. Before COVID, the story that built momentum for tech was disruption. During COVID, the story was tech as a commercial coping mechanism. Momentum put abundant amounts of cash into the tech sector. Abundant cash inflated more than just salaries: it also inflated technical architectures and solution complexity. Money distorts.

That momentum has run its course. Tech is reaching - grasping - for any growth story. To wit: GenAI here, there, everywhere.

There are two winning hands in momentum trades: “hold to maturity” and “greater fool theory”. The former requires a lot of intestinal - not to mention free cash flow - fortitude. The latter requires finding somebody foolish enough to spend as much (and ideally more). Nearly two years of contraction in the tech sector indicates a shortage of greater fools. Yes, some subsets of tech still command premium pricing; suffice it to say there is no rising tide lifting all boats, and there has not been one for quite some time.

Tech rode the wave of price inflation. The yield curve indicates that the wave has crested.

Friday, May 31, 2024

I can explain it to you, but I can't comprehend it for you

I’ve given my share of presentations over the years. I am under no illusions that I am anything more than a marginal presenter. My presentations are information dense, a function of how I learn. Many years ago, I realized that I learn when I’m drinking from the fire hose, not when content is spoon-fed to me. I am focused and engaged with the former; I become disinterested and disengaged with the latter. Of all the recommended presentation styles I’ve been exposed to over the years, I find the “tell them what you’re going to tell them / tell them / tell them what you just told them” pattern intellectually insulting. I prefer to treat my audience with respect and assume they are intelligent, so I err on the side of content density.

For this style to be effective, the audience has to also want to drink from the fire hose. If they do not, you won’t get past the first couple of paragraphs. But in over 30 years in the tech business, I have found that tech audiences generally respond well to high-content-density presentations.

As the person leading a briefing or presentation, it is your responsibility to connect with the audience. However, there are limitations. The content as prepared is only as good as the guidance you’ve received to shape the subject and depth of detail. A presenter with subject matter expertise isn’t (or at any rate, should not be) wed to the content and can generally shift gears to adjust when there is a fidelity mismatch between content and audience. But being asked - or outright directed - to explain even a moderately advanced concept in a limited amount of time to an audience lacking the subject matter basics is going to fall flat every single time.

* * *

People buy things - large capital things - for which they have little or no qualification other than the fact that they have money to spend. Few people who buy houses are carpenters, plumbers or electricians. Few people who buy used cars are mechanical engineers.

This expertise disconnect plagues the software business. There are, unfortunately, times when contracts for custom software development are awarded by individuals or committees who have (at best) limited understanding of the mechanics of software delivery. And there are times when contracts for software product licenses are awarded by individuals or committees who have (at best) limited understanding of the complexity of the domain into which the licensed product must work.

An egregious, although not atypical, example is an 8-figure custom software development contract with payouts indexed to “story points developed”. Not “delivered”, not “in production.” The delivery vendor tapped the contract for cash by aggressively moving high-point story cards to “dev complete”. Never mind that nothing had reached production; never mind that nothing had reached UAT. By the time I got a look at it (they were looking - hoping - for process improvements that would yield deployable software rather than deplorable software), every story had an average of 7 open defects with a nearly 100% reopen rate. And yes, smart reader, the apparent currency was “story points,” but that was not the real currency. The real currency was the cash value of the contract; story points were simply a proxy for extracting that cash. Ironically, the buyer thought the arrangement was shrewd because it tied cash to work. Sadly, it failed to tie cash to outcomes. In the event, the vendor held the buyer hostage: there were no clawbacks, so the buyer would either have to abandon the investment or sign extension after extension in the hopes of making good on it.

Licensed software products are no different. I’ve seen too many occasions where a buyer entered into a license agreement for some product without first mapping out how to integrate that product into their back office processes. When the buyer doesn’t come to the table prepared with a detailed understanding of their as-is state, they default to letting the vendor take the lead in designing the solution architecture for the to-be state based entirely on generic and simplistic use cases, with disastrous outcomes for the buyer. Licensed products tend not to be 100% metered cost, and the vendor sales rep has a quota to meet and a commission to earn, so the buyer commits to some minimum term-based subscription spend with metered usage piled on top of that. In practice, this means the clock is ticking on the buyer to integrate the licensed product the second the ink dries on the contract. Finding out after the contract is signed that the intrinsic complexity of the buyer’s environment is many orders of magnitude beyond the vendor-supplied architecture is the buyer’s problem, not the vendor’s.

To level this information asymmetry between buyer and seller, buyers have independent experts they can call on to give an opinion of the contract or product or vendor or process. But of course there are experts and there are people with certifications. An expert in construction can look beyond things like surface damage to drywall and trim and determine whether or not a building is structurally sound. Then there are the “certified building inspectors” who look closely at PVC pipe covered in black paint and call it “cast iron plumbing.” All the certification verifies is that once upon a time, the certificate bearer passed a test. What is true in building construction is equally true in software construction. Buyers have access to experts but that doesn’t do them a bit of good if they don’t know how to qualify their experts.

Of course there’s a little more to it than that. Buyers have to be able to qualify their experts, want their expertise, and be willing and able to act on it. I’ve advised on a number of acquisitions. No person mooting an acquisition wants to hear “it’s a bad acquisition at any price”, especially if their job is to identify and close acquisitions. Years ago, I was asked to evaluate a company that claimed to have a messaging technology that could be used to efficiently match buyers and sellers of digital advertising space. They had created a messaging technology that was different from JMS only in that (a) theirs was functionally inferior and (b) it was not free. Instead of expressing relief at avoiding a disastrous deployment of capital, the would-be investor was desperate for justification that would overshadow these… inconveniences. As the saying goes, “you cannot explain something to somebody whose job depends on not understanding it.”

* * *

I have been fortunate to have worked overwhelmingly with experts and professional decision makers over the years, people who have been thoughtful, insightful, willing to learn, and who in turn have stretched me as well. I sincerely hope I have done the same for them.

Unfortunately, I have had a few brushes with those who fell irreconcilably short. The CTO of a capital markets firm who requested an advanced briefing on the mechanics of how distributed ledger technology could change settlement of existing complex financial products and open the door to new ones, but had done nothing before the briefing to learn the basics of what “this blockchain thing” is. The mid-level VP leading an RFP process who derailed a vendor presentation because she simply could not fathom how value stream mapping of business operations exposes inefficiencies that have income statement ramifications.

When we fail to connect with an audience, we have to first internalize the failure and look for what we might have done differently: what did we hear but not process at the time, what question should we have asked to clarify the perspective of the person asking. What is spoken is less important than what is heard.

At a certain point, though, responsibility for understanding lies with the listener. The audience member adamantly demanding further explanation may be doing so for any number of reasons, ranging from simple neglect (a failure to have done homework on the basics) to a deliberate unwillingness to understand (i.e., cognitive dissonance).

Which is where the title of this post comes in. It’s a comment Ed Koch, the 105th mayor of New York, made to a constituent who demanded to know why the mayor’s office was introducing policies to reduce taxes, some time after New York had financially imploded and was still hemorrhaging high-income earners and businesses. “I can explain it to you,” he told this constituent, “but I can’t comprehend it for you.”

Tuesday, April 30, 2024

The era of ultra low interest rates is over. Tech has painful adjustments to make.

Interest rates have been climbing for two years now. The Wall Street Journal ran an article yesterday with the headline that the days of ultra low interest rates are over. Tech will have to adjust. It’s going to be painful.

When capital is expensive, we measure investments against the hurdle rate: the rate of return an investment must exceed to be a demonstrably good use of capital. When capital is ridiculously cheap, we no longer measure investment success against the hurdle rate. In practice, cheap capital makes financial returns no more and no less valuable than other forms of gain.
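A minimal sketch of that arithmetic, with purely illustrative numbers: discount an investment’s expected cash flows at the hurdle rate, and if the net present value is negative, the investment does not clear the hurdle. The same cash flows that look wonderful when money is nearly free can fail outright once capital has a real cost.

```python
# Illustrative only: does an investment clear the hurdle rate?
def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] is the outlay made today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical software investment: $2m today, $700k of benefit per year for four years.
cash_flows = [-2_000_000, 700_000, 700_000, 700_000, 700_000]

print(npv(0.03, cash_flows))  # cheap capital:     ~ +$602k, easily clears a 3% hurdle
print(npv(0.15, cash_flows))  # expensive capital: ~ -$1.5k, fails a 15% hurdle
```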

There are ramifications to this. As fiduciary measures lapse, so does investment performance. We go in pursuit of non-financial goals like "customer engagement rate". We get negligent in expenditure: payrolls bloated with tech employees, vendors stuffing contracts with junior staff. We get lax in our standard of excellence as employees are aggressively promoted without requisite experience. We get sloppy in execution: delivery as a function of time is simply not a thing, because the business is going to get whatever software we get done when we get it done.

Capital may not be 22% Jimmy Carter era expensive, but it ain’t cheap right now. Tech has to earn its keep. That means a return to once-familiar practices, as well as change that orchestrates a purge of tech largesse. Business cases with financial returns first, non-financial returns second. Contraction of labor spend: restructuring to offload the overpromoted, and consolidation of roles or lower compensation for specialization. Transparency about what we will deliver when and for what cost, and what the mitigation is should we not. An end to vanity tech investments, because the income statement, much less the balance sheet, can no longer support them.

Some areas of the tech economy will be immune to this for as long as they are thematically relevant. AI and GenAI are TINA (there is no alternative) investments: a lot of firms have no choice but to spend on exploratory investments in AI, because Wall Street rewards imagination and will reward the remotest indication of successful conversion of that imagination that much more. Yet despite revolutionary implications, AI enthusiasm is tempered compared to the frothy valuations of tech pursuits of previous generations, a function of investor preference for, as James Mackintosh put it, profits over moonshots. Similarly, in businesses where there is a tech arms race on because innovation offers competitive advantage, such as in-car software, it will be business as usual. But these arms races will end, so it will be tech business as usual until it isn’t. (In fact, in North America, this specific arms race may not materialize for a long, long time as EV demand has plateaued, but that’s another post for another day.)

Tech has had the luxury of not being economically anchored for a long time now. If interest rates settle around 400 bps as the WSJ speculated yesterday, those days are over. The adjustment to a new reality will be long and painful because there’s a generation of people in tech who have not been exposed to economic constraints.

This is the Agile Manager blog, as it has been since I started it in 2006. Good news: this change doesn’t mean a return to the failed policies of waterfall. Agile figured out how to cope with these economic conditions long ago. Tech may not remember how to use those Agile tools, but it has them in the toolkit. Somewhere.

That said, I also blog about economics and tech. If the Fed funds rate lands in the 400 bps range, tech is in for still more difficult adjustments. More specifically, the longer tech clings to hopes for a return to ultralow interest rates, the longer the adjustment will last, and the more painful it will be.

The ultralow rate party is over. It’s long past time for tech to sober up.

Sunday, March 31, 2024

Don’t queue for the ski jump if you don’t know how to ski

I’ve mentioned before that one of my hobbies is lapidary work. I hunt for stone, cut it, shape it, sand it, polish it, and turn it into artistic things. I enjoy doing this work for a lot of reasons, not least of which is that I approach it every day not with an expectation of “what am I going to complete” but “what am I going to learn.”

As a learning exercise, it is fantastic. I keep a record of what I do to individual stones, of how I configure machines and the maintenance I perform on them, and of the totality of activities I do in the workshop each day. I do this as a means of cataloging what I did (writing it down reinforces the experience) and reflecting on why I chose to do the things that I did. Sometimes it goes fantastically well. Sometimes it goes very poorly, often because I made a decision in the moment that misread a stone, misinterpreted how a tool was functioning, or misunderstood how a substance was reacting to the machining.

My mistakes can be helpful because, of course, we learn from mistakes. I learn to recognize patterns in stone, to recognize when there is insufficient coolant on a saw blade, to keep the torch a few more inches back to regulate the temperature of a metal surface.

But mistakes are expensive. That chunk of amethyst is unique, once-in-a-lifetime; cut it wrong and it’s never-in-a-lifetime. If there isn’t coolant splashing over a stone you’re cutting, you’re melting an expensive diamond-encrusted saw blade. Overheat that stamping to the point where it warps, or cut that half-hard wire to the wrong length, and you’ve just wasted a precious metal that costs (as of this writing) $25+ per ounce for silver, $2,240+ per ounce for gold.

Learning from a video or a website or a good old-fashioned book is wonderful, but that’s theory. We learn through experience. Whether we like to admit it or not, a lot of experiential learning results in, “don’t do it that way.”

Learning is the human experience. Nobody is omniscient.

But learning can be expensive.

* * *

A cash-gushing company that has been run on autopilot for decades gets a new CEO who determines it employs thousands doing the work of dozens, and since most of these people can’t explain why they do what they do, the CEO concludes there is no reason why, and spots an opportunity to squeeze operations to yield even better cash flows. Back-office finance is one of those functions, and that’s just accounting, right? That seems like a great place to start. Deploy some fintech and get these people off the payroll already.

Only, nobody really understands why things are the way they are; they simply are. Decades of incremental accommodation and adjustment have rendered back-office operations extremely complicated, with edge cases to edge cases. Call in the experts. Their arguments are compelling. Surely we can get rid of 17 price discounting mechanisms and have only 2? Surely we can have a dozen sales tax categories instead of 220? Surely we can get customers to pay with a tender other than cash or check? All plausible, but nobody really knows (least of all Shirley). Nobody on the payroll can explain why the expert recommendations won’t work, so the only way to really find out is to try.

Out comes a new, streamlined customer experience with simplified terms, tax and payments. Only, we lose quite a lot of customers to the revised terms, either because (a) two discounting mechanisms don’t really cover 9x% of scenarios like we thought or (b) we’re really lousy at communicating how those two discounts work. We lose transactions beyond that because customers have trust issues sharing bank account information with us. And don’t get us started on the sales tax remittance Hell we’re in now because we thought we could simplify indirect tax.

Ok, we tried change, and change didn’t quite work out as we anticipated. It took us tens of millions of dollars of labor and infrastructure costs to figure out if these changes would actually work in the first place. Bad news is, they didn’t. Good news is, we know what doesn’t work. Hollow victory, that. That’s a lot of money spent to figure out what won’t work. By itself, that doesn’t get us close to what will work. Oh and by the way, we spent all the money, can we please have more?

Let’s zoom out for a minute. How did we get here? Since the employees don’t really know why they do what they do, and since all this activity is so tightly coupled, what is v(iable) makes the m(inimum) pretty large, leaving us no choice but to run very coarse-grained tests to figure out how to change the business with regard to customer-facing operations that translate into back-office efficiencies. Those tests have limited information value: they either work or they do not work. Without a lot of post-test study, we don’t necessarily know why.
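One loose way to put a number on “limited information value” - borrowing Shannon’s measure purely as an illustration, with made-up figures - is to compare how much a coarse pass/fail outcome can ever tell us with how much an outcome broken out by, say, discounting mechanism could tell us:

```python
# Loose illustration with made-up numbers: how much can a test outcome tell us?
from math import log2

def entropy_bits(probs):
    """Shannon entropy of a discrete outcome distribution, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

coarse_test = [0.5, 0.5]                  # the change "works" or it doesn't
by_mechanism = [0.25, 0.10, 0.40, 0.25]   # hypothetical spread of outcomes across four discounting mechanisms

print(entropy_bits(coarse_test))   # 1.0 bit - the ceiling for a pass/fail experiment
print(entropy_bits(by_mechanism))  # ~1.86 bits - finer-grained outcomes, more to learn
```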

This is not to say these coarse tests are without information value. With more investment of labor hours, we learn that there are really four discounting mechanisms, with a side order of optionality for three of them that we need to offer because of nuances in the accounting treatment our customers have to deal with. That’s not two, but it is still better than the nineteen we started with. And it turns out that with two-factor authentication we can build enough trust with customers that they will share their banking details, so we can get out of the physical cash business. Indirect tax? Well, that was a red herring: the 220 categories previously supported are more accurately 1,943 under the various provincial and state tax codes. Good news is, we have a plan to solve for scaling up (scenarios) and scaling down (we’ll not lose too much money on a sales tax scenario of one).

Of course, we’ll need more money to solve for these things, now that we know what “these things” are.

That isn’t a snarky comment. These are lessons learned after multiple rounds of experiments, each costing 7 or 8 figures, and most of them commercially disappointing. We built it and they didn’t come; they flat-out rejected it. We got it less wrong the second, third, fourth, fifth time around, and eventually we unwound decades of accidental complexity that had become the operating model of both back office and customer experience, but that nobody could explain. Given unlimited time and money, we can successfully steer the back office and customers through episodic bouts of change.

Given unlimited time and money. Maybe it took five times, or seven times, or only three. None was free, and each experiment cost seven to eight figures.

* * *

There are a few stones I’ve had on the shelf for many, many years. They are special stones with a lot of potential. Before I attempt to realize that potential, I want to achieve sufficient mastery, to develop the right hypothesis for what blade to use and what planes to cut, for what shape to pursue, for what natural characteristics to leave unaltered and what surfaces to machine. Inquisitiveness (beginner’s mind) twined with experience on similar if more ordinary stones has led me to start shaping some of those special ones, and I’m pleased with the results. But I didn’t start with those.

Knowledge is power as the saying goes, and “learn” is the verb associated with acquiring knowledge. But not all learning is the same. The business that doesn’t know why it does what it does is in a crisis requiring remedial education. There is no shame in admitting this, but of course there is: that middle manager unable to explain why they do the things they do will feel vulnerable because their career has peaked as the “king of the how in the here and now.” Lessons learned from being enrolled in the master class - e.g., being one of the leads in changing the business - will be lost on this person. And when the surrogate for expertise is experimentation, those lessons are expensive indeed.

Leading change requires mastery and inquisitiveness. The former without the latter is dogma. The latter without the former is a dog looking at a chalkboard with quantum physics equations: it’s cute, as Gary Larson pointed out in The Far Side, but that’s the best that can be said for it. When setting out to do something different, map out the learning agenda that will put you in the position of “freely exercising authority”. But first, run some evaluations to ascertain how much “(re-)acquisition of tribal knowledge” needs to be done. There is nothing to prevent you from enrolling in the master class without fluency in the basics, but it is a waste of time and money to do so.

Thursday, February 29, 2024

Patterns of Poor Governance

As I mentioned last month, many years ago I was toying around with a governance maturity model. Hold your groans, please. Turns out there are such things. I’m sure they’re valuable. I’m equally sure we don’t need another. But as I wrote last month there seemed to be something in my scribbles. Over time, I’ve come to recognize it not as maturity, but more as different patterns of bad governance.

The worst case is wanton neglect, where people function without any governance whatsoever. The organizational priority is on results (the what) rather than the means (the how). This condition can exist for a number of reasons: because management assumes competency and integrity of employees and contractors; because results are exceedingly good and management does not wish to question them; because management does not know the first thing to look for. Bad things aren’t guaranteed to happen in the absence of governance, but very bad things can indeed (Spygate at McLaren F1; rogue traders at Société Générale and UBS). Worse still, the absence of governance opens the door to moral hazard, where individuals gain from risk borne by others. We see this in IT when a manager receives quid pro quo - anything from a conference pass to a promise of future employment - from a vendor for having signed or influenced the signing of a contract.

Wanton neglect may not be entirely a function of a lack of will, of course: turning a blind eye equals complicity in bad actions when the prevailing culture is “don’t get caught.”

Distinct from wanton neglect is misplaced faith in models, be they plans or rules or guidelines. While the presence of things like plans and guidelines may communicate expectations, they offer no guarantee that reality is consistent with those guidelines. By way of example, IT managers across all industries have a terrible habit of reporting performance consistent with plans: the “everything is green for months until suddenly it’s a very deep shade of red” phenomenon. Governance in the form of guidelines is often treated as “recommendations” rather than “expectations” (e.g., “we didn’t do it that way because it seemed like too much work”). A colleague of mine, on reading the previous post in this series, offered up that there is a well established definition of data governance (DAMA). Yes there is. The point is that governance is both a noun and a verb; governance “as defined” and “as practiced” are not guaranteed to be the same thing. Pointing to a model and pointing to the implementation of that model in situ are entirely different things. The key defining characteristic here is that governance goes little beyond having a model communicating expectations for how things get done.

Still another pattern of bad governance is governance theater, where there are governance models and people engaged in oversight, but those people do not know how to effectively interrogate what is actually taking place. In governance theater, some governing body convenes and either has the wool pulled over its eyes or simply lacks the will to thoroughly investigate. In regulated industries, we see this when regulators lack the will to investigate despite strong evidence that something is amiss (Madoff). In corporate governance, this happens when a board relies almost exclusively on data supplied by management (Hollinger International). In technology, we see this when a “steering committee” fails to obtain data of its own or lacks the experience to ask pertinent questions of management. Governance theater opens the door to regulatory capture, where the regulated (those subject to governance) dictate the terms and conditions of regulation to the regulators. When governance is co-opted, it yields at best a false positive: an assurance that controls are being exercised effectively when they are not.

I’m sure there are more patterns of bad governance, and even these patterns can be further decomposed, but these cover the most common cases of bad governance I’ve seen.

Back to the question of governance “maturity”: while there is an implied maturity to these - no controls, aspirational controls, pretend controls - the point is NOT to suggest a progression: i.e., aspirational controls are not a precursor to pretend controls. The point is to identify the characteristics of governance as practiced to get some indication of the path to good governance. Where there is governance theater, closing the gap means reforming existing institutions and practices. Misplaced faith requires creating institutions and practices - entirely new muscle memories for the organization. Each represents a different class of problem.

The actions required to get into a state of good governance are not, however, an indication of the degree of resistance to change. Headstrong management may put up a lot of resistance to reform of existing institutions, while inexperienced management may welcome creation of governance institutions as filling a leadership void. Just because the governance gap is wide does not inherently mean the resistance to change will be as well.

If you’re serious about governance and you’re aware it’s lacking as practiced today, it is useful to know where you’re starting from and what needs to be done. If you do go down that path, always remember that it’s a lot easier for everybody in an organization - from the most senior executive management to the most junior member of the rank and file - to reject governance reform than to come face to face with how bad things might actually be.

Wednesday, January 31, 2024

Governance Without Benefit

I’ve been writing about IT governance for many years now. At the time I started writing about governance, the subject did not attract much attention in IT, particularly in software development. This was a bit surprising given the poor track record of software delivery: year after year, the Standish CHAOS reports drew attention to the fact that the majority of IT software development investments wildly exceeded spend estimates, fell short of functional expectations, and were plagued with poor quality, and as a result quite a lot of them were canceled outright. Drawing attention to such poor results gave a boost to the Agile community, who were pursuing better engineering and better management practices. Each is clearly important to improving software delivery outcomes, but neither addresses the contextual or existential factors of investments in software. To wit: somebody has to hold management accountable for keeping delivery and operations performing within investment parameters and, if they are not, either fix the performance with or without that management or negotiate a change in parameters with investors. Governance, not engineering or management, is what addresses this class of problem.

If IT governance was a fringe activity twenty years ago, it is everywhere today: we have API governance and data governance and AI governance and on and on. Thing is, there is no agreement as to what governance is. Depending on who you ask, governance is “the practice” of defining policies, or it “helps ensure” things are built as expected, or it “promotes” availability, quality and security of things built, or it is the actual management of availability, quality and security. None of these definitions are correct, though. Governance is not just policy definition. Terms like “promote” and “helps ensure” are weasel words that imply “governance” is not a function held accountable for outcomes. And governance intrinsically cannot be management because governance is a set of actions with concomitant accountability that are specifically independent of management.

That governance is still largely a sideline activity in IT is no surprise. For years, ITIL was the go-to standard for IT governance. ITIL defines consistent, repeatable processes rooted in “best practices”. The net effect is that ITIL defines governance as “compliance”. As long as IT staff follow ITIL-consistent processes, IT can’t be blamed for any outcome that results from its activity: they were, after all, following established “best practices.” As there is not a natural path from self-referential CYA function to essential organizational competency, it is unrealistic to expect that IT governance would have found one by now.

I’ve long preferred applying the definition of corporate governance to IT governance. Corporate governance boils down to three activities: set expectations, hire managers to pursue those expectations, and verify results. When expectations aren’t met, management is called to task by the board and obliged to fix things. If expectations aren’t met for a long period of time, the managers hired to deliver them have to go or the expectations have to go. And if expectations aren’t met after that, the board goes. Before it gets to anything so drastic, governance has that third obligation, to “verify results.” Good governance sources data independently of management by looking directly at artifacts and constructing analyses on that data. In this way, good governance has early warning as to whether expectations are in jeopardy or not, and can assess management’s performance independently of management’s self-reporting. Governance is not “defining policies” or “helping to ensure” outcomes; governance is actively involved in scrutinizing and steering and has the authority to act on what it has learned.
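As a sketch of what sourcing data independently can look like in practice - the file, field names, and figures here are hypothetical - a governing body can derive its own view of delivery straight from the work-item record rather than from management’s status deck:

```python
# Hypothetical sketch: derive delivery measures from raw artifacts (an issue
# tracker export) rather than from management's self-reported status.
import csv
from datetime import datetime
from statistics import median

def day(ts):
    return datetime.strptime(ts, "%Y-%m-%d")

with open("work_items.csv", newline="") as f:
    items = list(csv.DictReader(f))  # assumed columns: id, started, deployed, times_reopened

done = [i for i in items if i["deployed"]]
cycle_days = [(day(i["deployed"]) - day(i["started"])).days for i in done]
reopen_rate = sum(int(i["times_reopened"]) > 0 for i in items) / len(items)

print(f"{len(done)} of {len(items)} items actually reached production")
print(f"median days from start to production: {median(cycle_days)}")
print(f"share of items reopened at least once: {reopen_rate:.0%}")
```

The particular measures matter less than where the numbers come from: the artifacts themselves, so management’s self-reporting can be checked against them rather than taken on faith.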

Governance is concerned with two questions: are we getting value for money, and are we receiving things in accordance with expectations. Multiple APIs that do the same thing, duplicative data sources that don’t reconcile, IT investments that steamroll their business cases, all make a mockery of IT governance. We’ve got more IT “governance” than we’ve ever had, yet all too often it just doesn’t do what it’s supposed to do.

I’m picking up the topic of IT governance again because it does not appear to me that the state of IT governance is materially better than it was two decades ago, and this deserves attention. Soon after I started down this path, I thought it would be helpful to have a governance “maturity model.” No, the world does not need another maturity model, let alone one for an activity that is largely invisible and only conspicuous when it fails or simply isn’t present. It doesn’t help that good governance does not guarantee a good outcome, nor does poor governance guarantee a bad one. Governance is a little too abstract, difficult to describe in simple and concrete terms, and consequently difficult for people to wrap their heads around. That, in turn, renders any “maturity model” an academic exercise at best.

Still, there is room for something that characterizes all this governance on an IT estate and frames it as an agent for good or bad. That is, in the as-practiced state, is governance of this activity (say, API or appdev) materially reducing or increasing exposure to a bad outcome? That’s a start.

* * *

Dear readers,

I took extended leave from work last year, and decided to also take a break from writing the blog. I’m back.

Also, I do want to apologize that I’ve been unable all of these years to get this site to support https. It’s supposed to be a simple toggle in the Google admin panel to enable https, but for whatever reason it has never worked, which I suspect has to do with the migration of the blog from Blogger into Google. Despite admittedly tepid efforts on my part, I've not found a human who can sort this out at Google. I appreciate your tolerance.

Monday, July 31, 2023

Resistance

Organizational change, whether digital transformation or simple process improvement, spawns resistance; this is a natural human reaction. Middle managers are the agents of change, the people through whom change is operationalized. The larger the organization, the larger the ranks of middle management. It has become commonplace among management consultants to target middle management as the cradle of resistance to change. The popular term is “the frozen middle”.

There is no single definition of what a frozen middle is, and in fact there is quite a lot of variation among those definitions. Depending on the source, the frozen middle is:

  • an entrenched bureaucracy of post-technical people with no marketable skills who only engage in bossing, negotiating, and manipulating organizational politics - change is impossible with the middle managers in situ today
  • an incentives and / or skills deficiency among middle managers - middle managers can be effective change agents, but their management techniques are out of date and their compensation and performance targets are out of alignment with transformation goals
  • a corporate culture problem - it’s safer for middle managers to do nothing than to take risks, so working groups of middle managers respond to change with “why this can’t be done” rather than “how we can do this”
  • not a middle management problem at all, but a leadership problem: poor communication, unrealistic timelines, thin plans - any resistance to change is a direct result of executive action, not middle management

The frozen middle is one of these, or several of these, or just to cover all the bases, a little bit of each. Of course, in any given enterprise they’re all true to one extent or another.

Plenty of people have spent plenty of photons on this subject, specifically articulating various techniques for (how clever) “thawing” the frozen middle. Suggestions like “upskilling”, “empowerment”, “champion influencers of change”, “communicate constantly”, and “align incentives” are all great, if more than a little bit naive. Their collective shortcoming is that they deal with the frozen middle as a problem of the mechanics of change. They ignore the organizational dynamics that create resistance to change among middle management in the first place.

Sometimes resistance is a top-down social phenomenon. Consider what happens when an executive management team is grafted onto an organization. That transplanted executive team has an agenda to change, to modernize, to shake up a sleepy business and make it into an industry leader. It isn’t difficult to see this creates tensions between newcomers and long-timers, who see one another as interlopers and underperformers. Nor is it difficult to see how this quickly spirals out of control: executive management that is out of touch with ground truths; middle management that fights the wrong battles. No amount of “upskilling” and “communication” with a side order of “empowerment” is going to fix a dystopian social dynamic like this.

One thing that is interesting is that the advice of the management consultant is to align middle management’s performance metrics and compensation with achievement of the to-be state goals. What the consultants never draw attention to is executive management receiving outsized compensation for as-is state performance; compensation isn’t deferred until the to-be state goals are demonstrably realized. Plenty of management consultants admonish executives for not “leading by example”; I’ve yet to read any member of the chattering classes admonish executives to be “compensated by example”.

There are also bottom-up organizational dynamics at work. “Change fatigue” - apathy resulting from a constant barrage of corporate change initiatives - is treated as a problem created by management that management can solve through listening, engagement, patience and adjustments to plans. “Change skepticism” - doubts expressed by the rank-and-file - is treated as an attitude problem among the rank-and-file that is best dealt with by management through co-opting it or crowding out the space for it. That is unfortunate, because it ignores the fact that change skepticism is a practical response: the long-timers have seen the change programs come and seen the change programs go. The latest change program is just another that, if history is any guide, isn’t going to be any different than the last. Or the dozen that came and went before the last.

The problematic bottom-up dynamic to be concerned with isn’t skepticism, but passivity. The leader stands in front of a town hall and announces a program of change. Perhaps 25% will say, this is the best thing we’ve ever done. Perhaps another 25% will say, this is the worst thing we’ve ever done. The rest - 50% plus - will ask, “how can I not do this and still get paid?” The skeptic takes the time and trouble to voice their doubts; management can meet them somewhere specific. It is the passengers - the ones who don’t speak up - who represent the threat to change. The management consultants don’t have a lot to say on this subject either, perhaps because there is no clever platitude to cure the apathy that forms what amounts to a frozen foundation.

Is middle management a source of friction in organizational change? Yes, of course it can be. But before addressing that friction as a mechanical problem, think first about the social dynamics that create it. Start with those.