I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Thursday, February 29, 2024

Patterns of Poor Governance

As I mentioned last month, many years ago I was toying around with a governance maturity model. Hold your groans, please. Turns out there are such things. I’m sure they’re valuable. I’m equally sure we don’t need another. But as I wrote last month, there seemed to be something in my scribbles. Over time, I’ve come to recognize it not as maturity, but more as different patterns of bad governance.

The worst case is wanton neglect, where people function without any governance whatsoever. The organizational priority is on results (the what) rather than the means (the how). This condition can exist for a number of reasons: because management assumes competency and integrity of employees and contractors; because results are exceedingly good and management does not wish to question them; because management does not know the first thing to look for. Bad things aren’t guaranteed to happen in the absence of governance, but very bad things can indeed (Spygate at McLaren F1; rogue traders at Société Générale and UBS). Worse still, the absence of governance opens the door to moral hazard, where individuals gain from risk borne by others. We see this in IT when a manager receives a quid pro quo - anything from a conference pass to a promise of future employment - from a vendor for having signed or influenced the signing of a contract.

Wanton neglect may not be entirely a function of a lack of will, of course: turning a blind eye equals complicity in bad actions when the prevailing culture is “don’t get caught.”

Distinct from wanton neglect is misplaced faith in models, be they plans or rules or guidelines. While the presence of things like plans and guidelines may communicate expectations, they offer no guarantee that reality is consistent with those guidelines. By way of example, IT managers across all industries have a terrible habit of reporting performance consistent with plans: the “everything is green for months until suddenly it’s a very deep shade of red” phenomenon. Governance in the form of guidelines is often treated as “recommendations” rather than “expectations” (e.g., “we didn’t do it that way because it seemed like too much work”). A colleague of mine, on reading the previous post in this series, offered up that there is a well-established definition of data governance (DAMA). Yes there is. The point is that governance is both a noun and a verb; governance “as defined” and “as practiced” are not guaranteed to be the same thing. Pointing to a model and pointing to the implementation of that model in situ are entirely different things. The key defining characteristic here is that governance goes little beyond having a model communicating expectations for how things get done.

Still another pattern of bad governance is governance theater, where there are governance models and people engaged in oversight, but those people do not know how to effectively interrogate what is actually taking place. In governance theater, some governing body convenes and either has the wool pulled over their eyes or simply lacks the will to thoroughly investigate. In regulated industries, we see this when regulators lack the will to investigate despite strong evidence that something is amiss (Madoff). In corporate governance, this happens when a board relies almost exclusively on data supplied by management (Hollinger International). In technology, we see this when a “steering committee” fails to obtain data of its own or lacks the experience to ask pertinent questions of management. Governance theater opens the door to regulatory capture, where the regulated (those subject to governance) dictate the terms and conditions of regulation to the regulators. When governance is co-opted, it yields at best a false positive: a signal that controls are exercised effectively when they are not.

I’m sure there are more patterns of bad governance, and even these patterns can be further decomposed, but these cover the most common cases of bad governance I’ve seen.

Back to the question of governance “maturity”: while there is an implied maturity to these - no controls, aspirational controls, pretend controls - the point is NOT to suggest that there is a progression: i.e., aspirational controls are not a precursor to pretend controls. The point is to identify the characteristics of governance as practiced to get some indication of the path to good governance. Where there is governance theater, closing the gap means reforming existing institutions and practices. Misplaced faith requires creating institutions and practices from scratch: entirely new muscle memory for the organization. Each represents a different class of problem.

The actions required to get into a state of good governance are not, however, an indication of the degree of resistance to change. Headstrong management may put up a lot of resistance to reform of existing institutions, while inexperienced management may welcome creation of governance institutions as filling a leadership void. Just because the governance gap is wide does not inherently mean the resistance to change will be as well.

If you’re serious about governance and you’re aware it’s lacking as practiced today, it is useful to know where you’re starting from and what needs to be done. If you do go down that path, always remember that it’s a lot easier for everybody in an organization - from the most senior executive management to the most junior member of the rank and file - to reject governance reform than to come face to face with how bad things might actually be.

Wednesday, January 31, 2024

Governance Without Benefit

I’ve been writing about IT governance for many years now. At the time I started writing about governance, the subject did not attract much attention in IT, particularly in software development. This was a bit surprising given the poor track record of software delivery: year after year the Standish CHAOS reports drew attention to the fact that the majority of IT software development investments wildly exceeded spend estimates, fell short of functional expectations, were plagued with poor quality, and as a result quite a lot of them were canceled outright. Drawing attention to such poor results gave a boost to the Agile community, who were pursuing better engineering and better management practices. Each is clearly important to improving software delivery outcomes, but neither addresses contextual or existential factors to investments in software. To wit: somebody has to hold management accountable for keeping delivery and operations within investment parameters and, if they are not, either fix performance with or without that management or negotiate a change in parameters with investors. Governance, not engineering or management, is what addresses this class of problem.

If IT governance was a fringe activity twenty years ago, it is everywhere today: we have API governance and data governance and AI governance and on and on. Thing is, there is no agreement as to what governance is. Depending on who you ask, governance is “the practice” of defining policies, or it “helps ensure” things are built as expected, or it “promotes” availability, quality and security of things built, or it is the actual management of availability, quality and security. None of these definitions are correct, though. Governance is not just policy definition. Terms like “promote” and “helps ensure” are weasel words that imply “governance” is not a function held accountable for outcomes. And governance intrinsically cannot be management because governance is a set of actions with concomitant accountability that are specifically independent of management.

That governance is still largely a sideline activity in IT is no surprise. For years, ITIL was the go-to standard for IT governance. ITIL defines consistent, repeatable processes rooted in “best practices”. The net effect is that ITIL defines governance as “compliance”. As long as IT staff follow ITIL-consistent processes, IT can’t be blamed for any outcome that resulted from its activity: they were, after all, following established “best practices.” As there is no natural path from a self-referential CYA function to an essential organizational competency, it is unrealistic to expect that IT governance would have found one by now.

I’ve long preferred applying the definition of corporate governance to IT governance. Corporate governance boils down to three activities: set expectations, hire managers to pursue those expectations, and verify results. When expectations aren’t met, management is called to task by the board and obliged to fix things. If expectations aren’t met for a long period of time, the managers hired to deliver them have to go or the expectations have to go. And if expectations aren’t met after that, the board goes. Before it gets to anything so drastic, governance has that third obligation, to “verify results.” Good governance sources data independently of management by looking directly at artifacts and constructing analyses on that data. In this way, good governance has early warning as to whether expectations are in jeopardy or not, and can assess management’s performance independently of management’s self-reporting. Governance is not “defining policies” or “helping to ensure” outcomes; governance is actively involved in scrutinizing and steering and has the authority to act on what it has learned.

Governance is concerned with two questions: are we getting value for money, and are we receiving things in accordance with expectations? Multiple APIs that do the same thing, duplicative data sources that don’t reconcile, IT investments that steamroll their business cases, all make a mockery of IT governance. We’ve got more IT “governance” than we’ve ever had, yet all too often it just doesn’t do what it’s supposed to do.

I’m picking up the topic of IT governance again because it does not appear to me that the state of IT governance is materially better than it was two decades ago, and this deserves attention. Soon after I started down this path, I thought it would be helpful to have a governance “maturity model.” No, the world does not need another maturity model, let alone one for an activity that is largely invisible and only conspicuous when it fails or simply isn’t present. It doesn’t help that good governance does not guarantee a better outcome, nor that poor governance does not guarantee a bad outcome. Governance is a little too abstract, difficult to describe in simple and concrete terms, and consequently difficult for people to wrap their heads around. That, in turn, renders any “maturity model” an academic exercise at best.

Still, there is room for something that characterizes all this governance on an IT estate and frames it as an agent for good or bad. That is, in the as practiced state, is governance of this activity (say, API or appdev) materially reducing or increasing exposure to a bad outcome? That’s a start.

* * *

Dear readers,

I took extended leave from work last year, and decided to also take a break from writing the blog. I’m back.

Also, I do want to apologize that I’ve been unable all of these years to get this site to support https. It’s supposed to be a simple toggle in the Google admin panel to enable https, but for whatever reason it has never worked, which I suspect has to do with the migration of the blog from Blogger into Google. Despite admittedly tepid efforts on my part, I've not found a human who can sort this out at Google. I appreciate your tolerance.

Monday, July 31, 2023

Resistance

Organizational change, whether digital transformation or simple process improvement, spawns resistance; this is a natural human reaction. Middle managers are the agents of change, the people through whom change is operationalized. The larger the organization, the larger the ranks of middle management. It has become commonplace among management consultants to target middle management as the cradle of resistance to change. The popular term is “the frozen middle”.

There is no single definition of what a frozen middle is, and in fact there is quite a lot of variation among those definitions. Depending on the source, the frozen middle is:

  • an entrenched bureaucracy of post-technical people with no marketable skills who only engage in bossing, negotiating, and manipulating organizational politics - change is impossible with the middle managers in situ today
  • an incentives and / or skills deficiency among middle managers - middle managers can be effective change agents, but their management techniques are out of date and their compensation and performance targets are out of alignment with transformation goals
  • a corporate culture problem - it’s safer for middle managers to do nothing than to take risks, so working groups of middle managers respond to change with “why this can’t be done” rather than “how we can do this”
  • not a middle management problem at all, but a leadership problem: poor communication, unrealistic timelines, thin plans - any resistance to change is a direct result of executive action, not middle management

The frozen middle is one of these, or several of these, or just to cover all the bases, a little bit of each. Of course, in any given enterprise they’re all true to one extent or another.

Plenty of people have spent plenty of photons on this subject, specifically articulating various techniques for (how clever) “thawing” the frozen middle. Suggestions like “upskilling”, “empowerment”, “champion influencers of change”, “communicate constantly”, and “align incentives” are all great, if more than a little bit naive. Their collective shortcoming is that they deal with the frozen middle as a problem of the mechanics of change. They ignore the organizational dynamics that create resistance to change among middle management in the first place.

Sometimes resistance is a top-down social phenomenon. Consider what happens when an executive management team is grafted onto an organization. That transplanted executive team has an agenda to change, to modernize, to shake up a sleepy business and make it into an industry leader. It isn’t difficult to see this creates tensions between newcomers and long-timers, who see one another as interlopers and underperformers. Nor is it difficult to see how this quickly spirals out of control: executive management that is out of touch with ground truths; middle management that fights the wrong battles. No amount of “upskilling” and “communication” with a side order of “empowerment” is going to fix a dystopian social dynamic like this.

One thing that is interesting is that the advice of the management consultant is to align middle management’s performance metrics and compensation with achievement of the to-be state goals. What the consultants never draw attention to is executive management receiving outsized compensation for as-is state performance; compensation isn’t deferred until the to-be state goals are demonstrably realized. Plenty of management consultants admonish executives for not “leading by example”; I’ve yet to read any member of the chattering classes admonish executives to be “compensated by example”.

There are also bottom-up organizational dynamics at work. “Change fatigue” - apathy resulting from a constant barrage of corporate change initiatives - is treated as a problem created by management that management can solve through listening, engagement, patience and adjustments to plans. “Change skepticism” - doubts expressed by the rank-and-file - is treated as an attitude problem among the rank-and-file that is best dealt with by management through co-opting or crowding out the space for it. That is unfortunate, because it ignores the fact that change skepticism is a practical response: the long-timers have seen the change programs come and seen the change programs go. The latest change program is just another that, if history is any guide, isn’t going to be any different than the last. Or the dozen that came and went before the last.

The problematic bottom up dynamic to be concerned with isn’t skepticism, but passivity. The leader stands in front of a town hall and announces a program of change. Perhaps 25% will say, this is the best thing we’ve ever done. Perhaps another 25% will say, this is the worst thing we’ve ever done. The rest - 50% plus - will ask, “how can I not do this and still get paid?” The skeptic takes the time and trouble to voice their doubts; management can meet them somewhere specific. It is the passengers - the ones who don’t speak up - who represent the threat to change. The management consultants don’t have a lot to say on this subject either, perhaps because there is no clever platitude to cure the apathy that forms what amounts to a frozen foundation.

Is middle management a source of friction in organizational change? Yes, of course it can be. But before addressing that friction as a mechanical problem, think first about the social dynamics that create it. Start with those.

Friday, June 30, 2023

How Agile Management Self Destructs

I’ve been writing about Agile Management for over 15 years. Along the way, I’ve written (as have many others) how to get Agile practices into a business, how to scale them, how to overcome obstacles to them, and so forth. I’ve also written about how Agile gets co-opted, and a few months ago I wrote about how Agile erodes through workforce attrition and lowered expectations. I’ve never written about how Agile management can self-destruct.

The first thing to go is results-based requirements. Stories are at the very core of Agile management because they are units of result, not effort. When we manage in units of result, we align investment with delivery: we can mark the underlying software asset to market, we can make objective risk assessments and quantify not only mitigation but the value of mitigation. Agile management traffics in objective facts, not subjective opinion.

The discipline to capture requirements as Stories fades for all kinds of reasons. OKRs become soft measures. “So that” justifications become tautologies. Labor specialization means that no developer pair, let alone a single person, can complete a Story. Team boundaries become so narrow they’re solving technical rather than business problems. And, you know what, it takes discipline to write requirements in this way.

Whatever the reason, when requirements no longer have fidelity to an outcome, management is back to measuring effort rather than results. And effort is a lousy proxy for results.

The next thing to go is engineering excellence. Agile management implicitly assumes excellence in engineering: in encapsulation, abstraction, simplicity, build, automated testing, and so forth. Once managers stop showing an active interest in engineering discipline, the symbiotic relationship between development and management is severed.

The erosion of engineering discipline is a function - directly or indirectly - of a lapse of management discipline. Whereas a highly-disciplined team decides where code should reside, an undisciplined team negotiates who has to do what work - or more accurately, which team doesn’t have to do what work. This is how architectures get compromised, code ends up in the wrong place, and abstraction layers create more complexity than simplicity.

The loss of engineering excellence is traumatic to management effectiveness. How something is built is a good indicator of the outcomes we can expect. Is the software brittle in production? Expensive to maintain? Does it take forever to get features released? Management has to reinforce expectations, verify that things are being built in the way they’re expecting them to be built, and make changes if they are not. When excellence in engineering is gone, management is no longer able to direct delivery; it is instead at the mercy of engineering.

The third thing to go is active, collaborative management. I’ve previously described what Agile management is and is not, so I’ll not repeat it here. The short version is, Agile management is a very active practice of advancing the understanding of the problem (and solution), protecting the team’s execution, and adjusting the team as a social system for maximum effectiveness. Now, management can check out and just be scorekeepers even when there is engineering excellence and results-based requirements. But suffice to say, when saddled with crap requirements and becoming a vassal to engineering, management is reduced to the role of passenger. There is no adaptability, no responding to change, beyond adding more tasks to the list as the team surfaces them. Management is reduced to administration.

Agile requires discipline. It also requires tenacity. If management is going to lead, it has to set the expectations for how requirements are defined and how software is created and accept nothing less.

Wednesday, May 31, 2023

Give us autonomy - but first you've gotta tell us what to do

The ultimate state for any team is self-determination: they lead their own discovery of work, self-prioritize that work, self-organize their roles and self-direct the delivery.

Self-determination requires meta awareness. The team knows the problem space - the motivations of different actors (buyers, users, influencers), the guiding policies (regulatory and commercial preference), the tech in place, and so forth. Conversely, they are not whipsawed by external forces. They do not operate at the whim of customers because they evolve their products across the body of need and opportunity. They do not operate at the mercy of regulation because they know the applicable regulation and how it applies to them. They know their integrated technology’s features and foibles and know what to code with and code around. They know where the bodies are buried in their own code. And the members of the team might not be friends or even friendly to one another, but they know that no member of the team will let them down.

In the early 1990s, a Chief Technology Officer I knew once replied to a member of his customer community during the Q&A at their annual tech meetup thusly: “there is nothing you can suggest that we’ve not already thought of.” Arrogance incarnate. But he and his entire team knew the product they were trying to build and the product they were not trying to build. They were comfortable with the tech trends they were latching onto and those they were not. They had not only customer intimacy but intimacy with prospective customers. They sold what they made; they did not make what they sold. Best of all, they’re still in business today, over 30 years later. They achieved a state of sustainable economic autonomy.

Freedom is most often associated with financial independence. There is a certain amount of truth to this. Financial independence means you can reach as far as “esteem” in Maslow’s hierarchy without doing much more than lavish spending. Unfortunately, money only buys autonomy for as long as the money lasts. History is littered with case studies where it did not last. Sustained economic performance - through business cycles and tech cycles - yields the cash flow that makes self-determination possible. That requires evolution and adaptability, and those are functions of meta awareness.

Which brings us to the software development team that insists on autonomy. The team wants the freedom to tell their paymasters how a problem or opportunity space is defined, what to prioritize, how to staff, how much funding they need, and when to expect solution delivery will begin. That’s a great way to work. And, for decades now, management consultants have advised devolving authority to the people closest to the need or opportunity. But that proximity is only as valuable as the team’s comprehension of the problem space, familiarity with the domain, experience with similar engineering challenges, and the ability to think abstractly and concretely. When a team lacks these things, devolving authority will simply yield a long and expensive path of discovery while the team acquires this knowledge. The less a priori knowledge the team has, the less structured and more haphazard the learning journey; so when the problem space is complex, this becomes a very long and very expensive discovery path indeed - and one that sometimes never actually succeeds.

Autonomy increasingly became the norm in tech as a result of the shortage of capable tech people, driven by the combination of cheap capital and COVID fueling tech investments. Under those conditions, a long learning journey was the price of admission; meta awareness was no longer a requirement for autonomy. With capital a lot more expensive today, tech spending has cooled and returns on tech investments are under much tighter scrutiny, and the longer the learning journey the less viable the tech investment. This is having the effect of exposing friction between how tech expects to operate and how business buyers expect it to operate. Business buyers financing tech investments want tighter business cases that define returns and provide controls for capital spend. Tech employees want the space to figure out the domain, increasing expense spend and lengthening the time (and therefore cost) to deliver. With capital now having the upper hand, it is not uncommon for tech to demand, dichotomously, both autonomy and to be told exactly what to do.

While autonomy is the ultimate state of evolution for any team, the prerequisites to achieve it are extraordinarily high. It doesn’t require omniscience, but it does require sufficient fluency with the tech and the domain to know the questions to ask, to make appropriate assumptions, to anticipate the likely risks, and to know the sensible defaults to make in design. Devolved authority is a fantastic way to work, but autonomy must be earned, never granted.

Sunday, April 30, 2023

Measured Response

Eighteen months ago, I wrote that there is a good case to be made that the tech cycle is more economically significant than the credit cycle. By way of example, customer-facing tech and corporate collaboration technology contributed far more to robust S&P 500 earnings during the pandemic than the Fed’s bond buying and money supply expansion. Having access to capital is great; it doesn’t do a bit of good unless it can be productively channeled.

Twelve months ago, I wrote a piece titled The Credit Cycle Strikes Back. This time last year, rising interest rates and inflation reminiscent of the 1970s cast a pall over the tech sector, most obviously with tech firms laying off tens of thousands. Arguably, it cast a pall over the tech cycle in its entirety, from households forced to consolidate their streaming service subscriptions to employers increasingly requiring their workforce to return to office. Winter had come to tech, courtesy of the credit cycle.

Silicon Valley Bank collapsed last month. The balance sheet, risk management, and regulatory reasons for its collapse are well documented. The Fed responded to SVB’s collapse by providing unprecedented liquidity in the form of 100% guarantees on money deposited at SVB. The headline rationales for unlimited deposit insurance - economic policy, political exigence - are also well documented elsewhere. Still, it is an economic event worth looking into.

An interesting aspect to the collapse of SVB is the role that social media played in the run on the bank. A recent paper presents prima facie evidence that the run on SVB was exacerbated by Twitter users. In a pre-social media era, SVB’s capital call to plug a risk management lapse may very well have been a business as usual event; that is, at least, what it appears SVB’s investment banking advisors anticipated. Instead, that capital call was a spark that ignited catastrophic capital flight.

If the link between Tweets and capital flight from SVB is real, the Fed’s decision looks less like a backstop for bank failures caused by poor risk management decisions, and more a pledge to contain the impact of a technology cycle phenomenon on the financial system. As the WSJ put it this week, “… Twitter’s role in the saga of Silicon Valley Bank reiterated that the dynamics of financial contagion have been forever changed by social media.” Most banks had paid attention to the fact that Treasurys had declined in value and took appropriate hedge positions to protect their core business of maturity transformation. Based on fundamentals it wasn’t immediately obvious there was a systemic crisis at hand. Yet the rapidity with which SVB had collapsed was unprecedented. The Fed’s response to that rapidity was equivalent to Mario Draghi’s “whatever it takes” moment.

Social media-fueled events aren’t new in the financial system; by way of example: meme stock inflation. And assuming SVB’s collapse truly was a social media phenomenon, the threat was still at human scale: even if those messengers had a more powerful megaphone than the newspaper reporter of yore observing a queue of people outside a bank branch, it was a message propagated, consumed and acted upon by humans. Thing is, the next (or more accurately, the next after the next) threat will be AI-driven, the modern equivalent to the program trading that contributed to Black Monday in 1987. Imagine a deepfake providing the spark fueling adjustments by like-minded algorithms spanning every asset class imaginable.

As tech has become an increasingly potent economic force, it represents a bigger and bigger challenge to the financial system. To wit: eventually there will be a machine scale threat to the financial system, and human regulators don’t have machine scale. As the saying goes, regulation exists to protect us from the last crisis - as in, regulations are codified well after the fact; the scale mismatch we’re likely to face implies a low tolerance for delay. The last line of defense is kill switches, and given the tightly coupled, interconnected, and digital nature of the modern financial system, orchestrating kill switches presents a machine scale problem itself. The Fed, the Department of the Treasury, the OCC, the FDIC, the European Central Bank, and all the rest need new tools.

Let’s hope they don't build HAL.

Friday, March 31, 2023

Competency Lost

The captive corporate IT department was a relatively early adopter of Agile management practices, largely out of desperation. Years of expensive overshoots, canceled projects, and poor quality solutions gave IT not just a bad reputation, but a confrontational relationship with its host business. The bet on Agile was successful and, within a few years, the IT organization had transformed itself into a strong, reliable partner: transparency into spend, visibility into delivery, high-quality software, value for money.

Somewhere along the way, the “products not projects” mantra took root and, seeing this as a logical evolution, the captive IT function decided to transform itself again. The applications on the tech estate were redefined as products, assigned delivery teams responsible for them with Product Owners in the pivotal position of defining requirements and setting priorities. Product Owners were recruited from the ranks of the existing Business Analysts and Project Managers. Less senior BAs became Product Managers, while those Project Managers who did not become part of the Product organization were either staffed outside of IT or coached out of the company. The Program Management Office was disbanded in favor of a Product Portfolio Management Office with a Chief Product Officer (reporting to the CIO) recruited from the business. Iterations were abandoned in favor of Kanban and continuous deployment. Delivery management was devolved, with teams given the freedom to choose their own product and requirements management practices and tools. With capital cheap and cashflows strong, there was little pressure for cost containment across the business, although there was a large appetite for experimentation and exploration.

As job titles with "Product" became increasingly popular, people with work experience in the role became attractive hires - and deep-pocketed companies were willing to pay up for that experience. The first wave of Product Owners and Managers were lured away within a couple of years. Their replacements weren't quite as capable: what they possessed in knowledge of the mechanical process of product management, they lacked in the fundamentals of Agile requirements definition. These new recruits also had an aversion to getting deeply intimate with the domain, preferring to work on "product strategy" rather than the details of product requirements. In practice, product teams were "long lived" in structure only, not in the institutional memory and capability that matter most.

It wasn't just the product team that suffered from depletion.

During the project management years of iterative delivery, every team delivered something every two weeks. In the product era, the assertion that "we deploy any time and all the time" masked the fact that little of substance ever got deployed. The logs indicated software was getting pushed, but more features remained toggled off than on. Products evolved, but only slowly.

Engineering discipline also waned. In the project management era, technical and functional quality were reported alongside burn-up charts. In the product regime, these all but disappeared. The assumption was that quality problems had been solved by Agile development practices, that quality was an internal concern of the team, and that it was primarily the responsibility of developers.

The hard-learned software delivery management practices simply evaporated. Backlog management, burn-up charts, financial (software investment) analysis and Agile governance practices had all been abandoned. Again, with money not being a limiting factor, research and learning were prioritized over financial returns.

There were other changes taking place. The host business had settled into a comfortable, slow-growth phase: provided it threw off enough cash flow to mollify investors, the executive team was under no real pressure. IT had moved from justifying every dollar of spend based on returns to being a provider of development capacity at an annual rate of spend. The definition of IT success had become self-referential: the number and frequency of product deployments and features developed, with occasional verbatim anecdotes that highlighted positive customer experiences. IT's self-directed OKRs were indicators of activity - increased engagement, less customer friction - but not rooted in business outcomes or business results.

The day came when an ambitious new President / COO won board approval to rationalize the family of legacy products into a single platform to fuel growth and squeeze out inefficiency. The board signed up provided the program stayed within a capital budget, was in market in less than 18 months, and fully retired the legacy products within 24 months, with bonuses indexed to every month of early delivery.

About a year in, it became clear delivery was well short of where it needed to be. Assurances that everything was on track were not backed up by facts. Lightweight analysis meant analysis work was borne by developers; lax engineering standards resulted in a codebase that required frequent, near-complete refactoring to respond to change; inconsistency in requirements management meant there was no way to measure progress, change in scope, or total spend versus results; self-defined measures of success meant teams narrowed the definition of "complete", prioritizing the Minimum at the expense of the Viable to meet a delivery date.

* * *

The sharp rise of interest rates has made capital scarce again. Capital intensive activities like IT are under increased scrutiny. There is less appetite for IT engaging in research and discovery and a much greater emphasis on spend efficiency, delivery consistency, operating transparency and economic outcomes.

The tech organization that was once purpose-built for these operating conditions may or may not be prepared to respond to these challenges again. The Agile practices geared for discovery and experimentation are not necessarily the Agile practices geared for consistency and financial management. Pursuing proficiency in new practices may also have come at the cost of proficiency in those previously mastered. Engineering excellence evaporates when it is deemed the exclusive purview of developers. Quality lapses when it is taken for granted. Delivery management skills disappear when tech's feet aren't held to the fire of cost, time and, above all, value. Domain knowledge disappears when it walks out the door; rebuilding it is next to impossible when requirements analysis skills are deprioritized or outright devalued.

The financial crisis of 2008 exposed a lot of companies as structurally misaligned for the new economic reality. As companies restructured in the wake of recession, so did their IT departments. Costly capital has tech in recession today. The longer this condition prevails, the more tech captives and tech companies will need to restructure to align to this new reality.

As most tech organizations have been down this path in recent memory, restructuring should be less of a challenge this time. In 2008, the tech playbook for the new reality was emerging and incomplete. The tech organization not only had to master unfamiliar fundamentals like continuous build, unit testing, cloud infrastructure and requirements expressed as Stories, but improvise to fill in the gaps the fundamentals of the time didn't cover - things like vendor management and large program management. Fifteen years on, tech finds itself in similar circumstances. Mastering the playbook this time round means regaining competency lost.