I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Monday, August 31, 2020

Legacy Modernization

I've worked with quite a few companies for which long-lived software assets remain critical to day-to-day operations, ranging from 20-year-old ERP systems to custom software products that first processed a transaction way back in the 1960s. In some cases, if only a few, these assets continue to be used because they still work very well, were thoughtfully designed, and have been well cared for over the years. The vast majority of the time, though, they continue to be used because the cost to replace them is prohibitive. Decades of poor architecture guidelines and lax developer discipline made the commercial-off-the-shelf components of an ERP inseparable from the custom code built around it. Decades of integrating upstream and downstream systems left point-to-point and often database-level integrations to legacy systems. Several turns through the business cycle starved those legacy applications of investment at critical junctures, and the passage of time has left few people with intimate knowledge of how they work. The assets are not current with business needs, re-acquiring the knowledge of how they work will take a long time, and leveling them up to meet current needs will take a fair bit of investment. Operations suffer the increased labor intensity needed to compensate for feature gaps and repetitive, systemic errors.

It may seem like common sense that such an asset must be replaced for the betterment of the business, but the economics don't necessarily justify doing so. It's cheaper to hire administrative labor to make up for systemic shortcomings than it is to replace all the features and functions of an asset that has been decades in development. The risk may be legitimate, but risk is a threat that merits spending on preparation and containment, not necessarily spending on elimination. Justifying a pricey legacy migration principally on an expectation that the risk will be realized is a huge career gamble, especially if the decision-maker has plausible deniability (e.g., annual independent audits that flag the exposure "high" but the probability "low") should the risk actually materialize. By and large, until the asset fails in spectacular fashion, there's no real motivation for a business to invest in a replacement.

Enter the "legacy modernization" initiative. The basic premise is that there are alternatives to traditional "lift and shift" strategies: retire legacy assets in piecemeal fashion. Several things stand out about these initiatives.

The first and most obvious is that they are long on how to start, but short on how to finish. The exercise of assessing, modeling and dispositioning the landscape does offer valuable new ways of looking at legacy assets. Collectively mapping an ERP, a CRM and a handful of custom applications to logical domains, as opposed to the traditional view of them as closed systems with territorial control over data, offers a different perspective on software capabilities. The suitability of an underlying asset's investment profile (sustain, evolve or upgrade) can also change when seen in the light of how that asset enables or impairs the evolution of those domains in response to business need. Code translators (for example, from COBOL to J2EE) can make legacy code accessible to a new generation of developers to aid with reverse-engineering. All useful stuff, but it's still just assessment work. A lot has to be true for a piecemeal strategy to be viable, and that isn't knowable until coding begins in anger.
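To make the assessment artifact concrete, here's a minimal sketch of what a domain-to-asset disposition map might look like. Every domain, asset and disposition below is invented for illustration; this is one possible shape for the output, not a prescribed method.

```java
import java.util.List;

// Toy model of an assessment artifact: business capabilities mapped to the
// logical domain they belong to, the asset that implements them today, and
// the investment disposition assigned during assessment.
enum Disposition { SUSTAIN, EVOLVE, UPGRADE }

record Capability(String domain, String owningAsset, Disposition disposition) {}

final class LandscapeMap {
    static final List<Capability> CAPABILITIES = List.of(
        new Capability("Order Management", "ERP (COTS plus custom code)", Disposition.EVOLVE),
        new Capability("Customer Profile", "CRM",                         Disposition.SUSTAIN),
        new Capability("Billing",          "COBOL batch suite",           Disposition.UPGRADE)
    );
}
```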

The one (and as near as I can tell, only) alternative pattern to "lift and shift" is strangulation in one form or another. Or perhaps more accurately, asset capture through strangulation. The preferred form is the gradual retirement of code through encapsulation, interception and substitution. When it is viable, it is pretty straightforward: isolate key functionality, decouple it from things like direct database calls, wrap it in an API, create a battery of automated tests around it, have old and new systems invoke the API, rewrite the functionality in a modern architecture using a modern language, and redirect the API from the old code to the new once ready. To truly retire the legacy code incrementally, however, requires that the legacy asset itself be both decomposable and (at least to some extent) recomposable. The integration of orchestration logic with functional logic common in legacy code makes decomposition very difficult. Tight coupling of code to data (think COBOL programs to VSAM files, or ABAP programs to customized tables in SAP) makes code difficult to recompose into new structures. Plus, both decomposition and recomposition require people fluent in the legacy code, with the skills to re-engineer it to do things such as redirect to an abstraction layer that enables functionality to be migrated, and with the confidence to do so without causing any damage. Of course, this can be side-stepped, at least to a degree, by building agnostic user interfaces that invoke APIs layered over robotic process automation, but the lack of malleability of the underlying code will preclude eclipsing a legacy asset in a finely-grained and highly tunable manner. It can only be strangled in a more coarsely-grained manner. By way of example, a credit-card processor supporting both house-branded cards and white-labeled third-party cards that wants to replace its legacy transaction processing assets might be able to migrate volume brand-by-brand. This is preferable to migrating all-or-nothing, but it lacks the granularity of migrating function-by-function within the asset itself. Excluding any card-specific custom capabilities, 100% of the legacy codebase will remain in production until all transactions are redirected toward the replacement and all in-flight transactions are fully processed.
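To illustrate the encapsulate-intercept-substitute mechanics with the card processor example, here is a minimal sketch in Java. All names (PaymentAuthorization, StranglerFacade, AuthRequest and so on) are invented; the point is the routing seam that lets traffic move brand-by-brand while both implementations stay live.

```java
import java.util.function.Predicate;

// Illustrative types only; a real system would carry full transaction detail.
record AuthRequest(String cardBrand, long amountCents) {}
record AuthResult(boolean approved, String reason) {}

// The API that both old and new callers invoke once the legacy
// functionality has been isolated and wrapped.
interface PaymentAuthorization {
    AuthResult authorize(AuthRequest request);
}

// Interception point for the strangler: routes each request to either the
// wrapped legacy code or its modern replacement. Widening the predicate
// gradually migrates volume - e.g., one card brand at a time - until the
// legacy path receives no traffic and can be retired.
final class StranglerFacade implements PaymentAuthorization {
    private final PaymentAuthorization legacyAdapter;  // wraps the COBOL path
    private final PaymentAuthorization modernService;  // the rewrite
    private final Predicate<AuthRequest> migrated;     // which traffic has moved

    StranglerFacade(PaymentAuthorization legacyAdapter,
                    PaymentAuthorization modernService,
                    Predicate<AuthRequest> migrated) {
        this.legacyAdapter = legacyAdapter;
        this.modernService = modernService;
        this.migrated = migrated;
    }

    @Override
    public AuthResult authorize(AuthRequest request) {
        return migrated.test(request)
                ? modernService.authorize(request)
                : legacyAdapter.authorize(request);
    }
}
```

Note what the sketch takes for granted: a battery of tests proving the two paths behave identically, and a legacy asset pliable enough to be called through the adapter in the first place. Those are exactly the conditions the paragraph above says are often missing.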

Something often overlooked is that the tight coupling of software assets mirrors the tight coupling of business functions. A company with a legacy procurement system integrated to a legacy ERP may find that it can modernize purchase orders but not purchase order receiving, because receiving is tightly coupled to inventory, accounts payable, manufacturing and supply chain management. It is one thing to strangle purchase order functionality or migrate purchase order volume from legacy to modern systems; it is entirely another to do so with core accounting functions that are, by their nature, tightly coupled capabilities, regardless of the fact that those capabilities are the purview of different departments or functions. Integrated finance really does have its benefits.
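A sketch of why receiving resists being carved out on its own (again, every name here is invented): a single goods receipt fans out into postings across several tightly coupled functions, and each posting becomes a cross-system transaction or reconciliation to get right if receiving moves to a new system while the rest stays put.

```java
// Illustrative only: one receipt touches inventory, payables and planning
// in a single unit of work.
interface InventoryLedger { void increase(String sku, int quantity); }
interface AccountsPayable { void accrue(String vendor, long amountCents); }
interface SupplyPlanner   { void notifyReceipt(String sku, int quantity); }

record PurchaseOrderLine(String sku, String vendor, long unitCostCents) {}

final class ReceivingService {
    private final InventoryLedger inventory;
    private final AccountsPayable payables;
    private final SupplyPlanner planner;

    ReceivingService(InventoryLedger inventory, AccountsPayable payables,
                     SupplyPlanner planner) {
        this.inventory = inventory;
        this.payables = payables;
        this.planner = planner;
    }

    void receive(PurchaseOrderLine line, int quantity) {
        inventory.increase(line.sku(), quantity);                        // stock on hand
        payables.accrue(line.vendor(), line.unitCostCents() * quantity); // accrual for the invoice to come
        planner.notifyReceipt(line.sku(), quantity);                     // downstream replanning
    }
}
```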

Another challenge to modernization is finding relevant reference cases. The conditions surrounding one company's legacy assets - sophistication and quirks of the business model, erosion of asset and/or business process fluency, complications from prior modernizations that started but stalled, complications from acquisitions, complications from regulatory restrictions, and on and on - are not identical to those surrounding another. Finding a success story is great, but the applicability of any methodology is a function of the similarity of conditions. As the great IT philosopher L. Tolstoy wrote many decades ago, "all IT organizations are dysfunctional in their own way." There isn't a legacy migration playbook. At best, there are individual plays, or more likely things to be learned from individual plays.

Perhaps the biggest challenge is that the value proposition for legacy modernization is thin. A quick survey of consulting firms pitching legacy modernization services reveals the primary benefits to be things like "reduce IT complexity and costs", "improve flexibility and collaboration", and "increase data consistency". These are woolly statements, difficult to quantify, and not necessarily true. Replacing the long-since-paid-for on-prem mainframe with IaaS will bulge the expense line of the income statement, as accountants treat IaaS as an annually-funded subscription, not an asset that can be bought, paid for and capitalized. While there may indeed be a step-by-step path to legacy modernization, the cost of not completing the journey is additional layers of code to maintain, redundant production systems and infrastructure, and additional layers of transaction reconciliation. Reducing IT costs requires legacy system retirement. Retirement requires feature parity across core cases and edge cases, and a complete migration of volume. While any individual step of modernization may be inexpensive, the enterprise must still sign up for the entire journey if the justification is to "reduce IT complexity and costs".
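To put hypothetical numbers on the accounting point (the figures are invented for illustration): a mainframe capitalized years ago has already worked its way off the income statement, while an equivalent IaaS footprint never does.

\[
\underbrace{\frac{\$5\text{M purchase}}{10\text{-year depreciation}} = \$0.5\text{M/yr, then } \$0}_{\text{capitalized on-prem asset}}
\qquad \text{vs.} \qquad
\underbrace{\$1.2\text{M/yr, indefinitely}}_{\text{IaaS subscription (opex)}}
\]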

This makes the value case that much more important. Legacy modernization performed in pursuit of digital modernization - that is, in pursuit of changing the fundamental business model - can be a path to that value. My colleague Ranbir Chawla pointed out to me a couple of years ago that a company we were working with was steadfast in espousing technology paradigms long since abandoned by the vendors who sold them. Those paradigms - even technological paradigms - are blinders that constrain people's understanding of how their company transacts business. Re-imagine the business - goodness, it doesn't even have to be so ethereal, just look at what competitors are doing today - and it is possible to expose the opportunities those legacy paradigms self-select you out of pursuing.

However, recasting "legacy modernization" as "digital modernization" is no easy task. It's one thing to redefine an operation from linear batch processes performed on an on-premise mainframe to parallel event queues in an elastic cloud. There is bound to be lift in capacity, efficiency, and recoverable revenue from the architectural change, and this is easily projected against current state. Unfortunately, the benefits of digital modernization are harder to prove. For example, it sounds great that the company could pursue adjacent markets by exposing APIs to those event queues and selling subscriptions with variable pricing to third-party providers of ancillary services. There may even be a large addressable market of such providers. But potential is valueless without a plausible path to conversion. Because it is highly resource- and labor-intensive to get hard evidence on the nature of those adjacent markets given the current state of the business, those opportunities are just conjecture. A CFO being asked to approve multi-million-dollar spend to modernize legacy assets is not a CFO who will do so based on somebody's conjecture.

Still, this does not negate the role that digital modernization has to play in making the case for legacy modernization. If the legacy modernization provides just enough lift to make a case on its own merits - and absent a bloated cost base or significantly eroded revenue that can be won back quickly, that is the best-case scenario - it is a cost-neutral to marginally cost-positive investment that opens the door to digital modernization and the multitude of potential benefits that lie beyond. Restated: it doesn't hurt the business to invest in modernization, and the odds are that it will be better off for it.

Most corporate IT departments would prefer not to start from the position they're currently in, so "legacy modernization" will find a willing audience. Unfortunately, there are no modern technological silver bullets that make legacy modernization any less onerous than it was the last 10 times IT proposed doing so. Not to mention, the financial analyses that dictate how capital is allocated in legacy firms are geared heavily toward the "certain" and not at all toward the "speculative". What a legacy modernization initiative needs is a legitimate path to paying for itself. Provided that there is one, ceteris paribus, spending $100 today for $102.76 of value in two years leaves the firm no better off, but also no worse off, than had it spent $0. But no company operates in a static world, and every executive knows as much today. That means it's a pretty safe bet for the CFO that $100 on a cost-neutral legacy modernization is a free call option on a lot of upside gain, if not on the outright survival of the business itself.
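For what it's worth, one reading of that arithmetic - assuming the intent is that the investment nets to zero at the firm's discount rate - is:

\[
\frac{\$102.76}{(1+r)^2} = \$100 \quad\Longrightarrow\quad r = \sqrt{1.0276} - 1 \approx 1.37\%\ \text{per year}
\]

At any discount rate below roughly 1.37%, the modernization is NPV-positive before counting any option value; above it, the call option has to carry the case on its own.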