I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

Sunday, July 31, 2022

Shadows

One of the benefits of being an agile organization is the elimination of IT shadows: the functions and activities that crop up in response to the inadequacy of the plans, competence, and capacity of captive IT.

IT shadows appear in many forms. There are shadow IT teams of developers or data engineers that spring up in areas like operations or marketing because the captive IT function is slow to respond to internal customer demand, if not outright incapable of doing so. There are also shadow activities within large software delivery programs: the phases that get added long after delivery starts and well before code enters production, because integrating the code produced by dependent teams working independently proves far more problematic than anticipated; the extended testing phases - or more accurately, testing phases that extend far longer than anticipated - because of poor functional and technical quality that goes undiscovered during development; the scope taken out of the 1.0 release, resulting in additional (and originally unplanned) releases to deliver the initially promised scope - releases that only offer the promise to deliver in the future what was promised in the past, at the cost of innovation in the present.

None of these functions and activities are planned and accounted for before the fact; they manifest themselves as expense bloat on the income statement as no-alternative, business-as-usual management decisions.

The historical response of captive IT to these problems was to pursue greater control: double down on big up-front design to better anticipate what might go wrong so as to prevent problems from emerging in the first place, supplemented with oppressive QA regimes to contain the problems if they did. Unfortunately, all the planning in the world can’t compensate for poor inter-team collaboration, just as all the testing in the world can’t inspect quality into the product.

Agile practices addressed these failures through teams able to solve for end-to-end user needs. The results, as measured and reported by Standish, Forrester, and others, were as consistent as they were compelling: Agile development resulted in far fewer delays, cost overruns, quality problems, and investment cancellations than its waterfall counterpart. With enough success stories and experienced practitioners to go round, it’s no surprise that so many captive IT functions embraced Agile.

But scale posed a challenge. The Agile practices that worked so well in small to midsize programs needed to support very large programs and large enterprise functions. How scale is addressed makes a critical distinction between the truly agile and those that are just trying to be Agile.

Many in the agile community solved for scale by applying the implicit agile value system, incorporating things like autonomous organizations (devolved authority), platforms (extending the product concept into internally-facing product capabilities) and weak ownership of code (removing barriers of code ownership). Unfortunately, all too many went down the path of fusing Agile with Waterfall, assimilating constructive Agile practices like unit testing and continuous build while simultaneously corrupting other practices like Stories (which become technical tasks under another name) and Iterations (which become increments of delivery, not iterations of evolving capability), ultimately subordinating everything under an oppressive regime pursuing adherence to a plan. Yes, oppressive: there are all too many self-proclaimed "Agile product organizations" where the communication flows point in one direction - left to right. These structures don’t just ignore feedback loops, they are designed to repress feedback.

To anyone who has ever worked in or even just advocated for an agile organization, this compromise is unconscionable, as agile is fundamentally the pursuit of excellence - in engineering, analysis, quality, and management. Once Agile is hybridized into waterfall, the expectation for Agile isn’t excellence in engineering and management and the like; it is instead a means of increasing the allegiance of both manager and engineer to the plan. Iteration plans become commitments; unit tests become guarantees of quality.

Thus compromised, the outcomes are largely the same as they ever were: shadow activities and functions sprout up to compensate for IT’s shortcomings. The captive IT might be Agile, but it isn’t agile, as evidenced by the length of the shadows they cast throughout the organization.

Thursday, June 30, 2022

The New New New Normal

My blogs in recent months have focused on macroeconomic factors affecting tech, primarily inflation and interest rates and the things driving them: increased labor power, supply shortages, expansion of M2, and unabated demand. The gist of my arguments has been that although the long-term trend still favors tech (tech can reduce energy intensity as a hedge against energy inflation, and reduce labor intensity as a hedge against labor inflation, and so forth), there is no compelling investment thesis at this time, because we’re in a state of both global and local socio-economic transition and there is simply too much uncertainty. Five-year return horizons are academic exercises in wishful thinking. Do you know any business leader who, five years ago, predicted with any degree of accuracy the economic conditions we face today and the conditions we experienced on the way to where we are today?

It is interesting how the nature of expected lasting economic change has itself changed in the last 2+ years.

A little over two years ago, there was the initial COVID-induced shock: what does a global pandemic mean to market economies? That was answered quickly, as the wild frenzy of adaptation made clear that supply in most parts of the economy would find a way to adapt, and demand wasn’t abating. Tech especially benefited as it was the enabler of this adaptation. Valuations ran wild as demand and supply quickly recovered from their initial seizures. Tech investments quickly became clear-cut winners.

As events of the pandemic unfolded, the question then became, "how will economies be permanently changed as a result of changes in business, consumer, labor, capital and government behavior?" The longer COVID policies remained in place, the more permanent the adaptations in response to them would become. For example, why live in geographic proximity to a career when one can pursue a career while living in geographic proximity to higher quality of life? Many asked this and similar questions, but not all did; among those that did, not all answered in the same way. This created an inevitable friction in the workforce. Not a year into the pandemic and the battle lines over labor policies were already being drawn between those with an economic interest in the status quo ante calling for a return to office (e.g., large banks) and those looking to benefit from improved access to labor and lower cost base embracing a permanent state of location independence (e.g., AirBNB). Similar fault lines appeared in all sorts of economic activity: how people shop (brick-and-mortar versus online), how people consume first-run entertainment (theaters versus streaming), how people vacation, and on and on. Tech stood to benefit from both lasting pandemic-initiated change (as the enabler of the new) and the friction between the new and reversion to pre-pandemic norms (as the enabler of compromise - that is, hybrid - solutions). Tech investments again were winners, even if the landscape was a bit more polarized and muddled.

Just as the battles to define the soon-to-be-post-COVID normal were gearing up for consumers and businesses and investors, they were eclipsed by more significant changes that make economic calculus impossible.

First, inflation is running amok in the US for the first time in decades. While tame by historic US and global standards, it is high relative to the low inflation US voters have become accustomed to, and high inflation creates political impetus to respond. Policy responses to inflation have not historically been benign: by way of example, the US only brought runaway 1970s inflation (in fact, stagflation - high unemployment and high inflation) under control with a hard economic landing in the form of a series of recessions in the late 1970s and early 1980s. With the most recent interest rate hike, recession expectations have increased among economists and business leaders. Mild or severe is beside the point: twelve months ago, while much of the economy recovered and some sectors even prospered, recession was not seen as a near-term threat. It is now. Go-go tech companies have particularly felt the brunt of this, as their investors’ mantra has done an abrupt volte-face from "grow" to "conserve cash". Tech went from unquestioned winner to loser on the merits of policy responses to inflation alone.

Second, war is raging in Europe, and that war has global economic consequences. Both Ukraine and Russia are major exporters of raw materials such as agricultural products and energy. A number of nations across the globe have prospered in no small part because of their ability to import cheap energy and cheap food, allowing them to concentrate on developing export industries in expensive engineering services and expensive manufactured products. Those nations have also had the luxury of time to chart a public policy course for evolving their economies toward things like renewable energy sources without disrupting major sectors of the population with things like unemployment, while domestic social policy has benefited from a "peace dividend" of needing to spend only minimally on defense. The prosperity of many of those countries is now under threat as war forces a re-sourcing of food and energy suppliers and threatens deprioritization of social policies. Worse still, input cost changes threaten the competitiveness of their industrial champions, particularly vis-a-vis companies in nations that can continue to do business with an aggressor state in Europe. The bottom line is, the economic parameters that we’ve taken for granted for decades can no longer factor into return-on-investment models. Tech as an optimizer and enabler of a better future is of secondary importance when countries are scrambling to make sure there are abundant, cheap resources for people and production.

Tech went from darling to dreadful rather quickly.

It’s worth bearing in mind that these recent macro pressures could abate, quite suddenly. Recovery from a real economy recession tends to be far faster than recovery from a recession in the financial economy. Such a recovery - notwithstanding the possibility of secular stagnation - would bring the economic conversation back to growth in short order. Additionally, regardless of the outcome, should the war in Europe end abruptly, realpolitik dictates a return to business-as-usual, which would mean a quick rehabilitation of Russia from pariah state to global citizen among Western nations. However, the longer these macro conditions last, the more they fog the investment horizon for any business.

Which brings us back to the investing challenge that we have today. In the current environment, an investment in tech is not a bet on how well it will perform under a relatively stable set of parameters such as pursuing stable growth or reducing costs relative to stable demand. A tech investment today is a bet on how well an investment’s means (the mechanisms of delivering that investment) and ends (the outcomes it will achieve) accurately anticipate the state of the world during its delivery and its operation. That’s not simple when so many things are in flux. We’re on our third “new normal” in two years. There is no reason to think a stable new normal is in the offing any time soon.

Tuesday, May 31, 2022

The Credit Cycle Strikes Back

A few months ago, I wrote that the capital cycle has become less important than the tech cycle. I’d first come across this argument in a WSJ article in 2014, and, having lived through too many credit cycles, it took me some time to warm up to it. The COVID-19 pandemic laid this bare: all the cheap capital in the world provided by the Fed would have done nothing if there wasn’t a means of conducting trade. Long before the pandemic, tech had already made it possible to conduct trade.

Capital has flexed its muscles in recent months, and the results aren’t pretty. The Fed has raised interest rates and made clear its intention to continue to increase them to rein in inflation. The results are what you’d expect: risk capital has retreated and asset values have fallen. Tech, in particular, has taken a beating. Rising inflation was limiting household spending on things like streaming services, abruptly ending their growth stories. Tech-fueled assets like crypto have cratered. Many tech firms are being advised to do an immediate volte-face from “spend in pursuit of growth” to “conserve cash.”

But this doesn’t necessarily mean the credit cycle has re-established superiority over the tech cycle.

Capital is still cheap by historical standards. In real terms, interest rates are still negative for 5- and 10-year horizons. Rates are less negative than they were a year ago, but they’re still negative. Compare that to the relatively robust period of 2005, when real interest rate curves were positive. Less cheap isn’t the same as expensive. Plus, it’s worth pointing out that corporate balance sheets remain flush with cash.
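The arithmetic behind "still negative in real terms" is just the Fisher approximation: real rate ≈ nominal rate minus expected inflation. A minimal sketch, using invented illustrative yields rather than actual Treasury or CPI figures:

```python
# Fisher approximation: real rate ~= nominal rate - expected inflation.
# All figures below are illustrative assumptions, not market data.

def real_rate(nominal_pct: float, expected_inflation_pct: float) -> float:
    """Approximate real interest rate, in percentage points."""
    return nominal_pct - expected_inflation_pct

# Hypothetical 5- and 10-year nominal yields against expected inflation:
five_year = real_rate(nominal_pct=3.0, expected_inflation_pct=5.0)
ten_year = real_rate(nominal_pct=3.25, expected_inflation_pct=4.0)

# Both horizons come out negative: in real terms, lenders are paying
# borrowers - which is why "less cheap isn't the same as expensive."
assert five_year < 0 and ten_year < 0
```

Rising nominal rates make these numbers less negative, but until they cross zero in real terms, capital remains cheap by any historical measure.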

Any credit contraction puts the most fringe (i.e., highest-risk) investments at greater risk. A business that subsidizes every purchase of its product or service is by definition operationally cash flow negative. Cheap capital made it economically viable for such a company to try to create or buy a market until it could find new sources of subsidy (i.e., advertisers) or exercise pricing power (start charging for use). If that moment didn’t arrive before credit tightening began, well, time’s up. The same applies to asset classes like crypto: when credit tightens, it’s risk-off as investors seek safer havens.
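The cash-flow logic of the subsidized-purchase business can be made concrete with a toy example (all numbers invented): the wider the subsidy, the faster growth burns cash.

```python
# Toy unit economics for a business that subsidizes every purchase.
# Prices and volumes are hypothetical.

price_to_customer = 8.0   # what the customer pays per unit
cost_to_deliver = 10.0    # what it costs to serve that unit
subsidy_per_unit = cost_to_deliver - price_to_customer  # 2.0 per unit

def operating_cash_flow(units_sold: int) -> float:
    """Each unit sold loses the subsidy; growth only burns cash faster."""
    return -subsidy_per_unit * units_sold

assert operating_cash_flow(1_000_000) == -2_000_000
# Doubling volume doubles the burn - growth deepens the hole
# until pricing power or a new subsidizer arrives.
assert operating_cash_flow(2_000_000) < operating_cash_flow(1_000_000)
```

Cheap capital fills that hole indefinitely; expensive capital sets a clock on it.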

The risk to the tech cycle is, how far will the Fed push up interest rates to combat inflation?

Supply chains are still constrained and labor markets are still tight. Demand is outstripping supply, and that’s driving up the prices of what is available. Raising rates is a tool for reducing demand, specifically reducing credit-based purchases. Higher interest rates won’t put more products on the shelves or more candidates in the labor pool. If demand doesn’t abate - mind you, this is still an economy coming out of its pandemic-level limitations - inflationary pressures will continue, and the Fed has made clear they’ll keep increasing rates until inflation cools off. With other shocks lurking - a war in Europe, the threat of food shortages, the threat of rolling electricity blackouts - inflation could remain at elevated levels while capital becomes increasingly expensive. Of course, sustained elevated interest rates would have negative consequences for bond markets, real estate, durable goods, and so on. The higher the rates and the longer they last, the harder the economic landing.

That said, tech is the driver of labor productivity, product reach and distribution, and a key source of corporate innovation. The credit cycle would have to reach Greenspan-era interest rates before there would be a material impact on the tech cycle. And even then, it’s worth remembering that the personal computer revolution took root during a period of high interest rates. Labor productivity improvement was so great compared to the hardware and software costs that interest rates had no discernible effect.

The credit cycle is certainly making itself felt in a big way. But it’s more accurate to say for now that capital sneezed and tech caught a cold.

Saturday, April 30, 2022

Has Labor Peaked?

I wrote some time ago that labor is enjoying a moment. New working habits developed out of need during the pandemic that in many ways increased quality of life for knowledge workers. Meanwhile, an expansion of job openings and a contraction in the labor participation rate created a supply-demand imbalance that favored labor.

There appears to be confusion of late as to how to read labor market dynamics. With fresh unionization wins and increased corporate commitment to location-independent working, is labor power increasing? Or with a declining economy and more people returning to the workplace (as evidenced by increases in the labor participation rate), is labor power near its peak?

The question, has labor peaked?, intimates a reversion to the mean, specifically that labor power will revert to where it was pre-pandemic (i.e., “workers won’t continue to enjoy so much bargaining power.”) The argument goes that fewer people have left the workforce than have quit jobs for better ones; that hiring rates have increased along with exits; that the labor participation rate has ticked up slightly; that labor productivity has increased (thus lessening the need for labor); and that demand is cooling (per Q1 GDP numbers). Toss in 1970s-sized inflation compelling retirees to return to the workforce and there’s an argument to be made that labor’s advantages will be short-lived.

But this argument is purely economic, focusing on scarcity in the labor market that has created wage pressure. For one thing, it ignores potential structural economic changes yet to play out, such as the decoupling of supply chains in the wake of new geopolitical realities. For another, it ignores real structural changes in the labor market itself, things like labor demographics (migrations from high-tax to low-tax states), increased workplace control by the individual laborer (less direct supervision when working from home), and improvements in work/life balance.

The question, has labor peaked?, becomes relevant only when there is an outright contraction in the job market. For now, the better question to ask is: how durable are the changes in the relationship between employers and employees? It isn’t so much that labor has the upper hand as that labor has more negotiating levers than it did just a few years ago. The fact that there hasn’t been a mad rush back to pre-pandemic labor patterns suggests employers are responding to structural changes in labor market dynamics.

Trying to call a peak in labor power is an exercise wide of the mark. And for now, the more important question - how durable these changes are - still seems some way off from being settled.

Thursday, March 31, 2022

Crowding Out

Tech has had a pretty easy ride for the last twenty years. It only took a couple of years for tech to recover from the 2001 dot-com crash. In the wake of the 2008 financial crisis, companies contracted their labor forces and locked in productivity gains with new tech. Mobile went big in 2009, forcing companies to invest. Then came data and AI, followed by cloud, followed by more data and AI. The rising tide has lifted a lot of tech boats, from infrastructure to SaaS to service providers.

The ride could get a little bumpy. Five forces have emerged that threaten to change the corporate investment profile in tech.

  1. Labor power: workers have power like they've not had since before the striking air traffic controllers were fired in the early 80s, from unions winning COLAs in their labor agreements to the number of people leaving their jobs and, in many cases, leaving the workforce entirely. Labor is getting expensive.
  2. Interest rates: debt that rolls over will pay out a few more basis points in interest than the debt it replaces. Debt finance will become more expensive.
  3. Energy inflation: energy prices collapsed before the pandemic, only for supply to contract as energy consumption declined with the pandemic. It takes longer for production to resume than it does to shut it off. True, energy is less a factor on most company income statements than it was fifty years ago, but logistics and distribution firms - the companies that get raw materials to producers and physical products to markets - will feel the pinch.
  4. Supply chain problems: still with us, and not going away any time soon. Sanctions against Russia and deteriorating relations with China will at best add to the uncertainty, at worst create more substantive disruption. By way of example, the nickel market has had a rough ride. And there has been increasing speculation in the WSJ and FT of food shortages in parts of the world. As companies stockpile (inventory management priorities have shifted from “just in time” to “just in case”) and reshore supply chains, supply chain costs will rise. Supply will continue to be inconsistent at best, inflationary at worst.
  5. Increase in M2: adding fuel to all of these is a rise in M2 money supply. More money chasing fewer items drives up prices.

All of these except for interest rate rises have been with us for months, and we’ve lived with supply chain problems for well over a year now. These factors haven’t had much of an impact on corporate investment so far, largely because companies have successfully passed rising costs onto their customers. Even if real net income has contracted, nominal has not, so buybacks and dividends haven’t been crowded out by rising expenses.

But the economy remains in transition. Many companies are starting to see revenues fall from their pandemic highs. And while rising interest rates may cool corporate spending, demand has to cool a great deal to temper a labor market defined more by an absence of workers than an abundance of jobs. With real wages showing negative growth again, it will become more difficult for companies to pass along rising costs. Rising resistance to price increases will, in turn, put pressure on corporate income statements.

Of course, this could be the best opportunity for a company to invest in structural change to reduce labor and energy intensity, as well as to invest for greater vertical integration to have more control over upstream supply, with an eye toward ultimately changing its capital mix to favor equity over debt once that transformation is complete. That’s a big commitment to make in a period of uncertainty. Whereas COVID presented a do-or-die proposition to many companies, there is no cut-and-dried transformational investment thesis in this environment.

Monday, February 28, 2022

Shortage

Silicon chips are in short supply, ports are congested, and as a result new cars are expensive. The shortage of new cars has more people buying used, and as a result used cars are fetching ridiculously high prices as well. The same phenomenon of supply shortages and logistics bottlenecks has been playing out across many basic-materials, manufacturing, and agricultural industries for months now.

At the same time, we have M2 money supply like we’ve never seen. All that cash is pursuing comparatively few investment opportunities, bidding them up. Excess liquidity seeking returns has inflated assets from designer watches to corporate equity.

Supply shortages twinned with excess capital have created inflation like we’ve not seen in nearly 40 years.

Included among the supply shortages is labor. The headline numbers in the labor market have been the number of people leaving the workforce and the labor participation rate: fewer people of eligible age are working than before the pandemic, and many have simply checked out of the labor market forever, electing to live off savings rather than income. This means those who are working can command higher wages. In the absence of productivity gains, higher wages contribute to the inflationary cycle, because producers have to pass the costs onto consumers. Inflationary cycles can be difficult to stop once they start.

But labor market tightness can do something else: it can be the genesis of innovation. When a business cannot source the labor it needs to operate, it innovates in operations to reduce labor intensity. By way of example, businesses contracted their labor forces (including the ranks of their core knowledge workers) in the wake of the 2008 financial crisis. While this reduced corporate labor spend, it put remaining workers under strain. Soon after the reductions-in-force, companies invested in technology to lock in productivity gains of that reduced force. Capitalizing those tech assets reduced their impact on the income statement while those investments were being made. Once recovery began and revenues rose, that tech kept costs contained, resulting in better cash flow from operations after the financial crisis than before.
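The income statement mechanics of "capitalizing those tech assets" can be sketched with toy numbers (the amounts and useful life below are invented for illustration): expensing hits earnings all at once, while capitalizing spreads the hit over the asset's depreciation schedule.

```python
# Toy comparison: expensing $10M of tech spend in year one versus
# capitalizing it and depreciating straight-line over five years.
# Amounts and useful life are illustrative assumptions.

investment = 10_000_000
useful_life_years = 5

expensed_hit_year1 = investment  # full hit to earnings immediately
capitalized_hit_per_year = investment / useful_life_years  # spread out

assert capitalized_hit_per_year == 2_000_000
# The earnings impact while investing is a fifth of the expensed case;
# the total impact is the same over the asset's life, just deferred.
assert capitalized_hit_per_year * useful_life_years == expensed_hit_year1
```

That deferral is why companies could invest heavily in productivity tech during the post-2008 recovery without gutting reported earnings while the investments were being made.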

We are potentially in an inverse of the same labor dynamics. Whereas in 2008 the corporate innovation cycle was driven by corporate downsizing of the labor force, today it is driven by the labor market downsizing itself. And just as in 2008, when it was a sectoral problem (finance had an abundance of labor, while tech did not), it is sectoral again today.

Among the labor markets suffering a supply shortage is K-12 education. Education has become a less attractive occupation since the pandemic. A highly educated cohort disgruntled with work is an attractive recruiting pool for all kinds of employers.

The exodus of people from the teaching profession has created a shortage of teachers. The K-12 operating model is based on physical classroom attendance of teacher and student at increasingly high leverage ratios - 20, 30, 35 students to one teacher. This model becomes vulnerable with a scarcity of teachers. Classroom dynamics - not to mention physical facilities - don’t scale beyond 35 or 40 K-12 students in a single classroom. If there are fewer people willing to teach in the traditional paradigm, then the teaching profession will be under pressure to change its paradigm in one way or another.

I’ve written before that technology is generally not a disruptive agent. Technology that is present when socioeconomic change is happening is simply in the right place at the right time. Where there are acute labor shortages today - public safety, education, restaurant dining - the socioeconomic change is certainly afoot. What isn’t obvious is whether the right tech is present to capitalize on it.

Monday, January 31, 2022

How City Hall Can Fight City Hall

I live in a rural area. There isn’t a whole lot of agriculture or heavy industry, but there are a lot of inland lakes and national and state forest acreage. No surprise that one of the principal industries here is tourism. It’s a year-round industry as the area supports fishing and hunting, silent and motorized water- and winter-sports, youth summer camps and RV parks. A great many of the local businesses cater to tourists, from bait shops to bars, resorts to equipment rental, boat docks and off-season boat storage.

Like any community, there is tension. One source of tension has to do with how the land is used. There are those who advocate for more motorized activities (e.g., open more roads to ATVs / UTVs) and those who advocate for less (e.g., more no-wake zones on lakes). To some extent, the motorized vs. non-motorized debate is a proxy fight for the tourism industry. It is believed that opening more roads for ATV usage will bring more people into town centers where they’ll spend money, at the cost of noise pollution. Similarly, it is believed that creating more no-wake zones will reduce shoreline erosion - a benefit to homeowner and habitat alike - at the cost of vacationer experience. The extent to which the tourist is accommodated is, like any economic issue, very complex: the year-round resident who depends directly or indirectly on tourism has different goals from the year-round resident who does not, from the non-residents with a second home here, and from the tourists who visit for a myriad of reasons. While an economic phenomenon, it is inherently political, and there are no easy answers.

Unsurprisingly, some flashpoints have emerged. One, specifically regarding land usage, has to do with income properties. From roughly 2008 until 2019 or so, real estate in this area was inexpensive, a long-lived aftereffect of the 2008 financial crisis (fewer buyers) as well as changes in where and how people vacationed (fewer tourists). COVID-19 changed this. With international borders closed and vacation options limited, people vacationed where they had once spent their summers. Some stayed permanently. The property market went from stone cold to red hot in a matter of months as people gobbled up properties as first and second homes as a means of social distancing while working or vacationing.

COVID-19 also put a premium on rental properties. This created an acute supply shortage. Low interest rates and cash accumulated by households made for a lot of willing property investors. Quite a few bought properties, hired tradespeople to fix them up (or fixed them up themselves), and listed them on vacation rental property sites.

The trouble is, while the properties may have been improved, many didn’t get the building inspections required for an income property, nor the permits required to rent out the properties.

The zoning commission for at least one county here is treating this as a compliance problem, which of course it is. They’ve done an analysis (more about that in a bit) and concluded there are hundreds, possibly thousands of properties that are out of compliance. They have also concluded that the task of (a) ascertaining whether they are in fact out of compliance and (b) bringing them into compliance will be time consuming and difficult.

A different way of looking at this is as a fraud problem. Property owners without permits are defrauding the county (out of tax revenues) and their customers (of the assurance that the property is up to building code, health & safety code, and the like).

Fraud management consists of three types of activity: prevention, detection, and investigation. Let’s start with detection. The county entered into an arrangement with a software company that analyzes rental property sites and county tax filings to identify (that is, detect) which properties are out of compliance (committing fraud). According to their analysis, there are somewhere between 700 and 2,000 potential income properties in the county without the appropriate inspections and permits.
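Mechanically, the detection step is a join between two datasets: parcels that appear on rental listing sites and parcels with rental permits on file. A minimal sketch of the idea - the vendor's actual method isn't public, and every parcel ID below is invented:

```python
# Detection as a set difference: parcels advertised for rent that
# have no rental permit on file. All IDs below are made up.

listed_parcels = {"12-034", "12-051", "17-220", "19-008"}  # from rental sites
permitted_parcels = {"12-034", "19-008", "20-115"}         # county permit roll

suspected = listed_parcels - permitted_parcels  # listed but never permitted

print(sorted(suspected))  # ['12-051', '17-220']
```

Detection scales effortlessly this way, which is exactly the asymmetry the county now faces: the machine can flag thousands of parcels far faster than human inspectors can investigate them.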

This brings us to investigation. Two thousand properties potentially out of compliance may not sound like a lot, but it is when there are only a few building inspectors who work for the county. Plus, many of the property owners receiving citations in the mail are disputing them in court, delaying resolution and tying up an already limited staff of inspectors. This doesn’t so much point out the labor intensity of inspections as make clear how the scale of the problem has changed: something that had for decades been a problem at human scale is now at machine scale. While scaling the detection of the problem was relatively easy, scaling the investigation will not be. Sure, a small fleet of drones could probably assist with investigation and alleviate some of the labor intensity, but that would require the county to spend money on both labor and tech for a limited solution with no guarantee of results.

Which leads us to prevention, the third activity of fraud management. The best way to make the investigation activity manageable is to prevent the caseload from getting out of control in the first place. Yes, the numbers suggest it is already excessive, but the amount of undeveloped land in this area up for sale suggests there is room for more property development. Plus, per our earlier definition, fraud is committed with every rental of an out-of-compliance property, so in theory the intent would be to prevent the next rental of an out-of-compliance property.

Practically speaking, there is very little a single county or even a handful of counties with small tax bases can do to prevent fraud like this. Prevention will probably require state-level legislation, and by several states. There is precedent for state governments acting this way. For example, in the past five years most states have enacted marketplace facilitator laws to make it easier for the multitude of state-, county- and municipal-level departments of revenue to collect sales taxes from online marketplace retailers: instead of needing to collect from the individual merchants, the marketplace facilitator is responsible for collecting and remitting sales taxes. States could similarly enact legislation obligating property rental booking services to require listing owners to register valid permits at risk of penalty for non-compliance, and to report rental data to the counties where properties are rented. The onus would then arguably be on the vacation property listing sites to confirm merchant compliance, which would be checked via periodic audit, similar to a sales tax audit of an online marketplace. There would still be leakage (there always will be), but likely not as much as there is today.

“You can’t fight city hall” has a different meaning today. Half a century ago, it meant the individual couldn’t expect to win a fight with a government bureaucracy. Today, a county bureaucracy can’t expect to win a fight against the modern day equivalent (socioeconomic trends of cheap capital and changing vacation patterns amplified by tech). But one thing has not changed: the underdog can only win by redefining the problem, and collaborating with many others to change the rules of the game.

Friday, December 31, 2021

What does it let us do that we couldn't do before?

In the past year, activist investors have pushed for retailers like Macy’s and Kohl’s to separate their eCommerce operations into separately listed entities. The argument goes that eCommerce retail growth is rapidly outpacing bricks-and-mortar growth, and saddling a high-growth business with an ex-growth legacy company depresses enterprise value. Separating them into two listed entities liberates the trapped value and allows investors to benefit: the eCommerce business for its growth, the bricks-and-mortar business for its stable (if declining) cash flows, real estate holdings and intellectual property (e.g., brand) value.

Not so fast. There are counter-arguments to making this separation, and not just that a growing eCommerce division covers up for a struggling traditional retail operation.

Principal among the arguments for keeping the business whole is that even with - and perhaps especially because of - COVID, omnichannel is the strongest hand retail has to play. Omnichannel requires a seamless customer experience that independent eCommerce and physical store legal entities will struggle to curate. It stands to reason that what is good for the bricks-and-mortar business is not necessarily what is good for the eCommerce business, and vice-versa. Having eCommerce and bricks-and-mortar working independently - if not at cross-purposes - will do little to harmonize the customer experience, not helpful at a time when doing so is deemed essential to survival.

An extension of this argument is that an omnichannel strategy doesn’t distinguish among channels, so separating the two - and thereby creating a distinction between them - is solely an act of financial engineering. Assessed as a financial act, the obvious question is: who wins? The consultants, attorneys and banks that collect fee income from the separation are clear beneficiaries: they’ll collect their fees regardless of the outcome. Investors may or may not win out, as bond and equity prices in both legal entities may plummet after the separation, but at the start they will have no less value than they do today, plus upside exposure through clearer value realization paths. Unfortunately, it’s hard to imagine how the pre-separation business itself gains from the separation: does it stand to reason that even more formalized organizational silos, redundant corporate overhead functions, and executives with polarized incentives are customer-value generative outcomes?

This flare-up in retail is interesting because it is the latest incarnation of a long-lived phenomenon of companies touting a change in their capital structure as a strategic initiative. I first wrote about this almost nine years ago. At the time, activist investors were pressuring tech firms to create new classes of preferred shares or issue new bonds solely for the purpose of extracting cash flows from operations for the benefit of investors. But it wasn’t just an outside-in phenomenon of investors pressing tech firms: Michael Dell had at that time proposed to take Dell private, which he did soon thereafter. With only woolly words to justify going private, the move raised the question: what can Dell do as a private company that it cannot do as a public one?

The current kerfuffle in retail allows us to ask this question more broadly. Changing capital structure is no different from any other use of corporate cash, be it distributing dividends, replatforming operations, or simply strengthening the credit rating. Those bankers, lawyers and consultants don’t come cheap. The question is, what does it let the business do that it couldn’t do before?

With the benefit of hindsight, we know that Dell the publicly listed company became Dell the private equity fund. Among its acquisitions was EMC, and in particular EMC’s stake in VMWare, a position so lucrative that when Dell went public again in 2018 the implied value of the business excluding that holding was effectively nil. Dell the public company could have acquired EMC; publicly listed tech companies make acquisitions all of the time. What going private let the business do that it couldn’t do before was to concentrate ownership, and subsequently the returns from those acquisitions, in fewer people’s hands.

In the retail sector, the answer is not necessarily so cynical. Saks split into separate bricks & mortar and eCommerce legal entities earlier this year. In the words of the eCommerce CEO, as quoted in the WSJ this week, both businesses benefit: the same dollars are no longer chasing conflicting investment opportunities in the IRL and online realms, the eCommerce business has expanded its offerings and reach, the bricks & mortar business has better integration with eCommerce than it did before, and eCommerce now has an employer profile attractive to tech sector workers. In short, to mangle a longstanding phrase, the Saks eCommerce CEO argues that as two entities, the sum of the parts is greater than the whole could ever have been. The separation, the CEO argues, lets Saks do something - probably many somethings - it could not do before.

This, in turn, raises the question: why?

There’s a quote attributed (quite probably erroneously) to the late Sir Frank Williams of the eponymous Williams Formula 1 team. When asked whether he approved of a proposed change to the race car, the legend is that his only response was, “does it make the car go faster?” It’s a deceptively simple question, one that I long misunderstood, because I took it at face value. Engineers can do any number of things to make a car go faster that also make the car less reliable, less stable, incompatible with sporting regulations, and so forth. While the question “does it make the car go faster” appears a simple up-or-down question, it actually questions the reasons behind the proposed change. How does it make the car go faster? Why hasn’t anybody thought to do this before? In answering those questions, we find out if the proposed change is clever, or too clever by half.

And that’s the question facing traditional retail. A commercial restructuring that alleges it creates value for the business (that is, not just investors) flies in the face of conventional wisdom. Sometimes that conventional wisdom is correct: Dell shareholders who accepted something less than $14 / share in 2013 lost out on a quadrupling of the enterprise value over an eight-year span (and no, the S&P 500 didn’t perform quite that well over the same timeframe). But then, as John Kenneth Galbraith pointed out, conventional wisdom is valued because it is convenient, comfortable and comforting - not because it is necessarily right. Perhaps Saks and parent HBC are onto something more than just financial engineering, if in fact separating eCommerce from bricks & mortar lets them do something they could not do before.

Tuesday, November 30, 2021

Do we need IT Departments?

The WSJ carried a guest analysis piece on Monday proclaiming the need to eliminate the IT department. While meant to be an attention-grabbing headline, it is not a new proposition.

Twenty years ago, the argument for eliminating the IT function went like this: while IT was once a differentiator that drove internal efficiency, it was clearly evolving into utility services that could be easily contracted. And certainly, even in the early 2000s, the evidence of this trend was already clear: a great many functions (think eMail and instant messaging solutions) and a great many services (think software development and helpdesk roles) could be fully outsourced. Expansive IT organizations are unnecessary if tech is codified, standardized and operationalized to a point of being easily metered, priced and purchased by hourly unit of usage.

While the proponents of disbanding IT got it right that today’s differentiating tech is destined to become tomorrow’s utility, they missed the fact that tomorrow will bring another differentiating tech that must be mastered and internalized before it matures and is utilified. Proponents of eliminating the IT function also ignored the fact that metered services - particularly human services - have to be kept on a short leash lest spend get out of hand. That requires hands-on familiarity with the function or the service being consumed, not just familiarity with contract administration.

The belief that enterprise IT departments should be disbanded is back again. This time around, the core of the argument is that a silo’d IT organization is an anachronism in an era when all businesses are not just consumers of tech but must become digital businesses. There is merit in this. Enterprise IT is an organization-within-an-organization that imperfectly mirrors its host business. IT adds bureaucracy and overhead; hires for jobs devoid of the host business’ context; and by definition foments an arms-length relationship between “the business” and “IT” that stymies collaboration and cooperation and, subsequently, solution cohesiveness. Not a strong value prop there by today’s standards.

Today, [insert-your-favorite-service-name-here]-aaS has accelerated the utilification of IT even further than most could imagine two decades ago. And, or at least so the argument goes, modern no-code / low-code programming environments obviate the need for corporate IT functions to hire or contract for traditional-language software developers. Higher-level languages with which non-software engineers can create solutions reduce the traditional friction between people in the traditional roles of “business” and “IT”.

Best of all, there is a reference implementation for disbanding centralized IT: the modern digital-first firm. While a digital-first firm may have a centralized techops function to set policies and to procure and administrate utility services, it is product teams - hybrids of business and tech knowledge workers - that create the digital solutions that run the business.

If you had the luxury of starting a large enterprise from scratch in Q4 2021, you would have small centralized teams to create and evolve platform capabilities and standards, from cloud infrastructure to UX design standards, while independent product teams staffed with hybrid business and technology knowledge workers build solutions upon the platform. The no-code / low-code tech notwithstanding (it tends to yield more organizational sclerosis and less sustainable innovation, but that’s a post for another day), this is a destination state many of us in the tech industry have advocated for years.

So why not model legacy enterprise IT this way?

Why not? Because enterprise IT isn’t the problem. I wrote above that enterprise IT is an imperfect mirror of its host organization. However, the converse is not also true: the host business is not an imperfect mirror of its enterprise IT function. In the same way, enterprise IT is a reflection of an enterprise problem; the enterprise problem is not a reflection of an IT problem.

Companies large and small have been reducing equity financing in favor of debt for over a decade-and-a-half now. A company with a highly-leveraged capital structure runs operations to maximize cash flow. That makes the debt easily serviceable (high debt rating == low coupon), which, in turn, creates cash that can be returned to equity holders in the form of buybacks and dividends. Maximizing cash flows from operations is not the goal of an organization designed for continuous learning, one that moves quickly, makes mistakes quickly, and adapts quickly. Maximizing cash flow is the goal of an organization designed for highly efficient, repetitive execution.

The "product operating model" of comingled business and tech knowledge workers requires devolved authority. Devolved authority is contrary to the decades-long corporate trend of increased monitoring and centralized control to create predictability, and consolidated ownership to concentrate returns. Devolved decision-making is anathema to just about every large corporate.

Framing this as an “IT phenomenon” is the tail wagging the dog. As I wrote above, enterprise IT is an imperfect reflection of its host organization. Enterprise IT is a matrix-within-a-matrix, with some parts roughly aligned with business functions (teams that support specific P&Ls, others that support business shared services such as marketing), while other IT teams are themselves shared services across the enterprise (in effect, shared services across shared services). Leading enterprise change through the IT organization is futile. Even if you can overcome the IT headwinds - staffing lowest-cost commodity labor rather than sourcing highest-value capability, managing utility and differentiating tech under the same hierarchy - you still have to overcome the business headwinds: heavy-handed corporate cultures (“we never make mistakes”); the failure to study mistakes and errors for market signals indicating change, repressing them instead as exceptions; and capital structures that stifle rather than finance innovation. Changing IT is not inherently a spark of change for its host business, if for no other reason than that no matter how much arm waving IT does, IT in the contemporary enterprise is a tax on the business, a commitment of cash flows that the CEO would prefer not to have to make.

To portray enterprise IT as an anachronism is accurate, if not a brilliant or unique insight. To portray enterprise IT as the root of the problem is naive.

Sunday, October 31, 2021

Is the Tech Cycle More Important than the Fed Cycle?

In 2014, Andy Kessler wrote an intriguing op-ed in the WSJ, positing that beginning in the last half of the 20th century, the tech cycle had replaced the Fed cycle as the engine responsible for economic growth.

His argument went like this. Historically, the economy ran in 4 year cycles. Initially, cheap capital stimulated business investment and employment, which spurred spending, but increased spending eventually brought inflation. Inflation meant prices of goods rose and eventually tempered demand; lower demand meant inventories climbed, causing companies to slow the rate of production. Lower production forced companies to lay off workers, while the Fed raised interest rates to tame inflation which culled business investment. As inventories depleted and inflation abated, the cycle started all over again. Many interpreted this as the Fed cycle of interest rate adjustments. As it was once said, the Fed brings the punchbowl to the party before the guests arrive, and takes it away once the party heats up.

Seven years ago, Mr. Kessler pointed out that economic cycles are much longer today than they once were and attributed this to the tech cycle. His basic argument was that each new generation of tech - in his narrative (a) mainframes, (b) personal computers, (c) early internet, (d) mobile / cloud - had a greater influence on the longevity and vitality of economic performance than anything that the Fed did. The technology enabled changes in business models that made them less susceptible to traditional forces. His case study was that supply chain integration meant less inventory buildup, which meant less volatility, and subsequently longer cycles.

It’s a very intriguing proposition. I’ve wrestled with this from a few different perspectives. Yes, undoubtedly, new generations of tech have changed business models, making companies less vulnerable to the broader business (and subsequently capital) cycle. Technology has also increased worker productivity, which reduces labor intensity, which means less labor volatility when things slow down. Yet at the same time, quite a few tech firms have shown themselves to be vulnerable to the business cycle. To wit: the Fed cycle matters a great deal to tech firms dependent on benign credit conditions. Tech has no special immunity that way.

The traditional economist in me has two problems with Mr. Kessler’s argument. First, the “tech disruptor” mantra ignores financial orthodoxy - not to mention the over-abundance of other would-be disruptors - at its peril. It tends to be a self-referential argument that “tech is disruptive and is therefore ascendant.” Which is true, until the tech in question runs out of money or ends up in a bizarre stasis in which a bunch of tech disruptors with overvalued equity are deadlocked in internecine warfare, each simply waiting for all the others to run out of cash before they do. Second, long wave theory tends to read like narrative fallacy, something that Nassim Taleb specifically warned about. Nikolai Kondratiev was clearly onto something, but how much of a long-wave cycle is cherry-picking data points to fit a narrative rather than the data itself exposing the narrative?

That said, capital makes itself irrelevant when it is so cheap and so abundant for so long, as it has been for decades now. The traditional economist in me is an idiot for clinging to a set of parameters that have made themselves irrelevant to a broader set of trends.

That’s a long preamble to say that Mr. Kessler’s 2014 argument has contemporary relevance in light of economic performance during the COVID-19 pandemic.

The Federal Reserve’s response to the pandemic in 2020 was to apply the playbook it developed in response to the 2008 financial crisis: (a) expand the balance sheet through bond buying (this Fed page is representative of the period, look at the second and third columns); and (b) increase the money supply. Theoretically, cheap capital would mean that businesses and consumers would have no reason not to invest and spend.

But those businesses and consumers couldn’t invest or spend if they didn’t have the means of investing or spending. Traditional ways of working were analog, requiring people to conduct business in person. Fortunately, the technologies had long existed for commercial activity to continue despite people being unable to leave their homes. The existence of those technologies wasn’t just serendipitous: the fact that productivity tools enabling a remote, geographically distributed labor force to work collaboratively existed at all fits Mr. Kessler’s point that the tech cycle had far greater influence on economic performance during the pandemic than anything the Fed did. While some sectors of the economy did fall off a cliff (e.g., air travel, hospitality), most carried on. And despite the fact that the pandemic has been going on for nearly 21 months now, S&P 500 earnings are very strong. Without the technology the entire economy would have fallen off a cliff no matter how much money the Fed printed.

The pandemic also exposed winners and losers. Not created, exposed. Pre-pandemic, the tide in customer interaction, whether B2C or B2B, was already moving toward digital channels. The companies caught without viable digital channels were losers during the pandemic. The justification for digital channel development during the pandemic - and true right up to today - has less to do with beating the hurdle rate for investing capital, and more to do with simply staying in business. Sure, the decision to invest is easier to make when interest rates are meaningless, but it isn’t interest rates that make the investment in digital channels compelling. Survival makes them compelling.

The concern today - October of 2021 - is whether or not the Fed cycle has finally become inflationary. I write “finally” because Fed policy targets 2% personal consumption expenditure inflation, and PCE inflation has by and large fallen short of that target since 2008. In recent months, inflation has not only topped that 2% target but run a few laps round it. In the traditional Fed cycle, the measured policy response would be to raise interest rates, which will cool economic activity and bring an end to the cycle.

But how will this play out?

Let’s look at the drivers. Inflation, twined with a labor participation rate plumbing depths not seen since the early 1970s, is creating pressure for real wage increases. After decades of losing, labor is having a moment. Unionized workforces are on strike. Amazon may have to increase warehouse labor comp.

Historically, the Fed response would be to increase interest rates aggressively to tame inflation. Yet markets are still pricing the Fed funds rate to rise only to about 1.20% by 2026. That might seem a huge jump from the 0.06% the Fed funds rate stands at today, but by historical standards 1.20% is ridiculously cheap capital, not the kind of rate that discourages spending. That means markets expect capital to be cheap (and therefore abundant) for the foreseeable future.

As labor costs rise, companies will look for ways to increase labor productivity so they can reduce labor intensity of operations. Labor productivity comes from increased tech density. Drones, robots, distributed ledger technology, vehicle electrification, and many more technologies will be the drivers of that labor productivity. If capital is cheap, the hurdle rate is low for productivity-enhancing investments. And even if the Fed upped interest rates much higher to tame inflation, corporate balance sheets are awash in cash. A lot of companies simply don’t need to raise capital to finance new investments.
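To see why rates this low barely matter to the investment decision, consider a back-of-the-envelope NPV comparison (all figures are invented for illustration): a labor-saving automation investment clears the hurdle comfortably whether capital costs 1.2% or several times that.

```python
# Back-of-the-envelope NPV of a hypothetical labor-saving investment.
# All figures are invented; the point is that at today's rates the
# discount rate barely moves the answer.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs today (t=0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

investment = -1_000_000                 # up-front cost of automation
annual_saving = 250_000                 # labor cost avoided, years 1-5
cashflows = [investment] + [annual_saving] * 5

for rate in (0.012, 0.05):              # cheap capital vs. a "normal" rate
    print(f"at {rate:.1%}: NPV = {npv(rate, cashflows):,.0f}")
```

Under these assumed figures the project is solidly NPV-positive at both rates; the discount rate changes the margin of the decision, not the decision itself, which is the sense in which cheap and abundant capital makes the Fed cycle less decisive than the tech cycle.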

Inflation may persist into 2022, and even beyond. But Mr. Kessler got it right in 2014: it won’t be the Fed that determines how the economy performs in this cycle, it will be tech.

Thursday, September 30, 2021

The Manager-Leader versus the Manager-Administrator

We saw last month that just because somebody is the manager of an Agile team does not make that person an Agile manager. The Agile manager does specific things: advances the understanding of the domain, has a symbiotic relationship with the do-ers, creates and adjusts the processes and social systems, and protects the team’s execution. By virtue of doing these things, the Agile manager is a leader, whereas the traditional manager is just an administrator.

I have been fortunate to watch the rise of Agile competencies over the last two decades. I have been less fortunate to watch Agile management competencies erode as Agile has spread in popularity. Although there are undoubtedly numerous reasons why, one immediately stands out: while the value system that underlies the behaviors described in the previous post is highly compatible with the creative company mindset, it is highly incompatible with the operating company mindset.

The creative company - e.g., a studio that yields original work, anything from entertainments to custom software assets - benefits from how much it exposes itself to environmental uncertainty and how effectively it internalizes the relevant learnings to create a successful, if unique and one-off, solution. In contrast, the operating company - a firm that mass produces products or provides mass volumes of labor to deliver solutions - benefits from environmental certainty that allows it to apply patterns that create consistency, enabling repeatable solutions at scale. The greater the consistency, the greater the automation, the lower the labor proficiency level required, the lower the cost of execution, the higher the margin and cash flow. Whereas the creative studio model thrives on chaos, the operating company thrives on consistency.

This is a tectonic fault line in the application of Agile management practices. When Agile is brought to bear in an operating company context with an overriding mission to provide consistency of outcomes, the value system that fosters the management behaviors that get things done through people in the face of a volatile set of circumstances is simply ignored. The words remain - adaptive planning, continuous feedback, and the like - but the values that give rise to them in the first place simply dissipate.

The presence of Agile terminology twined with the absence of the Agile value system gives license to people in management roles to do pretty much anything under the aegis of “being agile”. Take “adaptive planning”. In practice, “adaptive” is used to mean anything on a spectrum from no management and no plan (“we’ll figure it out as we go, on somebody else’s dime”) to dictatorial management-by-plan (“the team is free to meet the commitments they make during the planning exercise.”) Planning itself is an exercise in plausible deniability for managers: if the do-ers create the plan, management is the act of holding people accountable for the plan they came up with, not for the continuous adjustment of the plan or refinement of the business outcomes in the face of what is learned through execution. And reporting against a plan is somewhat perversely passed off as “governance”, itself an overloaded term with no actual meaning beyond “fancy word for management reporting.”

The net result is that managers of Agile teams have found ways to make themselves passengers because they perform only administrative, communication, and reporting tasks. Former IBM CEO Lou Gerstner referred to these kinds of managers as “presiders”. Mr. Gerstner deemed presiders to be useless. Do-ers in Agile teams deem presiders to be useless as well.

I wrote a decade ago that Agile gets corrupted when it goes corporate. That phenomenon is not unique to Agile, of course. By way of example, look at cloud computing: who knew that “migrating to the cloud” was as simple as moving a data center from owned on-prem to leased in somebody else’s facility? Yes, people still pass this off as “cloud migration”. Enterprise scale, enterprise politics and enterprise vendors have a tendency to dilute concepts - cloud, Agile, you name it - to a point of rendering them inert. ‘Twas ever thus.

Yet while Agile concepts are bound to be co-opted, this does not have to be the case for Agile execution. Managers can choose to be drivers of outcomes rather than plans, work with the team rather than outside of it, create social systems rather than schedule meetings, and protect the team’s execution from external forces rather than allowing them to steamroll the team. These, among other things too numerous to list here, define excellence in management.

Pursuing excellence is a choice that always rests with practitioners. There is a reason why the administrative burden of Agile has always been defined as “lightweight”, and it isn’t to ease the workload of managers: it is to give managers the bandwidth to take a leadership role in a peer relationship with engineers, designers, analysts, and all the other do-ers in a team. That door is always open to managers in an Agile team, but the decision to walk through it or not rests with the individual manager.

Choose wisely.

Tuesday, August 31, 2021

What, Exactly, Is Agile Management?

Engineers have long had a poor relationship with managers. There are the stories long told of engineers unconstrained by management executing high stakes skunkworks projects, or more modestly using guerilla tactics to sneak unauthorized features into products that delight users. There are popular caricatures such as the pointy-haired manager in Dilbert, or Ford Motor’s management as portrayed in the movie Ford v Ferrari. Historically, management has never been loved, and in fact is often loathed, by engineers.

This all seems different in Agile. Management appears to be well integrated in Agile teams, not an overhead, nuisance, or encumbrance as it is in traditional project management. If we accept the fact that Agile is a value system and not a set of mechanical processes, it stands to reason that there must be something different about the norms and behaviors of Agile managers vis-a-vis traditional managers. What is it that makes Agile management different from traditional management?

A good place to start is by looking at the fundamentals of Agile team dynamics. First and foremost, as Paul Hammant is fond of saying, there are no passengers in an Agile team.

An Agile story is an expression of end-to-end business need. Although completing a story may require contribution from people in different roles - QA analyst and developer and experience designer, for example - each person is still responsible for the entire outcome of the story. Each person is responsible for the outcome because team performance is measured on collective output (specifically, stories in production) as opposed to the sum of individual output (tasks completed by individuals). Every member creates the most comprehensive understanding they can of each story, which they express both through artifact (story narrative or code for example) and collaboration with other team members (story walkthroughs and desk checks). A person driving goals orthogonal to the team goal, or not driving at all, does not last for very long in an Agile team.

A less charitable corollary to Paul’s statement is that nobody can hide in an agile team. What every person does and how they do it is fully exposed to every other member of the team. This is why safety is a key characteristic of an Agile team: by making it safe for any person to admit they don’t know how to do something, it is easier for the team to collectively adapt and make the most of the strengths of the people it has. It does mean, however, that mismatches - people who overstate their skills and competencies - will be exposed very quickly.

Combined, this means that no individual can phone it in. One person who does a substandard job affects the performance of the entire team. Consider a team using story narratives to surface complexities in business requirements. A business analyst who consistently does a lackadaisical job will effectively export the analyst responsibilities to a developer. Developer productivity - and therefore story card throughput - will suffer as developers compensate for substandard analysis work.

The obvious conclusions are (a) every individual drives to the collective output of the team; (b) every individual must strive for a high standard of excellence; and (c) a team cannot sustain a level of performance any higher than that achieved by its weakest-performing member.

For people working in roles directly related to creation and evolution of a software asset - experience designers, developers, QA analysts - this is easy to understand. We all understand the consequences to an Agile team trying to compensate for inadequate design artifacts, vague requirements, poor quality code, and ineffective tests: rework cycles, additional handoffs, and poor quality, among other things.

But what do these three things - drive, excellence, and minimum collective effectiveness - mean for people in management roles?

I was fortunate to have worked with very strong Project Managers in my early Agile projects, people who understood the Agile value system and knew how to apply it as managers in a delivery team context. They all shared characteristics that are applicable to all management roles. Among them:

  1. Managers advance the understanding of the problem / opportunity / solution domain. The best Agile managers are outstanding at scope definition and scope control. They do this by intelligently questioning different people in the team: business partners to ascertain what is and is not truly important to near-term business goals, developers to ascertain what is and is not tech goldplating, product managers on how a story can be split and part of it deprioritized, and so forth. It’s worth noting that codified accounting standards treat managers as overhead, for the simple reason that management is largely administrative work. By demonstrably working the details, an Agile manager spends some portion of their time genuinely advancing the understanding of the problem / solution domain as part of a team collective. For this reason, a portion of an Agile project manager’s time can be capitalized, unlike the time of traditional project managers, which is purely expense. Traditional PMs create project plans, staffing plans and reports, schedule meetings and facilitate ceremonies. All useful, but none of it directly contributory to the technology asset itself. The difference is that the Agile project manager is hands-on with the problem, not simply performing tasks of administrative convenience - contracts, meetings, project plans - on the periphery of those hands-on with the problem.

    It’s worth pointing out that this doesn’t happen by accident. Previous experience in a do-er role such as business analyst or developer certainly helps the manager know the types of questions that need to be asked. But the key characteristic is the ability and willingness to immerse themselves in and comprehend the details of the stories, the architecture, the code, the design, the tests. Good managers know how to work the problem and solution space. This applies at every level of management, right up to CEO and activist investor, because “ground truths” always triumph over abstractions. Conversely, the manager who lacks the will or skill to internalize a domain is completely beholden to (and subsequently held hostage or simply played by) those in the team who do. This manager is the textbook definition of “overhead.”

  2. A symbiotic relationship with the do-ers in the team. PMs have to provide a variety of status reports to various sponsors and buyers. In the early days of Agile development, there were all kinds of new and novel measures: velocity for measuring throughput of stories, load factor for assessing the actual capacity available to the team, story points (or equivalent) for measuring relative story sizes, and many more. It was easy - and very satisfying - to nerd out on the management metrics, creating forecasts of scope expansion and capacity and so forth. But one thing that distinguishes the Agile manager from the traditional manager is that the team drives the metrics. The metrics are only very rarely used to drive the team. The Agile manager uses the metrics to detect a change in the situation, such as domain complexity that was not previously understood, or team member skill that was overstated or understated. Project management metrics are used by managers to ask “what has changed”, not “why are we off plan”. The objective of the Agile metrics has never been to keep the team on a plan and hold it to that plan. The objective is to increase the understanding of how execution is unfolding and to make the necessary adjustments in one of the four variables - time, scope, capacity or quality - in response. That’s a big difference between traditional project management and Agile project management.

    Metrics being all the rage among the management class these days, this bears a bit of analysis. Of course there are instances where metrics are used to drive the team, such as when three of the project management variables - time, scope and quality - require compensation from the fourth - capacity. By way of example, when the number of sev 2 and 3 defects escaping to production rises a bit - not unsustainable or threatening, just something that stands out - while the team focus is still on new story development, it isn’t uncommon to ask everybody in the team to try to resolve one defect every day before finishing up. True, in this case a rise in defects exposes something about how execution is unfolding, and the team needs to adapt (put in a little additional time to address quality). But in practice, this is an instance where the metrics are used to drive the team (e.g., to maintain story velocity while paying down defects), if for no other reason than the optics of low defect counts alongside steady story throughput put stakeholder minds at ease.

    The same nuance applies to measuring story throughput. A release plan that slots specific stories for specific future iterations sounds like a fixed delivery plan, but when scope and time are the priorities this is a reasonable exercise. It helps to answer the “will the team make it” question, and as long as capacity and quality variables can be relaxed it is a way of depicting scenarios and anticipating responses. However, when stories are slotted to future iterations and all four variables are held constant by the manager, we no longer have Agile project management; we have traditional project management that has co-opted Agile terms in the pursuit of control. Obviously, this is not Agile.

    The four variables combined with appropriate interpretation and application of the metrics enable the manager to have a symbiotic relationship with executors. This manager is an Agile manager. However, when the metrics themselves are used as drivers, management has a one-way relationship with do-ers. This manager is a tyrant.

  3. Create and adapt the mechanical processes and social systems through which the team gets things done. Roy Singham, the founder of ThoughtWorks, used to refer to the company as a “social experiment.” All teams are social experiments, random strangers brought together in combinations of varying skills, capabilities and personalities. There will be tension and conflict. There will be misrepresentations and misunderstandings. There will be volatility.

    Traditional project managers reach for plans against which they can hold people accountable. But this is not how managers fulfill their primary duty of “getting things done through people.” A high EQ allows a manager to recognize how people interact with one another. Professional experience allows the manager to recognize the skills people have, the skills they aspire to have, and the skills they will never have. These and other factors let the manager create the circumstances for the right type of interaction and format - workshop, fishbowl, small group exercise, etc. - for the right subject in the right sequence at the right time to enable the team to get things done. Among other things, this takes listening and observation skills, the ability to design the mechanics and visualize how various activities will go, plus the ability to prioritize, facilitate, coach and intervene. This is management. This is how managers “get things done through people.” People who manage a plan are curators of documentation, not managers of a team of people.

  4. Protect the execution of the team. Creating a functional team is one thing. Protecting the team’s execution is entirely another. There are constant challenges to the integrity of the workings of a team. People ghosted to work on multiple teams (a “10% allocation” of a person’s time to a team is utterly meaningless). Stakeholders with differing and highly volatile priorities. Fear of - or just tepid commitment to - an Agile way of working. Corporate politics. Low trust corporate cultures. The Agile manager manages upwards and outwards, keeping these threats and many others at bay, preventing them from invading the team.

    There are times when external forces must invade the team, such as changing stakeholder priorities. The Agile manager creates a constructive framework and mechanical process for the team to ingest and respond to things that cannot be deflected, without creating chaos with how the do-ers are otherwise do-ing. The manager who does not protect the execution of the team from external drama is a puppet to any and all external stakeholders.
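As an aside, the iteration metrics mentioned in point 2 - velocity, load factor, and the "what has changed?" signal - are simple enough to sketch in a few lines. This is an illustrative sketch with hypothetical function names and numbers, not a reconstruction of any particular team's tooling:

```python
# Minimal sketch of the Agile iteration metrics described above.
# All names and figures are illustrative, not from the original post.

def velocity(completed_story_points: list[int]) -> int:
    """Throughput: total points for stories accepted in an iteration."""
    return sum(completed_story_points)

def load_factor(ideal_hours: float, actual_hours: float) -> float:
    """Ratio of actual effort to ideal effort; a value above 1 means
    less capacity is truly available than the calendar suggests."""
    return actual_hours / ideal_hours

def what_has_changed(history: list[int], latest: int,
                     tolerance: float = 0.2) -> bool:
    """Ask 'what has changed?', not 'why are we off plan?': flag an
    iteration whose velocity deviates from the trailing average by
    more than the tolerance, prompting investigation."""
    avg = sum(history) / len(history)
    return abs(latest - avg) / avg > tolerance

# Example: a team averaging ~20 points suddenly completes only 12.
assert velocity([3, 5, 2, 2]) == 12
assert what_has_changed([19, 21, 20], 12) is True
assert what_has_changed([19, 21, 20], 19) is False
```

Used this way, a flagged iteration is a prompt to investigate domain complexity or team capacity - and to adjust time, scope, capacity or quality - rather than a stick to hold the team to a plan.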

I wrote above that each person in an Agile team drives to the team outcome; that each person must strive to achieve excellence; and that the team will move only as effectively as its weakest-performing member. We’ll look at patterns and antipatterns of management behavior in the next post.

Saturday, July 31, 2021

What Can You Do With Less?

In the heady days of the dot-com boom, it became common for investors to challenge would-be entrepreneurs with the question, “what can you do with more?” To the aspirant (and typically struggling) startup, the question was profound, if for no other reason than they were buried in the realities of trying to keep their own narrow universe from imploding on itself.

The question is still asked today, just in different ways. For example, the Wall Street Journal recently profiled the relationship between Masayoshi Son of Softbank and Adam Neumann of WeWork, specifically how Mr. Son prodded Mr. Neumann to pursue ever more ambitious goals in 2018. That ambition culminated in a goal for WeWork to grow revenue from $2b to over $350b in 5 years (making it larger than Apple), a mooted valuation of $10t (making it equal to about 1/3rd the total market valuation of all US equities), and a pitch for $70b in financing. It was certainly more, although what actually followed was certainly less.

While the question remains the same, the question behind the question is not. A quarter of a century ago, it wasn’t clear which tech companies would succeed, and what success would look like, if in fact any would succeed at all. Spending more in the pursuit of success was a way to have less dependency on serendipitous technology and market phenomena in the pursuit of what everybody knew was the future. Owning more of the value chain, spending more on marketing and awareness campaigns, signing up more partners, and so forth were ways to project influence over the things outside any one company’s direct control. From a finance perspective, this was small beer: the price to try everything and fail wasn’t a whole lot more than the price to try a few things and fail.

Today, if every business must be a digital business, the question of success or even survival is not what is being put to the test. Instead, it is a test of one’s ambition: if the future of [insert your industry name here] is digital, the question isn’t whether [insert your digital strategy here] has the potential to succeed or not, but whether it will become the dominant digital path in its industry or just an also-ran that becomes a footnote in history. You must think bigger than a digital strategy: what are you going to do to impose your vision of the future on the commercial and non-commercial ecosystems relevant to your future? “What could you do with more?”

In rapidly growing markets, there is some wisdom in this. Industry lifecycles are characterized by relatively short periods of rapid growth contested by hundreds of equity-financed competitors, followed by long periods of slow growth dominated by a handful of oligopolistic market participants sucking cash flow from operations to service debt, finance buybacks and pay dividends. An overwhelming majority of the small competitors that exist during the rapid growth phase won’t survive, and the few that do won’t matter much. As Larry Ellison pointed out years ago, “The No. 1 software company in every segment makes all the money. We never buy anything where it doesn’t put us in the No. 1 position or get us in such a strong No. 2 position that we think we can get to No. 1 very quickly.” When a clear market opportunity emerges, the stakes are very high indeed.

Of course, not all tech is about potential for world domination. Sometimes it is about utility. Utilities - think electricity, water, and the like - are taxes on a business. As we saw a couple of months ago, one of the overriding questions that dogs utility tech investments is, “do we have to do it now?” Another is, “what could you do with less?”

This latter question can be answered with an appeal to value more than to cost. Even within the most mundane of utility tech opportunities are innovations - sometimes small, but innovations nonetheless - that are legitimate sources of value. While many (if not most) utility tech investments will never have a comprehensive value proposition, often they can be unpacked so that the utility investment can lead with value realization. Decoupling legacy technologies through APIs and abstraction layers allows for creativity not just in how utility tech is delivered, but in how fundamental business problems can be solved. When successful, a utility tech investment can be reframed from a single all-in commitment to a series of investment tranches that deliver both near-term value and long-term utility. When we do this type of analysis, we very often find there is quite a lot that can be done with less.
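One way to read "decoupling legacy technologies through APIs and abstraction layers" is the classic adapter pattern: put a stable interface in front of the legacy asset so its consumers never know which tranche of the modernization sits behind it. A minimal sketch - every class and method name here is hypothetical:

```python
# Hypothetical sketch: decoupling a legacy system behind an
# abstraction layer so modernization can proceed in tranches.
from abc import ABC, abstractmethod


class CustomerRecords(ABC):
    """The stable API the rest of the business codes against."""

    @abstractmethod
    def lookup(self, customer_id: str) -> dict: ...


class LegacyMainframeAdapter(CustomerRecords):
    """Tranche 1: wrap the legacy asset as-is behind the new API."""

    def lookup(self, customer_id: str) -> dict:
        # A real adapter would call the legacy system; stubbed here.
        return {"id": customer_id, "source": "legacy"}


class ModernServiceAdapter(CustomerRecords):
    """Later tranche: swap in the modernized service with no change
    to anything that depends on CustomerRecords."""

    def lookup(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "modern"}


def fulfil_order(records: CustomerRecords, customer_id: str) -> str:
    # Business logic depends only on the abstraction, never on the
    # legacy or modern implementation directly.
    return records.lookup(customer_id)["source"]


assert fulfil_order(LegacyMainframeAdapter(), "c-1") == "legacy"
assert fulfil_order(ModernServiceAdapter(), "c-1") == "modern"
```

The design point is that each tranche behind the interface can be funded, delivered and measured on its own, which is what makes "a series of investment tranches" possible in the first place.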

This draws a great deal of ire, of course. Enterprise IT doesn’t much like requesting a little more funding for the same initiative year after year after year, much less the specter of a long-lived hybrid tech landscape resulting from only partial modernization. Tech vendors prefer large commitments from their customers before they will offer discounts or commit top people. And, this appears to legitimize the lack of confidence that corporate capital allocators have in an IT function’s competency.

But when the relationship between enterprise IT and the rest of the business is characterized by low trust - still all too common to this day - it behooves IT to meet the trust deficit head on. Doing so demonstrates good stewardship of capital and provides transparency into why and how IT spends that capital. It also makes utility tech spend far less self-referential (I still see “we’re moving to the cloud because the cloud is better” as a business case justification) and far more aligned with business goals. And leading with value while asking for tranches of “less” is not an acknowledgement that IT isn’t trustworthy so much as it is an insistence on a high-trust partnership between business and IT in achieving the outcomes. IT doesn’t get a blank check to do tech things, but then neither does the business get anything it might ever want. Both are equal partners in shared outcomes, and partnerships cannot function without trust.

“What can we do with less?” is a good question to ask. Because sometimes, less really is more.

Wednesday, June 30, 2021

Labor's New Deal

The pandemic has created a lot of interesting labor market dynamics, hasn’t it? Week after week brings a new wave of employee survey results that make it clear a lot of workers want to retain a great deal of the location independence they have experienced over the past year. Multiple studies report roughly the same results among knowledge workers, globally: 75% want flexibility in where they work, 30% don’t want to return to an office, and 1 in 3 won’t work for an employer that requires them to be on site full time. In addition, 1 in 5 workers expect to be with a different company in the next year, as many as 40% are thinking about quitting and over half are willing to listen to offers.

This isn’t just sentiment: employees are voting with their feet. The Wall Street Journal reported a few weeks ago that the share of the workforce leaving their jobs is the highest it has been in over twenty years.

Labor wants a new pact.

The post-COVID recovery is a once-in-a-decade economic recovery. To the extent that a company’s growth is indexed to the growth of its labor force (where near-term automation is not an option), a company has to hire. If it doesn’t, it’s going to sit out this recovery. That means businesses are motivated buyers of labor.

The American economy is surging, but employers are struggling to fill skilled and unskilled positions alike. One factor is the absence of slack in the labor market. Curiously, the labor participation rate is plumbing levels not seen since the 1970s. The number of 18-to-65-year-olds actively working has been in steady decline since the mid-2000s, a few years before the 2008 financial crisis. It dropped significantly again with the pandemic, and has not yet recovered to pre-pandemic levels. Statistically, there should be labor market slack, but there is no slack as quite a few working age people are electing not to rejoin the workforce. Another factor is that with every company hiring it’s hard for any one employer to achieve visibility among job seekers. A simple search for “product manager” positions in Chicago yields over 6,300 openings; in New York over 6,800 openings; and in Dallas over 5,800 openings. Social media banners announcing “we’re hiring” are useless when every company is hiring.

Labor market tightness and difficulty in differentiating are forcing companies to raise wages. Large, deep-pocketed employers of unskilled labor including Walmart, McDonald’s and Amazon have raised their entry-level wages. Mid-tier and mom-and-pop competitors will be forced to do the same. And, many employers are responding to their own captive surveys yielding results like those mentioned above, offering greater workplace and working hour flexibility to existing staff and recruits. Average wages are going up, and workplace policies are changing to be more accommodative to labor.

With labor tight and economic expansion all around, employers will become increasingly competitive for labor. They will have to be aggressive just to stay in place. Imagine a company with, say, 100 experienced software engineers, project managers, QA engineers and the like that expects to add a dozen more people to the team in the next year. If they lose 20% of this knowledge workforce per the survey results, and assuming 10% of the people they put on the payroll are dud hires, they’ll have to hire upwards of 35 people to achieve a net gain of just 12.
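The arithmetic behind "upwards of 35" checks out, using the figures from the paragraph above:

```python
# Checking the hiring arithmetic: 100 knowledge workers, 20% attrition,
# 10% dud hires, and a target net gain of 12 people.
headcount = 100
attrition = 0.20        # survey-level turnover
dud_rate = 0.10         # hires that don't work out
target_net_gain = 12

departures = headcount * attrition               # 20 people leave
needed_effective = departures + target_net_gain  # 32 effective hires
gross_hires = needed_effective / (1 - dud_rate)  # ~35.6 offers needed

assert departures == 20
assert needed_effective == 32
assert round(gross_hires, 1) == 35.6
```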

All of this means that labor is having a once-in-a-generation moment.

Labor's power in America arguably peaked in the 1960s and has been on the wane since, the striking Air Traffic Controllers getting fired in the early 80s often held out as a seminal moment in labor's multi-decade decline. But some of you may recall that in the late 1990s, labor briefly had a moment. That was not only the go-go days of the dot-com era, but domestic US call centers were going up in all kinds of American cities, big box retailers wanted their customers to know they were "always open" and kept stores open for 24 hours a day (somebody just might be itching to buy a circular saw at 2a), and fast food drive thrus were kept open 2 hours longer than the dining rooms (conveniently, 'til after the pubs closed). For a brief period, "Sales Associate" positions came with medical and retirement benefits. Well, labor is back. The WSJ made the point last week that labor has power today that it has not enjoyed in decades. And, per the aforementioned statistics, labor is exercising that power.

With so much agitation among workers and demand for labor high, conditions are ripe for labor market “disruptors”. Some employers will simply become very aggressive recruiters of employees of other firms. If disruptive recruiting, employment and retention practices prove successful, we will see winners and losers emerge in “the war for talent.” And it isn’t start up or fringe firms taking aggressive postures. According to the WSJ, Allstate has determined that 75% of the positions they employ can be done remotely, while another 24% can be done in a hybrid fashion. That’s 99% of a traditional employer’s workforce that will have location flexibility. This means location independence may not be a worker bonus as much as it may simply be the new norm. It also means that a company may not simply struggle to hire, but that a failure to adequately adjust to the future of work will make a company vulnerable to disruption as its work force is an easy target for other employers.

History tells us that labor’s moment may not last for very long. But the longer that labor shortages last, and particularly with so much competition for knowledge workers, labor won’t come away empty handed.

Monday, May 31, 2021

Is There a Business Case for Utility Tech Investments?

Last year I wrote a piece on legacy modernization initiatives. Among the points I made was that legacy modernization is at best a break-even proposition: modernization is simply trading something old for its modern counterpart, getting the same capabilities in return. Of course, there are first order benefits to legacy modernization. Additional or more comprehensive capabilities that come standard with a new COTS product; lower labor intensity and less dependency on costly knowledge workers required to sustain legacy assets; and reducing systemic fragility (e.g., production downtime) are all very real economic benefits that have P&L impact. But by and large, these benefits at best cover the costs for a modernization effort: the new assets will come with a cost to acquire and customize, a cost to migrate, a cost to integrate with other systems, and annual costs to maintain, support and evolve. Software ain’t cheap to buy, implement and live with.

But one thing I did not point out in last year’s blog is that a legacy modernization - even a sweeping one - falls into the category of utility tech, not value-generative tech.

A value-generative investment is a roll of the dice that, say, a new market opportunity can be developed or a cost efficiency can be made where none was possible before. There is some uncertainty whether a market opportunity can be converted or a cost efficiency can be realized because of factors outside of anybody’s control: that buyers will see the company as a provider of a new category of service it has never offered before, that a problem space is sufficiently consistent to allow for systemic improvements, that the technology exists to perform the task in the environments and conditions where it must perform, and so on. A value-generative investment is the pursuit of something that may not have been necessary or possible, and therefore could not have been done before. A value-generative investment is an exercise in deploying risk capital through IT in the pursuit of extraordinary benefit that yields competitive advantage.

This does not describe legacy modernization. Investments in utility capabilities are the pursuit of improvements in the way things are done, because at present they are inadequate by contemporary standards. The risks in a legacy modernization investment are entirely to do with execution of the investment itself, not how well the investment performs post-production. For legacy modernization benefits to be realized, the assets must be built to be reliable and low-maintenance; and customization, conversion and cutover costs must not spiral out of control. The proximate causes for utility investment failure are all within the confines of the execution of the investment itself: that the people doing the work are competent in the domain and technologies; that there is low staff turnover for the duration; that the team is not creating an entire project phase of execution (in the form of unanticipated late-stage integration and testing) to solve problems entirely of its own making; and so forth. True, there are unknowns in the business domain and in the legacy systems, and such uncertainty does create the risk for costs to increase. However, uncertainty of this kind is generally covered by cost contingency in the investment proposal. Even in the extreme cases where legacy assets are completely unmaintainable, legacy system modernization is still the replacement of one known domain of capabilities with another.

The nature of the uncertainty in the investment matters because it changes the nature of the capital allocation question put to an investment committee. For value-generative investments, the investment committee is asked whether it wants to gamble some of the firm’s capital in the uncertain pursuit of extraordinary benefit. By and large, only the investment capital itself is at risk, because an investment committee can terminate an underperforming value-generative investment with little reputational and operational blowback. However, for utility investments, the investment committee is asked whether it wants to tie up corporate capital for an extended period of time to improve the quality of services within the firm. Utility investments tend to be all-in commitments, so the investment committee is also underwriting the risk that additional capital will be necessary and that it will be tied up for a longer period of time to make good on the modernization investment.

Hence these are two very different types of capital allocation. One is to bet some pocket change at a casino table with something less than 100% expectation of a full payoff, and perhaps any payoff at all. The other is to prepay for two years of a health club membership in the anticipation that regularly using the health club will result in lower insurance premiums. In capital terms, the former is equity, the latter is debt.

The justification for each investment is markedly different. The upside potential - however remote - for a value-generative pursuit will eclipse its cost. The upside potential for a utility pursuit will be break-even at best. Even the most thorough of cost-benefit analyses will not make a utility investment a no-brainer. Look, even a value-generative pursuit that fails yields a good story for the CEO to tell the board, provided it wasn’t an outsized gamble of scarce corporate funds. But many a C-level exec has been fired for cost overruns on utility investments.

A compelling value-generative tech proposal gets the investment committee to ask, “we accept the possibility, how probable is the payoff and how long is the window of opportunity?” Yet even the most compelling utility tech proposal gets the investment committee to ask, “we accept the need to do this, but do we have to do this right now?”

The question that a cost-benefit analysis for a utility tech investment must frame is, “why should we do this right now?” We’ll look at what that analysis consists of in a future post.