I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.
Showing posts with label Corporate Psyche. Show all posts

Monday, September 30, 2024

It isn’t “return to office.” It’s “malicious destruction of trust.”

Return-to-office mandates continue to trickle in, and every now and again a prominent employer makes a headline-grabbing decision that all employees must be back in office. The news articles focus on the obvious impact of RTO mandates: the threat to dual income families successfully managing school and daycare with career-advancing employment; loss of quality of life to nonproductive commute time. These are real. But RTO mandates also indicate something else: they are a public acknowledgement of an erosion of trust within a company.

Every company is a society unto itself, with values and social norms that determine how people behave and interact. There are high-integrity and low-integrity workplaces, the distinguishing characteristic being the extent to which people are “free from corrupting influences or motives”. Integrity manifests itself in how people interact with one another, in commitment to craft, and in the administration of the business itself.

First is interpersonal integrity, the things that define whether the company is a toxic or fulfilling place to work. Do people take credit for the work of others? Do people want to look good to the point that they will make others look bad? Is it safe for a person to acknowledge things they do not know, or to accept responsibility for a mistake?

Second is operational integrity, the things that define a firm’s commitment to excellence. Does recruiting pursue competent candidates, or is it just filling vacancies? Do salespeople inflate the pipeline with low-probability or worthless leads? Do colleagues complete their work without taking shortcuts that could impair results? Does finance send invoices in the hopes the payer will pay without scrutinizing the bill?

Administrative systems indicate whether a workplace is high- or low-integrity because they communicate the extent to which trust is extended to individuals. Are the administrative systems an enabling mechanism for labor or a control mechanism over labor? For example, how highly restricted are employees in how they incur travel expenses? Is performance measurement designed to drive good practice, or confirm adherence to practice? Are annual reviews designed for personal development and advancement, or as a means of gathering structured data to rank order the workforce?

Societies work best when they run on trust. Companies cannot escape the need to spend money to demonstrate or investigate compliance - violations of trust are unfair and can be expensive when they occur - but it is not value-generative expenditure. The more a company invests in controls and surveillance to compensate for a lack of trust, the higher the operating costs to the business. Conversely, the lighter the controls on labor, the lower the administrative burden and the greater the productive and creative output from labor. A company with few - ideally no - bad actors has little real reason to incur cost to compensate for a trust void.

Which brings us back to the RTO mandates.

One of the primary justifications given for RTO is to increase worker collaboration with an eye toward driving creativity and innovation. That sounds plausible, as most of us have had an experience where a creative solution came together quickly because of high-bandwidth, in-person collaboration. But if in-person collaboration is so compelling, then globally sourced teams and departments are a concession to convenience, not a strategic advantage for sourcing best-in-class capability. Nor would firms severely cut employee travel budgets in the face of declining revenues - isn’t that precisely the time a company needs more innovation? Toss in the free productivity harvested from the individual laborer through flexible working, and justifying RTO with a “paucity of innovation” is a bit of a stretch.

The more likely explanation is a desire for greater workforce control. Working from home proved that a lot of jobs can be done from anywhere. Knowledge is transferable. It isn’t a big stretch that jobs that can be done from anywhere can be done by anyone from anywhere. Physical supervision does not improve management’s ability to provide higher fidelity performance profiles, but it does allow management to assess performance with less friction. Is this person executing at the highest level of throughput, or are they dogging it? Is that person really an expert with deep knowledge, or are they expert at gaming the system? Spot productivity audits are a lot easier in cubeland than in Teamsland. If - when - the edict comes that we have to contract operational labor spend, middle managers may not have better data than they would have with a distributed workforce, but they'll have no excuse for not having it.

Why a push to increase control now? Because corporate income statements are still being buffeted about. Interest rates have cooled off but remain generationally high, depressing corporate capital spending. Price increases have masked drops in unit sales volume. Cumulative inflation has increased input costs. Management has little control over the topline, so it must exercise what control it can over the bottom line.

Labor is a big input cost, and labor working from home is an invisible workforce. Income statement pressures twined with future economic uncertainty make that invisible workforce an easy target. When asked about the number of people who work at The Vatican, Pope John XXIII is credited - probably erroneously - with having replied “about half”. It’s not hard for a COO to be cynical about labor productivity given the supply chain, labor, price and cost roller coasters of the last four years.

Before RTO came shelter-in-place necessitating work from home. A lot of people who had never worked in a distributed fashion figured out how to make it work. They’re certainly not heroes in an altruistic sense as they were motivated by self-interest: preserving the company preserved the job. Still, this cohort kept the internal workings of the business functioning during a period of unprecedented uncertainty. That, in turn, merited an increase in operational trust (they responded with excellence) and interpersonal trust (they will do the right thing). RTO negates all of that earned trust.

Saturday, August 31, 2024

For years, tech firms were fighting a war for talent. Now they are waging war on talent.

In the years immediately following the dot-com meltdown, there was more tech labor than there were tech jobs. That didn’t last long. By 2005, the tech economy had bounced back on its own. After that, the emergence of mobile (a new and lucrative category of tech) plus low interest rate policy by central banks fueled demand for tech. Before the first decade of the century was out, “tech labor scarcity” became an accepted norm.

The tech labor market heated up even more over the course of the second decade of the century. Rising equity valuations armed tech companies with a currency more valuable than cash, a currency those companies could use to secure labor through things like aggressive equity bonuses or acqui-hires. COVID distorted this overheated tech labor market even further, as low interest rates for longer, a massive fiscal expansion, and even more business dependency on tech spurred demand. Growth was afoot, and this once-in-a-lifetime growth opportunity wasn’t going to be won with bog standard ways of working: it was going to be won with creativity, imagination and exploration. The tech labor pool expanded as tech firms actively recruited from outside of tech.

The point of this brief history of the tech labor market in the 21st century is to point out that it went from cold to overheated over the span of many years. Not suddenly, and not in fits and starts. And yes, there were a few setbacks (banks pulled back in the wake of the 2008 financial crisis), but in macro terms the setbacks were short lived. It was a gradual, long-lived, one-way progression from cold to super hot.

Then the music stopped, abruptly. COVID era spending came to an end, inflation got out of hand, and interest rates soared. Almost instantly, tech firms of all kinds went from growth to ex-growth. Unfortunately, they built businesses for a market that no longer exists. With capital markets unwilling to inject cash, tech companies need to generate free cash flow to stay afloat. Tech product businesses and tech services firms - those that haven’t filed for bankruptcy - as well as captive IT organizations all tightened operations and shed costs to juice FCF. (Tech firms and tech captives are also in mad pursuit of anything that has the potential to drive growth - GenAI, anyone? - but until or unless that emerges as The Rising Tide That Lifts All Tech Boats, it will not change the prevailing contractionary macroeconomic conditions tech is facing today.)

The operating environment has changed from a high tolerance for failure (where cheap capital and willing spenders accepted slipped dates and feature lag) to a very low - if not zero - tolerance for failure (fiscal discipline is in vogue again). Gone is the license to spend freely in pursuit of hoped-for market opportunities through tech products; tech must now operate within financial constraints - constraints for which there is very, very little room for negotiation. Everybody’s gotta hit their numbers.

While preventing and containing mistakes staves off shocks to the income statement, it doesn’t fundamentally reduce costs. Years of payroll bloat - aggressive hiring, aggressive comp packages to attract and retain people - make labor the biggest cost in tech. Wanton labor force expansion during the COVID years was done without a lot of discipline: filling the role was more important than hiring the right person. A substantial number of hires were “snowflakes”: people staffed in a role for an intangible reason, whether potential to grow into the role, possession of skills or knowledge adjacent to the position into which they were staffed, or appreciation for years of service - essentially, something other than demonstrable skill derived from direct experience.

That means getting labor costs under control isn’t a simple matter of formulaic RIFs and opportunistic reductions with a minor reshuffling of the rank and file. Tech companies must first commoditize roles: define the explicit skills and capabilities an employee must demonstrate, revise the performance management system to capture and measure structured evaluation data, and stand up a library of digital training to measure employee skill development and certification specifically in competencies deemed relevant to the company’s products and services. Standardizing roles, skills and training makes the individual laborer interchangeable. Every employee can be assessed uniformly against a cohort, where the retention calculus is relative performance versus salary.

This takes all uncertainty out of future restructuring decisions - and as long as tech firms lurch between episodic cost cutting and bursts of growth, there will in fact be future restructuring decisions. For management, labor standardization eliminates any confusion about who to cut. The decision is simply whether to cut (based on sales forecasts) and when to cut (systematically or opportunistically, to boost FCF for the coming quarter).

Of course, companies can reduce their labor force through natural attrition. Other labor policy changes - return to office mandates, contraction of fringe benefits, reduction of job promotions, suspension of bonuses and comp freezes - encourage more people to exit voluntarily. It’s cheaper to let somebody self-select out than it is to lay them off. FCF is a math problem.

These are clinical steps intended to improve cash generation so that a company can survive. While the company may survive, these steps fundamentally alter the social contract between labor and management in tech.

* * *

A lot of companies in tech used what they called “the war for talent” as marketing fodder, in both sales and recruiting. You should buy Big Consulting because it employs engineers a non-tech firm will never be able to employ on its own. Come to work for Big Software and get the brand on your resume. Every war has profiteers.

Small and mid sized tech has always had to be clever in how it competes for labor. Because it couldn’t compete with outsized comp packages, small tech relied on intangible factors, such as flexible role definitions and strong, unique corporate cultures.

The former meant the employee would not only learn more because they had the opportunity to do more; they weren’t constrained by a RACI and an operating model that rewarded the employee for “staying in their lane” over doing what was necessary, best, right. This was a boon to the small tech employer, too, because one employee was doing the job of 2, 3, or even 8 employees at any other company, but not for 2x, 3x or 8x the comp.

The latter meant that by aggressively incubating well defined corporate norms and values, a smaller tech firm could position itself as a “destination employer” and compete for the strata of people it most wanted to hire. That might be a culture that values, say, engineering over sales. That might be a purpose-driven business prioritizing social imperatives over commercial imperatives. Culture was a material differentiator, and it’s fair to say that these values had some footing in reality: tech firms on the smaller end of the business scale had to mostly live their values or they wouldn’t retain their staff for very long given the increasing competition for tech labor. There was some “there” there to the culture.

Small and mid sized tech carved out a niche, but even these firms caught the growth bug. Where growth was indexed to labor, small and mid sized tech also went on a hiring spree. Again, where growth was the imperative, hiring lacked discipline. Bloated payrolls meant new people needed a corporate home; shortly after a hiring binge, the company is staffing twenty people to do the work of ten. In comes the RACI, out goes the self-organizing team.

The erosion of culture - the move away from execution representative of core values - was accelerated (if not initiated) by undisciplined hiring twined with the natural attrition of long-timers during the go-go years for tech labor. Like it or not, the pursuit of growth is a factor in redefining culture: even if a growth agenda by itself injects no definitive identity, it does have a dilutive effect on established identity. To wit: new employees did not find the strong values-based culture described during the interview process, and long-time employees saw their values-based practices marginalized, because teams were staffed with too many new hires who had no first-hand experience of the cultural touch points to lean on. Culture devolves into a free-for-all that favors the newbie with the strongest will. The culture is dead, long live the growth agenda.

As mentioned above, the music stopped, and the company has to prioritize FCF. Prioritized over growth, because growth is somewhere between non-existent and just keeping pace with inflation. Prioritized over culture, because the culture prioritized people, and people are now a commodity.

Restated, labor gets the short end of the stick.

Employees recruited in more recent years from outside the ranks of tech were given the expectation that “we’ll teach you what you need to know; we want you to join because we value what you bring to the table.” That is no longer applicable. Runway for individual growth is very short in zero-tolerance-for-failure operating conditions. Job preservation, at least in the short term for this cohort, comes from completing corporate training and acquiring professional certifications. Training through community or experience is not in the cards.

For all employees, it means that the intangibles a person brings cannot be codified into a quarterly performance analysis, and are therefore completely irrelevant. The “X factor” a person has that makes their teams better, the instinct a person has for finding and developing small market opportunities, the open source product with the global community of users this person has curated for years: none of these are part of the labor retention calculus. It isn’t even that your first bad quarterly performance will be your last, it’s that your first neutral quarterly performance could very well be your last. The ability to perform competently in multiple roles, the extra-curriculars, the self-directed enrichment, the ex-company leadership - all these things make no matter. The calculus is what you got paid versus how you performed on objective criteria relative to your cohort. Nothing more. That automated testing conference for practitioners you co-organized sounds really interesting, but it doesn’t align with any of the certifications you should have earned through the commoditized training HR stood up.

Long time employees - those who joined years ago because they had found their “destination employer” - hope that “restructuring” means a “return to core values”. After all, those core values - strongly held, strongly practiced - are what made the company competitive in a crowded tech landscape in the first place. Unfortunately, restructuring does not mean a return to core values. Restructuring to squeeze out more free cash flow means bloodletting of the most expensive labor; longer tenured employees will be among the most expensive if only because of salary bumps during the heady years to keep them from jumping ship.

Here is where the change in the social contract is perhaps the most blatant. In the “destination employer” years, the employee invested in the community and its values, and the employer rewarded the loyalty of its employees through things like runway for growth (stretch roles and sponsored work innovation) and tolerance for error (valuing demonstrable learning over perfection in execution). No longer.

“Culture eats strategy for breakfast” is relevant when labor has the upper hand on management because culture is a social phenomenon: it is in the heads and hearts of humans. When labor is difficult to replace, management is hostage to labor, and culture prevails. But jettisoning the people also jettisons the culture. Deliberately severing the keepers of culture is not a concession that a company can no longer afford to operate by its once-strongly-held values and norms; it is an explicit rejection of those values and norms. By extension, that is tantamount to a professional assault on the people pursuing excellence through those values and norms.

Tech firms large and small once lured labor by values: who you are not what you know makes us a better community; how we work yields outcomes that are better value for our customers; how we live what we believe makes us better global citizens. Today, those same tech firms can’t get rid of the labor that lives those values fast enough.

Monday, July 31, 2023

Resistance

Organizational change, whether digital transformation or simple process improvement, spawns resistance; this is a natural human reaction. Middle managers are the agents of change, the people through whom change is operationalized. The larger the organization, the larger the ranks of middle management. It has become commonplace among management consultants to target middle management as the cradle of resistance to change. The popular term is “the frozen middle”.

There is no single definition of what a frozen middle is, and in fact there is quite a lot of variation among those definitions. Depending on the source, the frozen middle is:

  • an entrenched bureaucracy of post-technical people with no marketable skills who only engage in bossing, negotiating, and manipulating organizational politics - change is impossible with the middle managers in situ today
  • an incentives and / or skills deficiency among middle managers - middle managers can be effective change agents, but their management techniques are out of date and their compensation and performance targets are out of alignment with transformation goals
  • a corporate culture problem - it’s safer for middle managers to do nothing than to take risks, so working groups of middle managers respond to change with “why this can’t be done” rather than “how we can do this”
  • not a middle management problem at all, but a leadership problem: poor communication, unrealistic timelines, thin plans - any resistance to change is a direct result of executive action, not middle management

The frozen middle is one of these, or several of these, or just to cover all the bases, a little bit of each. Of course, in any given enterprise they’re all true to one extent or another.

Plenty of people have spent plenty of photons on this subject, specifically articulating various techniques for (how clever) “thawing” the frozen middle. Suggestions like “upskilling”, “empowerment”, “champion influencers of change”, “communicate constantly”, and “align incentives” are all great, if more than a little bit naive. Their collective shortcoming is that they deal with the frozen middle as a problem of the mechanics of change. They ignore the organizational dynamics that create resistance to change among middle management in the first place.

Sometimes resistance is a top-down social phenomenon. Consider what happens when an executive management team is grafted onto an organization. That transplanted executive team has an agenda to change, to modernize, to shake up a sleepy business and make it into an industry leader. It isn’t difficult to see this creates tensions between newcomers and long-timers, who see one another as interlopers and underperformers. Nor is it difficult to see how this quickly spirals out of control: executive management that is out of touch with ground truths; middle management that fights the wrong battles. No amount of “upskilling” and “communication” with a side order of “empowerment” is going to fix a dystopian social dynamic like this.

It is interesting that the advice of the management consultant is to align middle management’s performance metrics and compensation with achievement of the to-be state goals. What the consultants never draw attention to is executive management receiving outsized compensation for as-is state performance; compensation isn’t deferred until the to-be state goals are demonstrably realized. Plenty of management consultants admonish executives for not “leading by example”; I’ve yet to read any member of the chattering classes admonish executives to be “compensated by example”.

There are also bottom-up organizational dynamics at work. “Change fatigue” - apathy resulting from a constant barrage of corporate change initiatives - is treated as a problem created by management that management can solve through listening, engagement, patience and adjustments to plans. “Change skepticism” - doubts expressed by the rank-and-file - is treated as an attitude problem among the rank-and-file that is best dealt with by management through co-opting or crowding out the space for it. That is unfortunate, because it ignores the fact that change skepticism is a practical response: the long-timers have seen the change programs come and seen the change programs go. The latest change program is just another that, if history is any guide, isn’t going to be any different than the last. Or the dozen that came and went before the last.

The problematic bottom up dynamic to be concerned with isn’t skepticism, but passivity. The leader stands in front of a town hall and announces a program of change. Perhaps 25% will say, this is the best thing we’ve ever done. Perhaps another 25% will say, this is the worst thing we’ve ever done. The rest - 50% plus - will ask, “how can I not do this and still get paid?” The skeptic takes the time and trouble to voice their doubts; management can meet them somewhere specific. It is the passengers - the ones who don’t speak up - who represent the threat to change. The management consultants don’t have a lot to say on this subject either, perhaps because there is no clever platitude to cure the apathy that forms what amounts to a frozen foundation.

Is middle management a source of friction in organizational change? Yes, of course it can be. But before addressing that friction as a mechanical problem, think first about the social dynamics that create it. Start with those.

Tuesday, February 28, 2023

Shadow Work

Last month, Rana Foroohar argued in the FT that worker productivity is declining in no small part because of shadow work. Shadow work is unpaid work done in an economy. Historically, this referred to things like parenting and cleaning the house. The definition has expanded in recent years to include tasks that used to be done by other people that most of us now do for ourselves, largely through self-service technology, like banking and travel booking. There are no objective measures of how much shadow work there is in an economy, but the allegation in the FT article is that it is on the rise, largely because of all the fixing and correcting that the individual now must do on their own behalf.

There is a lot of truth to this. Some of the incremental shadow work is trivial, such as having to update profile information when an employer changes travel app provider. Some is tedious, such as patiently working through the unhelpful layers of primitive chat bots to finally reach a knowledge worker. Some is time consuming, such as rebooking travel when caught in an irregular operations (irrops) situation. And some is truly absurd, such as spending months navigating insurance companies and health care providers to get a medical claim paid. Although customer self-service flatters a service provider’s income statement, it wreaks havoc on the customer’s productivity and personal time.

But it is unfair to say that automated customer service has been a boon to business and a burden to the customer. Banking was more laborious and inconvenient for the customer when it could only be performed at a branch on the bank’s time. And it could take several rounds - and days - to get every last detail of one’s travel itinerary right when booking a business trip through a travel agent. Self-service has made things not just better, but far less labor intensive for the ultimate customer.

It is more accurate to say that any increase in shadow work borne by the customer is less a phenomenon of the shift to customer self-service than an exposure of provider shortcomings that a large staff of knowledgeable customer service agents was able to gloss over.

First, a lot of companies force their customers to do business with them in the way the company operates, not in the way the customer prefers to do business. A retailer that requires its customers to place an order with a specific location, rather than algorithmically routing the order for optimal fulfillment to the customer - e.g., for best availability, shortest time to arrival, lowest cost of transportation - forces the customer to navigate the company’s complexity in order to do business. Companies do this kind of thing all the time because they simply can’t imagine any other way of working.

Second, edge cases defy automation. Businesses with exposure to a lot of edge cases or an intolerance to them will shift burden to customers when they arise. The travel industry is highly vulnerable to weather and suffers greatly with extreme weather events. Airline apps have come a long way since they made their debut 15 years ago, but when weather disrupts air travel, the queues at customer service desks and phone lines get congested because there is a limit to the solutions that can be offered through an app.

Third, even the simplest of businesses in the most routine of industries frequently manage customer service as a cost to be avoided, if not outright blocked. A call center that is managed to minimize average call time as opposed to time to resolution is incentivized to direct the caller somewhere else or deflect them entirely rather than resolve the customer problem. No amount of self-service technology will compensate for a company ethos that treats the customer as the problem.

There is no doubt that shadow work has increased, but that increase has less to do with the proliferation of customer self-service and more to do with the limitations of its implementation and the provider’s attitude toward their customer.

Perhaps more important is what a company loses when it reduces the customer service it provides through its people: the ability to respond immediately and humanely to a customer in need, and the aggregate customer empathy that comes from regular contact. Losing these makes it far more difficult for a company to nurture its next generation of knowledge workers to troubleshoot and resolve increasingly complex customer service situations.

But of greater concern is that as useful as automation is from a convenience and scale perspective, its proliferation drives home the point that customers are increasingly something to be harvested, not people with whom to establish relationships. Society loses something when services are proctored at machine rather than human scale. In this light, the erosion of individual productivity is relatively minor.

Wednesday, September 30, 2020

All In

Immediately after World War II, Coca-Cola had 60% of the soft drinks market in the United States. By the early 1980s, it had about 25%. Not only had Coca-Cola been outmaneuvered in product marketing (primarily by Pepsi); concerns over sugar and calories had driven consumers to diet soft drinks and to refreshment options outside of the soft drink category. The fear in the executive ranks was evidently so great that Coca-Cola felt it necessary to change its formula. Coca-Cola did the market research and found a formula that fizzy-drink consumers preferred over both old Coke and Pepsi. Coca-Cola launched the new product as an in-place replacement for its flagship product in 1985.

New Coke flopped.

Within weeks, consumer blowback was comprehensive and fierce (no small achievement when media was still analogue). Turns out there is such a thing as bad publicity if it causes people to stop buying your product. Sales stalled. Before three months were out, the old Coca-Cola formula was back on the shelves.

Why did Coca-Cola bet the franchise? The data pointed to an impending crisis of being an American institution that would soon be playing second fiddle to a perceived upstart (ironically an upstart founded in the 19th century). Modern marketing was coming into its own, and Pepsi sought to create a stigma among young people who would choose Coke by using a slogan and imagery depicting their cohort as "the Pepsi generation." Coca-Cola engineered a replacement product that consumers rated superior to both the classic Coke product and Pepsi. The data didn't just indicate New Coke was a better Coke than Coke, the data indicated New Coke was a better Pepsi than Pepsi. New Coke appeared to be The Best Soft Drink Ever.

Still, it flopped.

There are plenty of analyses laying blame for what happened. One school of thought is that the testing parameters were flawed: the sweeter taste of New Coke didn't pair as well with food as classic Coke, nor was a full can of the sweeter product as satisfying as one sip. Another is sociological: people had a greater emotional attachment to the product that ran deeper than anybody realized. Most of it is probably right, or at least contains elements of truth. There's no need to rehash any of that here.

New Coke isn't the only New thing that flopped in spectacular fashion. IBM had launched the trademarked Personal Computer in 1981 using an open architecture of widely available components from third-party sources such as Intel, and the fledgling Disk Operating System from an unknown firm in Seattle called Microsoft. Through sheer brand strength, IBM established dominance almost immediately in the then-fragmented market for microcomputers. But the open hardware architecture and open-ended software licensing opened the door for inexpensive IBM PC "clones": less expensive, equally (and sometimes more) advanced, and equal (if not superior) quality versions of the same product. IBM created the standard, but others executed it just as well and evolved it more aggressively. In 1987, IBM introduced a new product, the Personal System/2. It used a proprietary hardware architecture incompatible with its predecessor PC products, and a new operating system (OS/2) that was only partially compatible with DOS - a product strategy not too dissimilar to what IBM did in the 1960s with the System/360 mainframe. IBM rolled the dice that it could achieve not just market primacy, but market dominance. It engineered a superior product. However, OS/2 simply never caught on. And while the hardware proved initially popular with corporate buyers, the competitive backlash was fierce. In a few short years, IBM lost its status as the industry leader in personal computers, had hundreds of millions of dollars of unsold PS/2 inventory, laid off thousands of employees, and was forced to compete in the personal computer market on standards now set by competitors.

These are all-in bets taken and lost, two examples of big bets that resulted in big routs. There are also the all-in bets not taken and lost. Kodak invented digital camera technology but was slow to commercialize it. The threat of lost cash flows from its captive film distribution and processing operations in major drug store chains (pursuit of digital photography by a film company meant loss of lucrative revenue to film distributors and processors) was sufficient to cow Kodak executives into not betting the business. Polaroid was similarly an early leader in digital cameras, but failed to capitalize on its lead. Again, there have been plenty of hand-wringing analyses as to why: Polaroid had a bias for chemistry over physics; both firms were beholden to cash flows tied to film sales to distributors with a lot of power. While each firm recognized the future was digital, neither could fathom how rapidly consumers would abandon printed pictures for digital.

We see similar bet-the-business strategies today. In the early 2000s, Navistar bet on a diesel engine emission technology - EGR, or exhaust gas recirculation - that was contrary to what the rest of the industry was adopting - SCR, or selective catalytic reduction. It didn't pan out, resulting in market share erosion that was both substantial and rapid, as well as payouts of hundreds of millions of dollars in warranty claims. Today, GM is betting its future on electric vehicles: the WSJ recently reported that quite a few internal-combustion-based products were cut from the R&D budget, while no EV products were.

All in.

The question isn't "was it worth betting the business?" The question is, "how do you know when you need to bet the business?"

There are no easy answers.

First, while it is easy to understand what happened after the fact, it is difficult to know what alternative would have succeeded. It isn't clear that either Kodak or Polaroid had the balance sheet strength to withstand a massive erosion in cash flows while flopping about trying to find a new digital revenue model. The digital photography hardware market was fiercely competitive and services weren't much of a thing initially. Remember when client/server software companies like Adobe and SAP transitioned to cloud? Revenues tanked and it took a few years for subscription volume to level up. It was, arguably, easier for digital incumbents to make a digital transition in the early 2010s than it was for an analogue incumbent to make the same move in the late 1990s. Both firms would have been forced to sacrifice cash flows from film (and Kodak in film processing) in pursuit of an uncertain future. As the 1990s business strategy sage M. Tyson observed, "Everyone has a plan until they get punched in the mouth."

To succeed in the photography space, you would have needed to anticipate that the future of photography was as an adjunct to a mobile computing device, twinned with as-yet unimagined social media services. Nobody had that foresight. Hypothetically, Kodak or Polaroid execs could (and perhaps even did) anticipate sweeping changes in a digital future, but not one that anticipated the meteoric rise in bandwidth, edge computing capabilities, AI and related technologies. A "digital first" strategy in 1997 would have been short-term right, only to have been proven intermediate- and long-term wrong without a pivot to services such as image management, and a pivot a few short years after that to AI. It's difficult to believe that a chemistry company could have successfully muddled through a physics, mathematics and software problem space. It's even more difficult to imagine the CEO of that company successfully mollifying investors again and again when asking for more capital, each time because the firm is abandoning the market it just created - because that market is doomed - to go after the next one, three times over the span of a decade. In theory, they could have found a CEO who was equal parts Marie Curie, Erwin Schrödinger, Isaac Newton, Thomas Watson, Jr., Kenneth Chenault, and Ralph Harrison. In practice, that's a real easy short position to take.

Second, it's all well and good when the threat is staring you in the face or when you have the wisdom of hindsight, but it's difficult to assess a threat, let alone know what the threats and consequences really are, and are not. A few years ago, a company I was working with started to experience revenue erosion at the boundaries of their business, with small start-up firms snatching away business with faster performance and lower costs. It was a decades-old resource-intensive data processing function, supplemented with labor-intensive administration and even more labor-intensive exception handling. Despite becoming error-prone and slow, they had a dominant market position that was, to a certain extent, protected by exclusive client contracts. While both the software architecture and speed prevented them from entering adjacent markets with their core product, the business was a cash cow and financed both dividends and periodic M&A. They suffered from an operational bias that impaired their ability to imagine the business any differently than it was today, a lack of ambition to organically pursue adjacent markets, and a lack of belief that they faced an existential threat from competitors they saw as little more than garage-band operators. Yet both the opportunities and the threats looked very plausible to one C-level exec, to the point that he believed failure to act quickly would mean significant and rapid revenue erosion, perhaps resulting in there not being a business at all in a few years. Unfortunately, all of it was unprovable, and by the time it would be known whether he was prophet or crazy street preacher, it would be too late to do anything about it: remaining (depleted) cash flows would be pledged to debt service, inhibiting any re-invention of the business.

Third, even the things you think you can take for granted that portend future change aren't necessarily bankable on your timeline. Some governments have already passed legislation requiring that all new cars sold be electric (or perhaps more accurately, not powered by petroleum) by a certain date. A lot of things have to be true for that to be viable. What if electricity generation capacity doesn't keep up, or sufficient lithium isn't mined to make enough batteries? Or what if hydrocarbon prices remain depressed and emissions controls improve for internal combustion engines? Or what if foreign manufacturers make more desirable and more affordable electric vehicles than domestic ones can? If any of these were to happen, it would increase the pressure on legislatures to postpone the date for full electrification. For a business, going all-in too late will result in market banishment, but too early could result in competitive disadvantage (especially if a company creates the "New Coke" of automobiles... or worse still, The Homer). These threats create uncertainty in allocating R&D spend, risk of sales cannibalization of new products by old, and sustained costs for carrying both future and legacy lines for an extended period of time.

Is it possible to be balance-sheet flexible, brand adaptable, and operationally lean and agile, so that no bet need be a bet of the business itself, but simply one among near-infinite options? A leader can be ready for as many possibilities as that person can imagine. Unfortunately, that readiness goes only as far as creditors and investors will extend the confidence, customers will give credibility to stretch the brand, and employees and suppliers can adapt (and re-adapt). To the stars and beyond, but if we're honest with ourselves we'll be lucky if we reach the troposphere.

Luck plays a bigger role than anybody wants to acknowledge. The bigger the bet, the more likely the outcome will be a function of being lucky than being smart. The curious thing about New Coke is that it might have been the Hail Mary pass that arrested the decline of Coca-Cola. Taking away the old product - that is, completely denying anybody access to it - created a sense of catastrophic loss among consumers. Coca-Cola sales rebounded after its reintroduction. In the end, it proved clever to hold the flagship product hostage. Analyst and media reaction was cynical at the time, suggesting it was all just a ploy. Then-CEO Roberto Goizueta responded aptly, saying "we're not that smart, and we're not that dumb."

And that right there is applied business strategy, summed up in nine words.

Saturday, February 29, 2020

To Transform, Trade Ego for Humility

Ten years ago, when the mobile handset wars were in full swing, I wrote a blog post analyzing the differences among the leaders in the space. Each had come to prominence in the handset market differently: Nokia was a mobile telephony company, Blackberry a mobile email company, Apple a personal technology company, Google an internet search and advertising company.

With the benefit of hindsight, we know how it played out. Nokia hired a manager from Microsoft and wed its handset business to the one alternative mobile operating system to iOS that wasn't made by Google. RIM initially doubled down on its core product, but eventually scotched its proprietary OS in favor of Android. Neither strategy paid off. Nokia exited the handset business in 2013. RIM exited the handset business in 2016. Both companies burned through billions of dollars of investor capital on losing strategies in the handset market.

There has been evidence published over the years to suggest that the self-identity of the losing firms worked against them: interactions via voice call and email claimed a shrinking share of time spent on mobile devices, overtaken by emerging interactions such as social media. By providing a platform for independent software development, an entirely new category of software - the mobile app - was created. iOS and Android were well positioned to create and exploit the change in human interaction with technology. Nokia and Blackberry were not.

* * *

Earlier this week, Wolfgang Münchau posited that the European Union is at a cultural disadvantage to the United States and China in the field of Artificial Intelligence. Instead of finding ways to promote AI through government and private sector development and become a leader in AI technology, the EU seems intent on defending itself from AI through regulation. For that to be effective, as Mr. Münchau writes, technology would have to stop evolving. Since regulators tend not to be able to imagine a market differently than it is today, new AI developments will be able to skirt any regulation when they enter the market. It seems to be a Maginot Line of defense.

When it comes to technology, Mr. Münchau writes that the European mindset is still very much rooted in the analogue age, despite the fact that the digital age began well back in the previous century. This is somewhere on a spectrum of a lack of imagination to outright denial.

That raises the question: why does this happen? In the face of mounting evidence, why do people get their ostrich on and bury their heads in the sand? Why does a company double down instead of facing its new competitive landscape? Why does the leadership of a socio-economic community of nearly 450 million people simply check out?

Mr. Münchau points out three phenomena behind cultural barriers to adaptability.

The dominant sentiment in modern-day Europe is anxiety. Its defining need is protection. And the defining feature of its collective mindset is complacency. In the European Commission’s white paper on artificial intelligence all three come together in an almost comical manner: the fear of a high-tech digital future; the need to protect oneself against it; and the complacency inherent in the belief that regulation is the solution.

What stands in the way of change? Fear. Resistance. Laziness.

* * *

Some executive at some company believes the company needs to change in response to some existential threat. That which got it here will not take it forward. Worse still, its own success is stacked against it. What we measure, how we go to market, what we make, how we make, all of that and more needs a gigantic re-think. Unleash the dogs of transformation.

In any business transformation, there is re-imagining and there is co-option. Wedding change to your current worldview - your go-to-market, your product offering, your ways of working - impairs your outcomes. At best, it will make your current state a little less bad. Being less bad might satiate your most loyal of customers, it might improve your production processes around the margins, but it won't yield a transformative outcome.

Transformation that overcomes fear, resistance, and laziness requires doing away with corporate ego. "As a company, we are already pretty good at [x]." Well, good for you! Being good in the way you are good might have made you best in class for the industry you think you're in. What if instead we took the position, "we're not very good at [x]?" General Electric's industrials businesses grew in the 1990s once they inverted their thinking on market share: instead of insisting on being the market share leader, GE redefined those markets so that no business unit had more than 10% market share. That meant looking for adjacent markets, supplemental services, things like that. It's hard to grow when you've boxed yourself in to a narrow definition of the markets you serve; it's easier to grow when you give yourself a bigger target market. That strategy worked for GE in the 1990s.

Re-imagining requires more than just different thinking. It requires humility and a willingness to learn. From everybody. The firm's capital mix (debt stifles change, equity does not), capital allocation processes (waterfall gatekeeping stifles adaptability), how it sees the products it makes (software and data are more lucrative than hardware), how it operates (deploy many times a day), must all change. That means giving up allegiance to a lot of things we accept as truth. This is not easy: creating a learning organization embraced by investors and labor alike is very difficult to do. But if you're truly transforming, this is the price of admission if you're going to overcome resistance and laziness.

What about fear? Those who truly understand the need to transform will face their deepest fear: can we compete?

In the span of just a couple of years, two deep-pocketed firms with healthy growth trajectories introduced mobile handset products and services that far eclipsed the functionality of incumbent offerings. The executive who understood the sea change taking place would not concoct a strategy to fight the battle on their terms. That executive would try to understand what the terms of competition were going to become, and ask if the firm had the balance sheet to scale up to compete on terms set by others.

Mr. Münchau points out that the same phenomenon may be repeating itself among Europe's automakers. They got a late start developing electric vehicle technology. With governments mandating electrification of auto fleets, the threat is not only real, it has a specific future date on it. Hence there has been increased consolidation (proposed and real) in the automotive industry in the past decade: an automaker needs scale to develop EV technologies to compete. Those automakers that have consolidated are accepting at least some of the reality they face: automakers as national champions that create a lot of high-paying industrial jobs struck a balance among public policy, societal interests, and corporate interests for many decades. The change to EV technology is challenging the sustainability of that policy. As if the enormity of fighting outdated public policy weren't enough, carmakers moving from internal combustion to electricity also face the transition from a hardware to more of a software mindset. The ways of working are radically different.

The firm that truly needs to transform doesn't have the luxury of doubling down on what it knows. It must be willing to give up on long-held beliefs, change its course of action when the data tells it that it must, and face the future with a confidence borne of facts and not conjecture. It must trade ego for humility.

Tuesday, April 30, 2019

Excellence

“A fishing crew may be organised and understood as a purely technical and economic means to a productive end, whose aim is only or overridingly to satisfy as profitably as possible some market’s demand for fish. Just as those managing its organisation aim at a high level of profits, so also the individual crew members aim at a high level of reward. Not only the skills, but also the qualities of character valued by those who manage the organisation, will be those well designed to achieve a high level of profitability. And each individual at work as a member of such a fishing crew will value those qualities of character in her or himself or in others which are apt to produce a high level of reward for her or himself. When however the level of reward is insufficiently high, then the individual whose motivations and values are of this kind will have from her or his own point of view the best of reasons for leaving this particular crew or even taking to another trade. And when the level of profitability is insufficiently high, relative to comparative returns on investment elsewhere, management will from its point of view have no good reason not to invest their money elsewhere.
“Consider by contrast a crew whose members may well have initially joined for the sake of their wage or other share of the catch, but who have acquired from the rest of the crew an understanding of and devotion to excellence in fishing and to excellence in playing one’s part as a member of such a crew. Excellence of the requisite kind is a matter of skills and qualities of character required both for the fishing and for achievement of the goods of the common life of such a crew. The dependence of each member on the qualities of character and skills of others will be accompanied by a recognition that from time to time one’s own life will be in danger and that whether one drowns or not may depend upon someone else’s courage. And the consequent concern of each member of the crew for the others, if it is to have the stamp of genuine concern, will characteristically have to extend to those for whom those others care: the members of their immediate families." (MacIntyre, 1994, pp.284-285)
Now MacIntyre is a moral philosopher, and there is no reason why he should ask the question which would concern a business economist like me: which of these crews catches more fish?
-- John Kay, Ethical Finance

In the early 1980s, Tom Peters and Robert Waterman co-authored a book called "In Search of Excellence." At the time, US products and services were generally regarded as inferior in quality to those from other countries. It was bad enough that American manufactured goods and consumer services were so poor; what really made it annoying were the tens of thousands of corporate bystanders who were unwilling or just flat-out helpless to do anything about it. That inaction in corporate America was brought home with a video of a Japanese manufacturing line worker electively inspecting the windshield wipers on finished cars on his way out of the plant after his shift was over. Needless to say, Peters and Waterman had struck a nerve.

Although In Search of Excellence presented an empirical case for a correlation between economic success and excellence in operations, the case studies didn't stand the test of time, as a number of the exemplar companies suffered problems within a few years of the book's publication. While those companies didn't necessarily underperform for operational reasons, their disappointing results did cast doubt on the causality. Still, that didn't invalidate the thesis: at most, the evidence of causality is fleeting owing to changing business conditions; at the very least, In Search of Excellence was the first to articulate some common-sense stuff.

There are other ways to look for the relationship between excellence and outcomes. Dr. John Kay has long argued that a company that is in the business of what it does will outperform a company that is in the business of making money. When Imperial Chemical Industries was in the business of "the responsible application of chemistry" and Boeing's purpose was to "eat, breathe and sleep the world of aeronautics," they were dominant firms in their industries. Each changed its focus to be in the business of financial outcomes. Once that happened, the latter lost its previously unassailable grip on commercial aviation while the former disappeared entirely as an independent company. When a business disconnects the value of what it provides from how it provides it, and connects it instead to the financial results it wants to achieve, it tends to fall well short of goals. Worse still, once that transition happens the business itself can erode very, very quickly.

In his book Good Strategy/Bad Strategy, Dr. Richard Rumelt lays out characteristics of bad strategy, including mistaking goals for strategy. "Bad strategy," writes Dr. Rumelt, "is long on goals and short on policy or action. It assumes that goals are all you need. It puts forward strategic objectives that are incoherent and, sometimes, totally impracticable." Good strategy consists of a diagnosis, a guiding policy, and coherent action. By definition, strategy is execution-focused; without execution, strategy is worthless. Financial results are not a strategy; they are byproducts of identifying gaps and challenges, of meeting and overcoming those, and of the hoped-for opportunities that lie beyond them actually materializing.

If strategy is execution, then what a company does and how it does it will drive the outcomes that it achieves. Yet there is a subtle differentiator in the "how" that Alasdair MacIntyre, the author of the introductory narrative, draws attention to. The first fishing crew has incentives to achieve performance targets. They have clear alignment of values, skills, and outcomes, on which they are supervised and measured. The second fishing crew has a social contract with one another. They also have clear alignment of values, skills, and outcomes, but that alignment happens through how they function as a community: they teach and reinforce values, skills, norms, and behaviors, and they extend their duty of care to the families of their members. Whereas each member of the former group is committed to themselves individually, each member of the latter group is committed to each other. Mr. MacIntyre makes clear that a commitment to excellence is a product of a community, not a group of individuals, no matter how like-minded they may be. He also makes clear that the value yielded by excellence is more than just financial remuneration for doing the job.

But does excellence matter?

Given how narrow measures of business value tend to be, the evidence tends to be on the negative more than on the affirmative. Excellence is like governance, in that it is more obvious when it is absent than when it is present. For example, you can argue, as Dr. Kay does, that American automakers, having never quite leveled up to the quality standards of their Japanese and German peers in sedans and small cars, suffered more acutely in the 2008 recession than perhaps they would have otherwise. American manufacturers couldn't compensate for a big fall-off in demand for SUVs and light trucks by selling more sedans and small cars, and as a result two of the three took a tour through bankruptcy, wiping out a lot of investors. In the story of the two fishing crews, Dr. Kay cites the case of the Prelude Corporation, which tried to bring modern management to the commercial fishing industry, only to misunderstand that excellence in operations matters more to success in fishing than do measurements and supervision. The Prelude Corporation, which had been a large lobster producer, went out of business within a few years of adopting those modern management techniques.

A company can create the appearance of financial mastery over operations through things like aggressive cost management and starving itself of investment. But as Kraft Heinz reminded investors recently, burning the furniture to keep warm only lasts for so long. If you believe financial goals are more likely to be met by the absence of bad (no accidents, no shutdowns, no product failures) and the presence of good (high net promoter score, high quality ratings, high uptime), then you can accept that excellence in what a company does and how it goes about it will be a contributing factor both to limiting impairments and to generating value. It is not as quantifiable as we would like it to be: the counterfactuals aren't provable and customer decision-making rationale is wooly. But it is enough to say that excellence - as a socio-cultural phenomenon - amplifies the upside and buffers the down.

Thursday, February 28, 2019

The Obsession with Metrics

In recent decades, what I call “metric fixation” has engulfed an ever-widening range of institutions: businesses, government, health care, K-12 education, colleges and universities, and nonprofit organizations. It comes with its own vocabulary and master terms. It affects the way that people talk and think about the world and how they act in it. And it is often profoundly wrongheaded and counterproductive.

Metric fixation consists of a set of interconnected beliefs. The first is that it is possible and desirable to replace judgment with numerical indicators of comparative performance based on standardized data. The second is that making such metrics public (transparency) assures that institutions are actually carrying out their purposes (accountability). Finally, there is the belief that people are best motivated by attaching rewards and penalties to their measured performance, rewards that are either monetary (pay for performance) or reputational (rankings).

-- Dr. Jerry Z. Muller, The Tyranny of Metrics

In his book Other People's Money: The Real Business of Finance, Dr. John Kay confirms the fallacy of the beliefs Dr. Muller lays out. The societal utility that banks once provided to the communities they served evaporated once bank managers familiar with their clients' character and intimate with their clients' needs were replaced by bank salespeople hawking financial products to clients on the basis of credit-scoring algorithms. An increase in published corporate financial data has led to a decrease in transparency, as the data published is beyond the comprehension of all but the most sophisticated consumers of it. Rewarding people for hitting financial targets created trading for trading's sake and runaway bonuses, culminating in an "I'll be gone, you'll be gone" culture that intensified the 2008 financial crisis. The misplaced beliefs pointed out by Dr. Muller lead to the undesirable outcomes described by Dr. Kay.

Dr. Muller goes on: "Not everything that is important is measurable, and much that is measurable is unimportant."

The first part of this statement raises the question: what is important in business? I posit that in most enterprises today, the outcomes are actually less important than the means. Let that sink in for a moment. Companies have lost a lot of tribal knowledge about their systems and even their core business. They need to first regain that knowledge to put themselves on a path to make their legacy systems (a) accessible, then (b) extensible, and eventually (c) malleable again. What most enterprises desperately need is learning and growth, not more software endpoints on the fringe of an impenetrable legacy hairball. What truly matters is being able to systemically achieve outcomes; achieving outcomes in isolated instances is not a proxy measure for the intrinsic ability to do so.

This sounds great, but it is easier said than sold: regaining lost knowledge and developing the ability to do different things with it may be important but it isn't really measurable in any meaningful manner. Because boards are financially - not operationally - focused, learning and growth will never be a board priority because it doesn't appear on any financial statement. Or at least, not in a positive way: "learning" is cost bloat on the income statement, while "knowledge" is not a leverageable asset on the balance sheet. Making learning and growth a long-lived business priority is a leadership challenge that goes beyond reporting "training hours" and "number of people trained". It takes persistent, compelling storytelling that relates how successful outcomes have been directly and indirectly enabled by the journey and application of organizational learning and growth - and therefore how these outcomes have become organizationally systemic, and not accidents of chance.

Proponents of metrics champion causality: that for an action to be important it must yield some sort of measurable result. I've written elsewhere that causality can be difficult to establish, particularly in complex business environments where constant and dramatic changes inside and outside a business will create volatility of an observable metric. But the causality argument can work against prudent decision-making. For example, suppose we expect to achieve a specific cost efficiency in several stages: we first make business process change supported by some crude technology, soon followed by major technology change to more comprehensively automate that process change, and along the way we look at the data for stubbornly high-maintenance customers to weed out. Common sense tells us these are all good things to do and that the combination of these events gives us operational lift. Unfortunately, a spreadsheet analysis would conclude that investing in the comprehensive tech is useless as the bulk of the cost efficiency will be captured by the manual changes supported by crappy technology; vulnerability to things like manual error is a thin justification for allocating capital when capital is held dear. The spreadsheet analysis also concludes that efficiency cannot come at the cost of topline growth; to the spreadsheet analysis, every dollar of revenue is the same, so we keep all customers, no matter how inefficient it may be to serve them. Ironic that the spreadsheet-based decision-making makes a company both more valuable and a worse business at the same time.
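The trap in that spreadsheet logic can be made concrete with a toy model. All of the figures below are invented for illustration; the point is only that an incremental payback calculation ignores the risk exposure that the crude-technology state leaves behind:

```python
# Hypothetical staged cost-efficiency program (all numbers invented).
# Stage 1: process change plus crude technology captures most of the savings.
# Stage 2: comprehensive automation adds a smaller increment of savings, but
# also removes the manual-error exposure the spreadsheet never prices in.

stage2_savings = 600_000       # incremental annual savings from full automation
stage2_investment = 3_000_000  # capital required for full automation

# The spreadsheet view: payback on the increment alone looks terrible.
payback_years = stage2_investment / stage2_savings
print(f"Stage 2 payback: {payback_years:.1f} years")  # looks like "not worth it"

# What the spreadsheet omits: the expected annual cost of manual error in the
# crude-technology state (probability times impact, both invented here).
p_error, error_cost = 0.15, 4_000_000
expected_error_cost = p_error * error_cost
payback_with_risk = stage2_investment / (stage2_savings + expected_error_cost)
print(f"Payback with error risk priced in: {payback_with_risk:.1f} years")
```

With risk priced in, the payback period halves; the "useless" investment is common sense after all. The numbers are arbitrary, but the asymmetry is not: the savings are bankable on a project budget, while the avoided error is invisible until it happens.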

The second part of Dr. Mueller's statement - "much that is measurable is unimportant" - points to the idiocy of many metrics. First, there are vanity metrics. I've relayed this case in a previous blog, but I once worked with an insurance company that used a nominally dollar-denominated coin called "business value" to measure the total impact of IT projects. In a single year they reported yielding more business value than the market capitalization of the firm. It's entirely possible they were woefully undervalued by markets, but it's more likely that their "business value" was as worthless as the PowerPoints they were pixelated on. Then there are the tenuous proxy metrics. A universal bank that had caught the Agile bug used the number of teams using Jira and Jenkins as the measure of how many teams had "gone Agile". Never mind what was actually going on in those teams, or the fact that nothing else - quality, throughput, customer satisfaction - was being effectively measured, let alone changing. The boss said we're going Agile, these are Agile tools, so once all of our people are using Agile tools we must be Agile, and the rest will follow.

Pursuing measurable value can create bigger problems if it is used to prioritize local optimization over systemic optimization. Consider a technology that accelerates systems integration and therefore reduces the cost of development of individual projects, but will very likely result in redundant integration activity across multiple project teams, a higher total cost of ownership across the portfolio of software assets, and a higher cost of change when a common system changes. A bankable lower cost today will win out over potentially higher costs tomorrow. The former can be measured - and managers rewarded - in the context of beating budgets for specific projects. The latter is absorbed into a business-as-usual budget, where the incremental inefficiency cannot be meaningfully disentangled from all the other incremental inefficiency piled into it. Urgent priorities always crowd out good lifestyle decisions; but metrics that justify the urgent are always more compelling than metrics that prioritize the important.

Dr. Mueller makes several recommendations for overcoming a metrics fixation, among them: "... [A]sking those with the tacit knowledge that comes from direct experience to provide suggestions about how to develop appropriate performance standards. [...] A system of measured performance will work to the extent that the people being measured believe in its worth." To do so recognizes that domain familiarity is necessary to determine the appropriate measurable outcomes. That implicitly means a definition of worth is not something that is going to come out of an abstract analysis of value. An ounce of context is worth a pound of measurements.

"With measurement as with everything else, recognizing limits is often the beginning of wisdom. Not all problems are soluble, and even fewer are soluble by metrics. It’s not true, as too many people now believe, that everything can be improved by measurement, or that everything that can be measured can be improved."

The better that we holistically understand our business and the more imaginative we are about our understanding of it, the better we intrinsically understand what it takes to make it a better business. In human systems, the whole is greater than the sum of the parts because of the intangible elements that humans bring. Consider baseball. The game of baseball has entered a stats-heavy era that has changed how people think about the game, but numbers, as Steven Kettmann put it, "eclipse a nuanced understanding of the game." Numbers provide insight and can help to re-think long held assumptions. But numbers don't tell the full story of the game. "Being alert to the twists and turns of a game is vital, since it’s the glimpses of character that emerge during these unlikely sequences that give baseball its essential flavor." Mr. Kettmann cites the example of a player's anticipation for how a play will develop as the deciding factor in a playoff game, and possibly a series. There is no spreadsheet for human decision-making in the moment.

It will take some time for the dust to settle, but results reported by Kraft Heinz last week have brought 3G Capital's management tactics - heavy cost-cutting deduced from heavy data analysis - into severe question. Those management tactics appeared to be successful for a number of years, until they weren't, and quite abruptly so. That sudden change in fortune has drawn attention to things critical to a business - asymmetric exposure to a single consumer market with subtly changing consumer tastes, and the irrelevance of the consumer-products marketing model in people's daily lives - that required more than data to perceive, let alone prepare for.

"Managers agree. 'I watch the game,' said Bruce Bochy, the manager of the World Series champion San Francisco Giants. 'You don’t see me writing down a lot of things or having to look down at stats. They’re important, but there are some things that you can’t see on a spreadsheet.'"

Metrics help us to better understand something that we've learned through experience and observation. But we can never appreciate something through numbers alone: we must have the wisdom of experience and observation. Metrics are sources of data and potentially sources of information, but they are not sources of wisdom.

Wednesday, January 31, 2018

You say you want a devolution...

"This isn't to say that alternative approaches to management are dead, or that they have no future. It is to say that in the absence of serious upheaval - the destabilization / disruption of established organizations, or the formation of countervailing power to the trends above - the alternatives to the Freds will thrive only on the margins (in pockets within organizations) and in the emerging (e.g., equity-funded tech start-up firms)."

-- Me, September 2013

I wrote that nearly 5 years ago. That previous summer I cracked the spine on some management books I had last read a quarter of a century earlier. When I first read those books in the 1980s, there certainly did seem to be a management revolution afoot. In the late 1970s, large industrial firms in the US were plagued with quality and performance problems, a rank-and-file that was fully aware of but apathetic to them, and management that was clueless about what to do. The epitome of the industrialized era in western nations turned out to be a company that would systemically disappoint both customer and investor alike. The long dominant organization-as-machine model was commonly perceived to have descended into intellectual bankruptcy. Out went command-and-control, in came employee empowerment and team autonomy. Meet the new boss!

Yet when I read these management books anew in the early 2010s, it was clear that the revolution had been stopped dead in its tracks somewhere along the way. Same as the old boss!

I have had reason to re-visit this recently, this time in the context of enterprise technology platforms. A company that develops recomposable, atomic components that can be consumed in a self-service manner by other developers can help to yield more coarse-grained solutions more quickly. Making those coarse-grained solutions recomposable components as well should enable an organization to create with both greater ambition and speed.

The objective of a platform is not to build both big and small things more quickly or to build more efficiently, but to create more effectively. A platform should allow for a greater number of experiments and more comprehensive feedback. Employees closest to an opportunity - current and potential consumers, technology, competitors, people and capital - are the ones best positioned to pursue that opportunity through exploring, learning, and adjusting. In an emerging area of business or tech, a local team muddling through stands a better chance of success than a distant management imposing its will over a market. In practice, muddling through experiments and feedback requires some degree of authority devolved to the team level, so that a team can decide and act for themselves.

The notion of authority devolved to the team level brings up the question of the autonomous organization yet again. Plus ça change...

The same old idea comes with the same old questions. What does an organization of autonomous teams look like? Can it work? How does it scale?

Before we ask, "can an organization of autonomous teams work?", we have to ask, "what does autonomy at team level mean?" Does it mean the authority and responsibility for what they do and when they get it done? Does it include design and architecture? Can they act on things that are nominally the responsibility of other teams? Do they get to pick and choose the people on their team and the providers they source people from? Do they have to secure their own funding? Who do they answer to? How are they measured?

It may mean all of these things, or it may mean just a few. Autonomy is in the eye of the beholder. To some, just having operational autonomy - authority over what, when and how a team fulfills delivery goals - is sufficient. To others, operational autonomy without owning the P&L and balance sheet - everything from capital to compensation levels - is merely responsibility without authority under the guise of self-direction.

Every firm that has gone down this path has come face to face with the same questions and challenges. Every firm of any scale that has achieved any degree of success has ended up with some hybrid implementation: some things are decentralized, some things centralized; some for a short period of time, others for a longer period of time, and some permanently. For example, we want teams to be responsible for the production operations of their creations, but we must first incubate an ops capability; once we are comfortable that ops has completed its gestation period it will be broken up and absorbed into the line teams. However, to alleviate administrative burden and to avoid violating labor laws we will have a centralized HR function, but we do want ideas to compete for funding, so we will have utility and risk capital allocation processes.

One question, many different answers, and answers that change at different points in time as circumstances require or allow.

When there are many different answers to a single question, it is the wrong question to ask. Looking for specificity where there is none will only sow seeds of confusion and ultimately doubt. And, while there is plenty to be learned from the experiences of others, self-reported testimony must be taken with a grain of salt, and the success of others comes with no guarantee of portability.

A better question to ask is, how convinced are you that team autonomy is a solution to whatever challenges you face? You need to be overwhelmingly convinced that it is, because you need a high tolerance for the ambiguity, uncertainty, and constant adjustments and experiments you will have to run to find and maintain the right balance - that is, construct the right hybrid - for your set of circumstances. You also have to be comfortable without a lot of hard evidence that it solves whatever you had hoped that it would. Even had you not devolved a greater degree of decision-making to the team level, that product might have been a success, that innovation might have emerged, those employees might still have joined your firm. Can't prove the counterfactual.

If you are convinced, and decide to add your name to the list of those that have elected to crack this nut, the operationalizing questions are much different. The one that you will ask again and again and again is the obvious: how do we strike the balance? What do we think that hybrid should be today? What could it possibly be? How do we go about figuring that out?

In addition, given the cyclical love-hate relationship with devolved authority, you must also ask: what makes it more likely, and what makes it less likely, that it will have staying power in your organization?

Wednesday, August 31, 2016

Method, Part II

Last month, we looked at method as a codification of experience borne of values and expressed through rules, guidelines, practices, policies, and so forth. This month, we'll take a look at the relationship between method and the things that influence it, and that it influences.

The principal framework is an article by Cliff Jacobson describing the change in method that impacted camping and outdoor activity starting in the 1950s, drawing comparisons to changes in method in software development. When we think of method in software, we generally think big: "Agile versus Waterfall". But there are more subtle changes that happen in method, specifically through the codification of skill into tools.

Plus ça change...

* * *

Method...

... and Values

"Environmental concerns? In those days, there were none. Not that we didn’t care, you understand. We just didn’t see anything wrong with cutting trees and restructuring the soil to suit our needs. Given the primitive equipment of the day, reshaping the land was the most logical way to make outdoor life bearable.
"In 1958 Calvin Rutstrum brought out his first book, The Way of the Wilderness. Suddenly, there was new philosophy afield. Calvin knew the days of trenched tents and bough beds were numbered. His writings challenged readers to think before they cut, to use an air mattress instead of a spruce bed. Wilderness camping and canoeing were in transition."
-- Cliff Jacobson

As values regarding nature changed from "tame the land" to "conservation", the method of camping had to change. Of course, it took a long time for the new values to settle in. And even once it did, it took a long time for practitioners to change what they did. Resistance to change is a powerful thing, and both practitioner and gear lag would have kept practitioners executing to an old value set in the field for a long, long time.

Values changed in software, too. When users of software were largely internal to a company, before software became such a high cost item for non-tech companies, and before software was weaponized, development moved at a much slower and more deliberate pace. Once the values changed, the method of software delivery was also pressured to change. The change in relationship between humanity and the outdoors is similar to the change in relationship between companies and their software.

... and Skills

But this narrative applies to both a wholesale change in method as well as to the transition from skills-centric to tool-centric method.

"I discovered the joys of camping at the age of 12 in a rustic Scout camp set deep in the Michigan woods. It was 1952, just before the dawn of nylon tents and synthetic clothes. Aluminum canoes were hot off the Grumman forms, though I’d never seen one. Deep down, I believed they’d never replace the glorious wood-ribbed Old Towns and Thompsons."

Early backpackers had to make do with bulky tarps, fashioning poles and tent pegs from branches - sometimes, even sapling trees - in their campsites. The emphasis among the early outdoorsmen was on the skill of adapting the environment to human survival, and achieving Spartan levels of comfort was a symbol of mastery. Being able to fashion poles and pegs from tree limbs was important in the 1940s as tents didn't necessarily come with them. This was not only destructive, it became unnecessary with the evolution of lightweight and portable aluminum poles and stakes. In a relatively short period of time, being good at pioneering became, at best, only useful in an emergency (you need to fashion a tent peg because you discover you've lost some aluminum ones).

Building Agile trackers in spreadsheets and crafting them anew with each project was somewhat akin to fashioning new tent pegs every time you go camping. Creating a new tracker with each project was a waste of money, and having 5 different teams with 5 different trackers was confusing. The advent of cheap commercial trackers made this unnecessary. Still a good skill to have in an emergency - a project tracker so badly polluted with low priority Stories and tasks is an impediment when you want to make a clean and quick start - but fashioning a tracker is no longer itself a core skill.

... and Tools

"The emphasis had shifted from skills to things."

Early tools supporting a method are crude, often hand made and narrow in their usefulness, and several tend to spring up at the same time. The emphasis is on skills.

But with ever increasing popularity of the activity (trips to the Boundary Waters, or Agile software development) comes the tools. Skills take time to learn and master. Tools make the activity at hand easier to perform, and subsequently more accessible and more enjoyable to more people because they're more successful at it. Canoes are made of strong yet lightweight materials so they're more tolerant to misuse while simultaneously easier to portage. Sleeping bags are made of synthetic materials that are water resistant (unlike down) so that somebody who does a sloppy job at packing a canoe pack won't suffer if the bilge water soaks the contents of the bag.

Of course, tools can be a source of efficiency or a source of trouble. A hatchet makes it easy to build safe, small fires out of short cut sticks. But a hatchet can cause grave injury to somebody if they don't know the proper method for chopping wood with it. Nobody is likely to suffer bodily harm from an over-engineered build script (no matter how many felonious thoughts cross the mind of other people in the team) but an overloaded, single-stage build that reduces build frequency and that fails frequently will cause more harm than good.

"Today, high-tech gear and high-powered salesmanship have become a substitute for rock-solid outdoor skills."

As the complexities of a method get codified into the gear, it becomes difficult to separate one from the other. The tools become a proxy for the method because the state-of-practice matures in conjunction with improvements in science (materials or software) and affordability. Today, we create sophisticated, multi-stage pipelines that instantiate their deployment environments and deploy on every commit. 15 years ago, it was amazing to have a build run every few minutes that would both compile source code and run tests. We can't imagine forging our own crude tools to do basic tasks, or even why we'd want to do it.
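The multi-stage pipeline described above can be sketched as a toy model: a commit is promoted through progressively more demanding stages, and the first failure halts promotion. The stage names and checks here are invented for illustration, not a real CI tool's API:

```python
# Toy sketch of a multi-stage deployment pipeline: a commit is promoted
# through progressively more demanding stages; the first failure halts
# promotion. Stage names and pass/fail checks are invented for illustration.

def compile_stage(commit):        return True
def unit_tests(commit):           return True
def integration_tests(commit):    return "broken" not in commit
def deploy_to_production(commit): return True

STAGES = [compile_stage, unit_tests, integration_tests, deploy_to_production]

def promote(commit):
    """Run the commit through each stage in order; report how far it got."""
    passed = []
    for stage in STAGES:
        if not stage(commit):
            return passed, stage.__name__   # promotion stops at this stage
        passed.append(stage.__name__)
    return passed, None                     # commit reached production

print(promote("feature-123"))   # passes every stage
print(promote("broken-build"))  # halts at integration_tests
```

The fail-fast ordering is the design point: cheap checks (compilation, unit tests) run first so that an obviously bad commit never consumes the expensive environments that the later stages instantiate.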

Newer tools don't lend themselves to older practices. Tightly rolling a modern (down) sleeping bag won't get it into its stuffsack. Managing cloud instances like rack-mounted servers in a physical data center will run up the bills really fast.

This can be a serious point of confusion for middle managers tasked with making their organization "Agile". If we use Jira, Jenkins and JUnit, we must be Agile.

... and People

"I felt quite inadequate, like a peasant in Camelot."

Tools can render entire skill sets irrelevant. The right-brain creativity to fashion a tracker for some specific project was no longer needed when the commercial tracking tools arrived. It became a left-brain activity of making sure all the requirements were entered into the tracker and configuring canned status reports. Suddenly, the thing somebody did that was an act of value has been rendered obsolete by the gear.

The information modeler who was capable of telling the right story based on the nature of the task and team is shoved aside by the efficient administrator whose primary job is to maintain team hygiene. It's entirely possible that the hygienist doesn't really understand why they perform the tasks they perform, but they've been told to hustle people to stand-up and make sure people update status on their (virtual) cards. They're also much cheaper than the craftsman they replaced.

This is pretty destabilizing to people. Where Cliff Jacobson was made to feel inadequate by the gear (and the associated cost), the individual can be stripped of their own sense of self-worth by a change in the method. This can happen when the method changes owing to the values (we need to deploy daily and Waterfall won't let us do that). You might have fancied yourself pretty good at software within your organization, but now the boss is telling you that your worldview is out of touch, that you're not up to scratch, and not only that you're going to do it differently, but how you're going to do it. That's not likely to elicit warm and welcoming feelings. Just the opposite.

But it can also happen when the change in method is a shift from skills to things. Suddenly anybody can appear to be good at project tracking. That can stir resentment that encourages resistance to the tools and pining for the spreadsheets.

The reverse - a sudden shift from tools to skills - has no less an impact. There are development stacks that are entirely tool driven. When the boss comes in and announces that all vendor dependencies in the code and process gotta go, the tool dependency no longer compensates for weak skills. The person accustomed to going glamping may not much care for back country backpacking.

... and Basics

"Chemical fire-starters take the place of correct fire making; indestructible canoes are the solution to hitting rocks; blizzard-proof tents become the answer to one's inability to stormproof conventional designs; GPS positioning has replaced a map and a compass. And the what-if-you-get-your-down-bag-wet attitude attracts new converts every year. In the end, only the manufacturers win."

Cliff Jacobson argues that tools are a poor substitute for skills. Where they support the value system - Leave No Trace camping - they're welcome. But where they are simply gadgets for convenience or separate the individual from the experience, they're not. They're also predatory, exploiting a person's laziness, or fear of being unable to master a skill, or feeling of inadequacy in dealing with challenging situations that might arise.

To some extent, the impulses that spurred the software craftsmanship movement are likely similar to those of Messrs. Rutstrum and Jacobson:

"'I’ve canoed and camped for nigh on seventy years and have never got my down bag wet,' he bellered. 'People who get things wet on trips don’t need new gear. They need to learn how to camp and canoe!'"

Pack correctly and paddle competently and you'll never sleep in a soggy bag. We don't need armies of people mindlessly executing test scripts if we build in quality in the first place.

... and the future of method.

Method is a mirror, not a driver. It reflects values and experience, it doesn't create them. Values shift as our priorities change. Experience changes as we learn what new technologies allow, and sometimes re-learn discipline long lost. Method reflects this; it doesn't inform or define this. The values and experience are there, or they are not.

Method is never a destination. It's an echo.

Sunday, July 31, 2016

Method, Part I

Last month, I went on a 6-day, 55-mile canoe trip with several friends. I last went canoe camping in the 1980s with the Boy Scouts. Being out of practice, I bought books from as far back as the 1950s to as recent as 2010 on canoe camping, studying everything from gear and technique to meal planning and water purification.

Some things haven't changed much over the years: Duluth packs are still in fashion because of their low profile when placed inside a canoe and on your back while portaging one. Some things have changed a lot: plastic barrels with harnesses have replaced the old wooden Wanigan boxes. Some things seem to be over-engineered replacements: you could use a GPS and a map, but a compass works really well, doesn't require batteries, and costs a lot less.

No surprise that the method I learned for canoe camping (and backpacking in general) in the 1980s is different from the method practiced today. The method has changed for a lot of reasons. One is technology: materials science has changed what we pack and how we pack, as gear is lighter and easier to compact. Another is economic affluence: we no longer make things in camp; we buy them in advance and pack them in. Yet another is environmental standards: leave-no-trace has us carry food in the thinnest of packaging since we pack it all out.

The method I learned wasn't exactly state-of-the-art, even for the 1980s. For one thing, the leaders had learned method a decade (or more) before I joined. For another, the Boy Scout troop I was with had acquired most of its gear in the 70s, and some of it dated to the 60s. Gear was expensive, so upgrading it wasn't an option. While our method was effective, it was far from cutting edge.

Clearly, to make a canoe camping trip in 2016, my method needed to change.

* * *

Method is the codification of experience into rules, guidelines, policy, principles, behaviors, norms and so forth. Method is intangible, but it has tangible manifestations: gear and tools are derived from method so that it can be followed, and performed with efficiency.

One reason we develop methods is so that people new to a craft can learn it in a safe and responsible manner: if newbies can build a cooking fire without burning down the forest, they don't go hungry and future campers will have a chance to enjoy the same forest. Sound method spares disaster and frustration. Another reason for developing method is that it allows us to codify knowledge and build on collective craft, pushing the boundaries of technique and gear: the risk of forest fire means we're better off cooking over stoves rather than campfires, which encourages research into energy sources and stoves, which creates safer, lighter and higher density energy sources for cooking, which allows more people to backpack safely for longer periods in remote areas.

There are methods for all kinds of things. NASA's Manned Spacecraft Center defines a method for putting human beings into space and bringing them back alive. FASB defines methods for accounting practices.

Methods are developed by people who have first-hand experience of what works fantastically well, sorta OK, and not at all. This is why people who define methods have to be hands on: method defined by people without experience is just pontification. But those same people need to be abstract thinkers, aware of the unchangeable forces that need to be dealt with. We have to perform a barbecue roll of a spacecraft throughout a journey to the moon, otherwise it'll freeze on one side and burn up on the other.

Methods are a means of practicing a value system. We perform fire safety following this protocol because we're more concerned with the damage we would cause if we lose control of the fire than we are for having a heat source to cook dinner. Values are powerful, because they supersede any rule or practice that is part of the method. If the RCS ring is activated on a Gemini spacecraft, the astronauts have to come home: the procedure is irreversible and we're more concerned with bringing them back alive than pursuing mission objectives that could put their lives in danger. Values are non-negotiable - in theory, anyway. Compromise your values and you cast doubt over the integrity of the method you insist that everybody follow. If you can take fire out of the ring to perform some stunt, it clearly doesn't matter whether fire stays in the ring or not, so "fire safety" must not be that important to you.

Methods define responsibilities and authority. I'm the voyageur, so I'm the navigator of this canoe trip. I'm Capcom, so only I talk to the astronauts. Changes in method are threatening to people when they upset how they understand power dynamics in a group.

The world around changes, so method needs to evolve. Knowing how to make a backpack (Boy Scout Fieldbook, 1967) isn't particularly useful with increased affluence and advances in materials science. People also need to evolve with method to stay on top of practices, as does their gear: practitioner lag sows seeds of confusion, while gear lag can make some activities impractical, if not impossible.

Except when stipulated by law (e.g., accounting standards), there is no shortage of methods for performing the same activity. The National Outdoor Leadership School teaches a method for canoe camping and backpacking that is different from the Boy Scouts', which is different from Outward Bound's, which is different from the method taught by countless other organizations.

When everybody in a group is trained in the same method, we have uniformity of conduct. The group can be expected to socially reinforce the method should one member slack off in a key area (sanitizing cookware) for reasons of convenience (hates doing the washing up).

When people in a group are trained in different methods, or some are trained in no method at all, the differences can range from trivial (you packed in magnesium? That's great mountain man, we have plenty of matches in multiple waterproof containers) to severe (I'm sure you're not planning to burn down the forest, greenhorn, but fire stays in the ring just the same). The differences can be learning experiences, or competitions, or sources of conflict and dysfunction within the party. People can get pretty worked up when every ounce of their being tells them that something is important (secure the canoes overnight because a strong gust of wind can flip - and wreck - a canoe) but other members don't feel as strongly (securing is overkill as gusts that high are highly unusual).

Differences in a small group are mirrored by the differences among method "experts". In the same article in the Summer 2016 edition of Boundary Waters Journal, one expert writes "[getting wet on portages] is completely preventable", while another, 6 pages later in the same article, writes: "Forget about keeping your feet dry. They will be wet." Not only will the disagreements be mirrored, so will the intensity of the arguments about them. This comes as no surprise as method is a proxy for a value system, but the arguments are rarely over the values, and usually over the practices themselves.

Sometimes, we set out to learn a new method by choice. Other times, we're forced to. The means of teaching method range from immersive to suggestive. At one extreme are those who teach method by breaking us down and rebuilding us, to strip away our preconceived notions and ideas, developing new muscle memories for a way of doing things. At the other are those who observe what we do today, suggesting and coaching how we could do something differently, allowing us to decide and trusting that the obviously superior method will prevail.

The student of a new method may embrace it enthusiastically. Or may never be convinced it is better than the one long practiced. And for someone who is not a willing adherent, being told that you have to change your method is threatening. It says that the expertise you developed is irrelevant, unnecessary, inferior, or just plain wrong. It erodes your seniority within a system that is based on that method. It's tough being told "the world has moved on from your understanding of it."

We have many methods for developing software, too. We'll look at how we apply those in Part II.

Monday, February 29, 2016

How an Operational Gap Becomes a Generation Gap Becomes a Valuation Gap

A decade or so ago, when an IT organization (captive or company) hit rock bottom - bloated payroll, lost confidence and ruptured trust resulting from rampant defects, rocky deployments, functional misfits, and long delivery cycles - it would give anything a try, even that crazy Agile stuff. It didn't matter if it was incumbent management with their backs against the wall, or new management brought in to clean house: desperate times called for desperate measures. To people looking to shake things up, Agile made intuitive sense and a lot of it was appealing, even if its proponents had a bit too much evangelical zeal and some of it sounded a little strange. So they'd bite the bullet and do it, selectively anyway: a build server was easy to provision; releasing software for testing every couple of weeks was done easily enough, and getting everybody in the same physical workspace (or at least in close proximity to one another) could be done with a little arm-twisting; of course, developers were instructed to only pair program "opportunistically", and automating QA was a fight nobody wanted to fight, so they'd sit with the team and test incrementally but go on testing the way they always had. Still, even with compromises, there was a good story to tell, usually to do with much higher quality and eliminating rework, and that made Agile A Good Thing for all parties concerned.

Fast forward to today, and we see that Agile is much more ambitious. A few short years ago we were content to have an automated build execute every few minutes; today we want every check-in to trigger a build that is promoted through progressively more complex tests in virtual environments instantiated, managed and torn down by scripts. We used to be content releasing for user testing every other week, and to production every couple of months; we now aspire to release to production many times a day. We used to want Master Story Lists to guide incremental development; today we want to iteratively experiment through code and have the feedback inform requirements definition and prioritization. We used to be satisfied with better delivery of internally-facing software; today we want to evolve software products that are used by people across our ecosystem, from interested parties to customers to investors to employees. Today, Agile wants to push against everything that creates an artificial or temporal constraint, be it organization, management, accounting policy, or even capital structure.
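The kind of pipeline described above, in which every check-in is promoted through progressively more complex test stages run in scripted, throwaway environments, can be sketched in a few lines of Python. This is an illustrative sketch only; the stage names and helpers (run_stage, promote) are hypothetical, not the API of any particular CI tool.

```python
# Sketch of a promotion pipeline: a commit advances through progressively
# more complex test stages; each stage's environment is created by script
# and torn down afterward, whether the stage passes or fails.

def run_stage(name, tests, provision, teardown):
    """Provision an environment, run the stage's tests, always tear down."""
    env = provision(name)
    try:
        return all(test(env) for test in tests)
    finally:
        teardown(env)

def promote(commit, stages):
    """Promote a commit through stages in order; stop at the first failure."""
    for stage in stages:
        ok = run_stage(stage["name"], stage["tests"],
                       stage["provision"], stage["teardown"])
        print(f"{commit}: {stage['name']} {'passed' if ok else 'failed'}")
        if not ok:
            return False
    return True  # every stage passed: eligible for production release

# Hypothetical example: three progressively more complex stages, with
# trivial stand-ins for real provisioning and test logic.
noop_env = lambda name: {"name": name}
noop_teardown = lambda env: None
stages = [
    {"name": "unit", "tests": [lambda env: True],
     "provision": noop_env, "teardown": noop_teardown},
    {"name": "integration", "tests": [lambda env: True],
     "provision": noop_env, "teardown": noop_teardown},
    {"name": "performance", "tests": [lambda env: True],
     "provision": noop_env, "teardown": noop_teardown},
]
promote("abc123", stages)
```

The design point is the `finally` clause: environments are torn down even when a stage fails, which is what makes the environments disposable rather than pets to be nursed between runs.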

Although Agile has evolved, the entire tech world hasn't moved with it. In fact, some of it hasn't moved at all: it's still common to see non-Agile organizations that do big up-front design; work in functional and skill silos; have manual builds, deployments and testing; and make "big bang" releases. And, it's still common for them to face a "rock bottom" moment where they conclude maybe it's finally time to look into this Agile stuff.

As hard as it was a decade ago to inject Agile into a non-Agile organization, it's much harder today for a non-Agile organization to complete a transformation. This seems counterintuitive: since the non-Agile to Agile path is so well trodden, it should be much easier than it was in those pioneering days of yore. But although there have never been more tools, frameworks, languages, books, blogs, and countless other resources available to the individual practitioner aspiring to work differently, organizational change tends not to be self-directed. The challenge isn't taking an organization through the same well-established game plan, it's finding the people - the transformational leaders - willing to shepherd it through its journey.

Unfortunately, re-living the same internal negotiations to reach the same compromises, solving technical and organizational problems long ago solved, only to end up with an operating model that still falls far short of today's state of practice, is not a destination opportunity for an experienced change leader. Even assuming, as my colleague Alan Fiddes pointed out, that the change agents brought in still have the vocabulary to carry on arguments last fought so long ago, any change agent worth their salt isn't going to reset their career clock back a decade, no matter the financial inducement.

This might simply mean that the approach to change itself has to change: require far less shepherding from without and expect more self-directed change from within, brought about by setting the right goals, creating the right incentives (motivate people) and measuring the right things (what gets measured is what gets managed). Why shouldn't it be self-directed? It isn't unreasonable to expect people in a line of work as dynamic as software development to keep their skills sharp and practices current. For people leading an organization that's a little dated in how it develops software, then, the question to put to people isn't "why aren't you doing Agile" but "we're going to deploy any time and all the time effective fiscal Q3, so how are you going to operate to be able to support that?" It's amazing what people will do when sufficiently motivated, change agents be damned.

Whether there's a more effective means of change or not, being far adrift of the state of practice points to a more severe threat to the business as a whole: a generation gap.

* * *

Three decades ago, the state of practice didn't vary that much across companies. Yes, there were people coding C over Rdb deployed in VMS on minicomputers and people coding COBOL over IMS deployed in OS/390 on mainframes, but the practice landscape wasn't especially diverse: waterfall prevailed and a lot of code was still data-crunching logic run in batch. At the time, captive IT, consulting firms, governments, new tech firms (think Microsoft in the mid-80s), and established tech stalwarts (H-P, IBM) could reasonably expect to compete for the same labor. College grads in Computer Science or Management Information Systems learned practices that reinforced the modus operandi common to all consumers of business computing.

The landscape has changed. Practices are far less homogeneous, as they've had to evolve to accommodate a diverse community of interactive users pushing for features and functionality with little tolerance for failure. The familiar combatants still slug it out for labor, but must now also compete against tech product firms untethered to any legacy practices, norms, policies or technologies. Today's grads are trained in new practices and expect their employer to practice them, too.

Companies lagging behind in their state of practice will struggle to compete for newly minted labor: why would somebody with highly marketable tech skills go to work at a place stuck in the past, when they can work in a more current - even cutting edge - environment?

This isn't just a hiring problem. A practice gap is fuel for a generation gap if it deflects young, skilled people from becoming employees. By failing to hire the next-generation employee, a company develops an intrinsic inability to understand its next-generation customer.

A company isn't going to reach a new generation of buyer - consumer or business - if it is tone deaf to them. A company ignorant of the next generation's motivations, desires, values and expectations has little chance of recognizing what it isn't doing to win their attention, let alone their business. Since social systems are self-reinforcing, a company is unlikely to break the deadlock of ignorance and inaction.

Failing to bridge a generation gap not only cuts a business off from growth opportunities, it sets the stage for long-term irrelevance. Investors recognize this, even when management does not. Growth changes from being a "risk" to being an "uncertainty", and when that happens a company's future1 is no longer priced at a premium, but at a discount. In this way, an operational gap becomes a generation gap becomes a valuation gap.

Outdated practices are an indicator that management has its head buried in the sand: it has a problem it can't see, that it doesn't know how to solve, and that is starved for information it can't get, because management has elected to disassociate itself from the source. The motivation to change how you practice shouldn't be to become more competitive today, but to still be relevant tomorrow.

1 By way of example, Yahoo net of its Alibaba holding and cash has frequently been valued at or near $0 by investors in recent years.