I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Wednesday, March 31, 2021

The Ever Given is Not a Black Swan Event

By transporting a container more cheaply than any other vessel afloat, [Emma Maersk] and her six sister ships were expected to stimulate even faster growth in international trade, lowering the cost of moving goods through the supply chains that had reshaped the global economy…

An extremely large container ship - the Ever Given - has been in the news this past week for running aground and blocking traffic through the Suez Canal. This has caused considerable disruption to global supply chains. Some have incorrectly referred to the incident and the aftermath as a “black swan” event.

The disruption is not simply the result of a ship running aground and blocking traffic in a busy, narrow passageway. The disruption is a function of the scale of the container ships themselves. In the mid-2000s, with global trade growing by leaps and bounds every year, shipping companies opted for larger and larger vessels. The largest of these ships each carry the equivalent of over 10,000 truckloads of goods. With trade growing, the value proposition of ever-larger ships was self-evident in 2007: “On a per-container basis, a larger vessel cost less to build and operate than a smaller one, allowing the owner to undercut competitors’ cargo rates and still earn a healthy profit.”

Such scale led to optimization among tightly-coupled participants in the global supply chain. “Operating on regular schedules - such that an identical vessel departs Shanghai every Wednesday, stops in Singapore nine days later and arrives in Antwerp five weeks hence, with tight connections to barges and freight trains - intermodal container transport gave manufacturers and retailers the confidence to plan tightly organized long-distance supply chains.”

But as Nassim Taleb pointed out well over a decade ago in his book The Black Swan, “Globalization [of financial markets] creates interlocking fragility, while reducing volatility and giving the appearance of stability. In other words it creates devastating Black Swans.”

And so it has happened with container shipping.

Though supremely efficient at sea, Emma and the even larger ships that followed in her wake became a nightmare. By making freight transportation slower and less reliable than it had been decades earlier, they helped to stifle the globalization of manufacturing…

Local optimization - in this case, minimizing the cost of transporting a container of goods over the sea - distorted the economies of scale of global shipping. Shippers, facing excess capacity after the financial crisis of 2008, either went out of business, merged, or did deals to fill their ships to capacity. Optimization of the oceangoing containership fleet came at the cost of the efficiency of the rest of the supply chain into which it was integrated.

Discharging and reloading the vessel took longer as well, and not only because there were more boxes to put off and on. The new ships were much wider than their predecessors, so each of the giant shoreside cranes needed to reach a greater distance before picking up an inbound container and bringing it to the wharf, adding seconds to the average time required to move each box. Thousands more boxes multiplied by more handling time per box could add hours, or even days, to the average port call. Delays were legion.
Freight railroads staggered under the heavy flow of boxes into and out of the ports. Where once an entire shipload of imports might be on its way to inland destination within a day, now it could take two or three. Queues of diesel-belching trucks lined up at terminal gates, drivers unable to collect their loads because the ship lines had too few chassis on which to haul the arriving containers. And often enough, the partners in one of the four alliances that came to dominate ocean shipping didn’t use the same terminal in a particular port, requiring expensive truck trips just to transfer boxes from an inbound ship at one terminal to an outbound ship at another.

But the disruption to supply chains created by the beaching of the Ever Given is not a black swan event. First, the Ever Given was involved in an incident in 2019 in which it was caught in high winds while operating at slow speed and collided with a pleasure ferry in the Elbe River in Hamburg, Germany. The same conditions have been cited as contributing factors to the vessel’s recent beaching in the Suez, and it is fair to say those conditions are not exactly the stuff of “one hundred year events”. Second, the supply chain bedlam triggered by the Ever Given’s beaching exposed the asymmetric downside risk endemic to the systemic fragility - that is, the woefully inadequate robustness - in tightly-coupled supply chains. However, this asymmetric risk - as evidenced by the previous two paragraphs - was hidden in plain sight.

An incident is not a “black swan event” simply because some people could not fathom the possibility of it occurring. The triggering event had happened before. The systemic fragility was as plain as day. In fact, the article I’ve quoted throughout this post describing that fragility - titled, appropriately enough, The Megaships That Broke Global Trade - was not published in the wake of the Ever Given’s beaching: it appeared last October in the Wall Street Journal. This was not a black swan event. This was a fragile system operating on borrowed time.

I kept a copy of the October WSJ article intending to use it as a metaphor for large software development processes designed to optimize labor unit costs, specifically developer labor unit costs. The pattern is the same, the risks are the same, the consequences are the same. But with hindsight the article reads much better as a chronicle of one-way risk: a tightly-coupled system with highly concentrated activity in the value stream that, in turn, is locally optimized to a single variable is a powder keg that can be detonated by the most mundane of sparks.

As written, the October WSJ article about the Emma Maersk is a story of misplaced local optimization creating systemic inefficiency and undesirable outcomes. Ironically, the author laments the factors that do not “flatter the bottom line” resulting from local optimization of the oceangoing container fleet: the need for “keeping more inventory, shipping via multiple routings and producing in multiple factories rather than in giant sole-source plants”. While these measures certainly increase costs, they also make manufacturing and distribution far more robust (that is, less fragile). The Ever Given incident points out the benefits of doing so. Cheap insurance policies against one-way downside risk - multiple facilities, alternative shipping routes - are a far smaller price to pay than having factors outside of your control - a beached container vessel in the Suez - make a mockery of fragile optimization.

While the Emma Maersk article is a compelling story of systemic distortions resulting from local optimization, it is really the story of a critical yet fragile system waiting for a simple, pedestrian event to realize asymmetric downside risk.

Thursday, March 11, 2021

Canoecopia 2021 Is This Weekend

Cheryl and I are presenting a session we titled People, Paddling and Food at Canoecopia this year. Our presentation is about how food affects behaviors in the backcountry. Well, more accurately, it is about how you can get in front of those behaviors to prevent food from having a negative impact on how a group functions when paddling in the backcountry, as well as how food can have a positive effect on group dynamics. We hope you find the subject interesting enough that you will attend.

Even if you do not find this subject compelling, please give Canoecopia a look. It is the largest paddling conference in North America. It was canceled last year as pandemic lockdown policies took effect hours before the conference was to begin. It is a virtual conference this year.

A lot of people have worked really hard to create a virtual outdoors conference, from creating the conference infrastructure, to making compelling vendor interactions, to recording and staging pre-recorded presentations, to coordinating all of this, to the presenters making work for the coordinators (we mostly got it right on our third submission...), as well as many others doing hundreds of other behind-the-scenes tasks.

Registration is $15. For $15 you get access to amazing, thoughtful people in the paddling community spanning a complete spectrum of topics.

2020 was a difficult year for backcountry trips - overcrowding, closed borders, etc. 2021 will probably not be much different. Regardless of the circumstances, we all have the opportunity to improve our thoughtfulness about, and tradecraft in, the backcountry. Canoecopia is the premier forum in which we can learn how to do that.

I do hope you will attend.

Sunday, February 28, 2021

Echoes

In recent days, I’ve been reminded of some core company values, once fresh and different, later taken for granted. It was good to be reminded of them.

* * *

I was on a call the other day when a member of the client’s team said, “we don’t prioritize because something is easy, we prioritize something because it is hard.” They went on to acknowledge that some of my colleagues had counseled them to do this some months before I started working with them. I was overjoyed to hear the client say this.

When I first joined ThoughtWorks in 2005, one of our core messages was to solve an “outer quadrant” problem first - something in the problem space that is difficult and complex. Something small and discrete of course, as opposed to trying to boil the ocean, but very much more in the category of “hard” than “easy”. This was contrary to the prevailing wisdom at the time (wisdom that persists to this day), which was to lead with quick wins, pilots or proofs of concept.

By and large, none of these latter approaches addresses the core of the problem that needs to be solved. “Quick wins” attack the margins. They might very well be useful, but quick wins are just as likely to contribute to systemic complexity by layering on new points of integration, adding logic and system redundancies, and creating confusion over the veracity of data or transaction success. I once came into contact with a company whose primary engagement pattern was to create duplicative data warehouses by copying data from existing data sources and writing reports off of them. They’d done this 6 or 7 times at one company alone. It isn’t difficult to imagine the “single source of truth” problem that arose from timing discrepancies and reliability problems among all of those data warehouses. All those quick wins did was make them more sclerotic.

Pilots and proofs of concept tend to be evaluated in environments heavily skewed toward the success of the pilot, not in environments representative of the actual problem space that needs to be solved. The result of quick wins and pilots and proofs of concept is a false positive with regard to the affordability and even the solvability of a particular problem space. You might feel good for having done something, but that something doesn’t make material progress toward addressing the underlying problem.

It seems to me that the “quick win” and “pilot” philosophy became increasingly prominent in American business in the 1980s. In the 1970s, American corporations had earned a reputation for poor quality. They had also become gripped by analysis paralysis fueled by, among other things, a fear of making a mistake. Management consultants like Tom Peters struck a nerve on this combination of “poor quality” and “doing nothing about it”. In the book “In Search of Excellence”, published in 1982, Peters and Robert Waterman listed what they called the “8 characteristics of excellent companies”. Top of the list was “a bias for action”: “a preference for doing something - anything - rather than sending a question through cycles and cycles of analyses and committee reports.” Peters, Waterman and others advocated “a bias for action” to cut the Gordian knot of inaction.

At the same time, Japanese management techniques were in the ascendancy. Japan’s economy was booming, in no small part because products made in Japan had earned a reputation for both quality and affordability. American managers wanted to know what their Japanese counterparts did differently. Among the things that surfaced about what made Japanese management different was the practice of solving small problems, and learning from them, before solving a big problem. Whereas an American company would attempt to solve Big Problems with Big Solutions, the Japanese management philosophy attacked Big Problems by solving Small Problems with Small Solutions. In the American company, more often than not the Big Solution yielded a Big Mistake: massive cost overruns, excessive time delays, and all-too-frequently outright failure. In the Japanese company, the Big Problem was resolved over time as the sum of the evolution of Small Solutions.

Iterative solution development resonated with the “bias for action” theme because it provided a means of overcoming the collective learned helplessness among employees of American companies. A systemic problem in an American corporation was larger than any one person or small group of people (or their budgets) within an enterprise could solve, but small problems could be solved through simple, affordable solutions. The emerging personal computer technology of the time enabled this tremendously. The result was a brief management revolution of the 1980s, where pent up frustration among employees of being trapped in oppressive systems that yielded poor quality met up with the evangelical management themes of excellence that stressed “action” and “devolved decision making”, which in turn were operationalized by the liberating personal technologies of the day.

The themes of a “bias for action” and “iterative solutions” and many other things stressed by Peters and others under the aegis of “excellence” live on in many forms today, not least of which are Lean and Agile software development as well as product management practices. All well and good, but somewhere along the way the messages get corrupted. All too often, a “bias for action” prioritizes “quick wins”, which are interpreted as “go after the low-hanging fruit” and applied as “do the easy stuff first.” The easy stuff is almost always at the margins and not the core, and is not representative of the need. Similarly, even the most well-intentioned proof of concept misses the mark if it intentionally excludes the challenges central to the problem space.

To hear somebody say “we’re prioritizing the hard stuff first” is rewarding indeed. It’s an indicator that they’re serious about solving the problem at hand.

* * *

When I joined ThoughtWorks, one of the things that many of the experienced software developers stressed was mentorship. They would seek out less experienced colleagues who were difficult to staff on a client project and mentor them in everything ranging from technical skills, to the value system for how we worked, to ways in which they could be effective consultants.

Mentoring can be a slow process: the results are difficult to measure, it doesn’t scale, and it doesn’t garner a whole lot of attention. Mentoring is not something that comes naturally to a lot of people, and by and large there are more people interested in doing things that are noisy and that scale - going on sales calls, speaking at conferences - than in doing the humdrum work of mentoring a person.

Mentoring does more than just help people acquire skills and consulting acumen. It gives new people first-hand experience in practicing the values that a company of people holds. This builds muscle memory that becomes their default way of working. That, in turn, builds the culture of a company better than any email or podcast or missive ever can.

We had virtual leaving drinks for a long-time colleague particularly adept at doing this kind of mentoring, as he is joining another firm. I will miss him.

Sunday, January 31, 2021

Distribution

In the past year, I’ve written several times about changes taking place and likely to be long lasting in the wake of the COVID-19 pandemic. One area I’ve touched on, but haven’t delved into much, is distribution.

Producers of all kinds have had to create new ways of connecting with their customers. Restaurants had to contend with severely curtailed access to their primary distribution channel, the dining room. Countless manufacturers had to contend with the complete loss of their primary distribution channel, small retail and department stores. Digital products like movies had to contend with severely curtailed access to their primary distribution channel, theatres.

Producers have responded by finding, creating and reprioritizing alternative means of distribution. All restaurants (well, those still in business) increased their distribution through take-away meals. eCommerce retail sales from groceries to gadgets skyrocketed last year. In December, WarnerMedia announced it would distribute its 2021 movies simultaneously to theaters and on its HBO Max streaming service.

And the opportunities to innovate in distribution are far from over. To wit: in December, the FAA relaxed guidelines for drone delivery, expanding the potential for commercial drone use in the US (and of course, drone delivery has expanded in the rest of the world).

Distribution will be among the primary ways the chattering classes characterize the post-pandemic world. To what extent do people return to old - largely physical - forms of distribution? Are the new forms of distribution long-lasting, or are they just phenomena of their times, like Sherry served at Christmastime in Britain?

Change is like a regenerative fire. Fire needs three things: fuel, spark, and accelerant.

The fuel is policies in response to COVID-19 and, in turn, the individual and business reaction to them. Companies need to sell and people want to buy. The longer those policies remain in place, the more significant these new forms of distribution become to producers. Those policies also depress the prices for assets tightly coupled to past forms of distribution, things like commercial airplanes and shopping malls.

The spark is the realization that businesses don’t need to operate in the ways that they have for decades - e.g., expenses don’t need to be so high, a company doesn’t need as much square footage, or it can use its physical space differently. The evidence exiting 2020 supports this: even though revenues declined in 2020, earnings per share for the S&P 500 for the year look great.

The accelerant is cheap capital: Fed policy will maintain cheap capital for the foreseeable future. This creates liquidity that has to go somewhere, and will find every nook and cranny.

Distribution has changed. Policy is entrenching those changes. The businesses that didn’t collapse survived in large part because they found new distribution channels. Innovation in distribution is accelerating. Cheap capital will finance more innovation in distribution. It’s a reinforcing cycle. And it’s just beginning.

Thursday, December 31, 2020

There Might Be No Grand Lessons, But There Are Plenty of Darn Good Ones

Janan Ganesh wrote in the FT this past week that “COVID offers no grand lesson”. His point is that no political and economic system in any nation has consistently outperformed all others as far as COVID policy is concerned. While his inconclusiveness appears to be just a few column inches filled to meet a deadline at a time of the year when most aren’t reading their news sources too closely, it reinforces something that I wrote last month: that the “grand question” analyses show the bankruptcy of the question itself.

Just the same, there are some pretty good lessons from COVID-19.

Corporate income statements are bloated with expenses, and the beneficiaries of that bloat are in for a reckoning. Bloomberg reported this week: “Scars inflicted on travel are looking permanent. Companies are shifting away from massive expense accounts and the experiential lifestyle has become a memory.”

Corporate balance sheets are bloated and the writedowns are going to be painful. The WSJ reported this week: “Oil industry has written down about $145 billion in assets this year amid an unprecedented downturn and long-term questions about oil prices.”

Devolved decision making is superior to command and control. In the FT this week: “Devolution rules.” “[S]enior executives had to accept that management decisions were best taken on the front line and not in head office - often reversing a more traditional top-down line of command.”

Municipalities are changing in extraordinary ways as companies and their employees leave high cost of living (and high-tax) cities and states for less expensive ones. In the WSJ this week: “Accelerated Growth Strains Austin.” Rents are up, commercial areas are being significantly redeveloped, and residential prices are skyrocketing as the population surges. And with it, the culture and character of the town will change, too. “What happened in San Francisco with the tech boom was something nobody saw coming until it was too late.”

COVID-19’s lessons are that corporate spending was too high, a lot of valuations were too great, decision rights were too concentrated, and many companies don’t need to be based in high-cost geographies. Activist investors have already been pushing companies on the second point, challenging oil majors for overstating their reserves. Activists will scrutinize P&Ls for expense bloat and taxes that can be cut to free cash to return to investors, and for command-and-control management practices that stifle local efficiencies and innovation.

Societies, economies and businesses don’t evolve through grand design in response to big questions; they evolve through the crowdsourced effort of their members’ ability to muddle through. In benign times - stable monetary policy, low inflation and slow growth - the muddling through isn’t an obvious phenomenon. In extraordinary times, it is. So it turns out that corporate spending was too high, valuations were too great, control was too concentrated, and companies chose to locate in labor bubbles. The shock of this realization raises big questions. Those questions will be resolved through millions of small answers, not through the narrative fallacy of one big narrative.

Monday, November 30, 2020

Listen

For two decades now, we’ve heard about the threat of tech disruption to established industries and incumbent firms. Yet it isn’t the tech that disrupts, it’s socio-economic change that creates conditions that a technology can exploit. Tech isn’t the catalyst, but it can be the beneficiary.

COVID may turn out to be the greatest global catalyst of socio-economic change since the middle of the 20th century. As the pandemic has continued and the numbers have risen, the chattering classes are now asking what the lasting changes will be. These can be useful exercises, certainly to the business leaders who’ve got to find their customers or compete against rivals with slimmed down cost structures. Not to mention, the acceleration of innovation - a WSJ article recently cited a McKinsey study that had suggested 10 years of innovation was compressed into a 3 month window - has created opportunities that were not practical just a year ago.

No surprise that the analyses range from the very narrow to the very broad. The narrow ones are easy to comprehend and useful for specific industries. For example, I’ve read projections that anywhere from 15% to 50% of all business travel isn’t going to return. Although a wide range, it suggests that airlines and hotels will have to appeal to leisure travelers to fill seats and beds. Leisure travelers are more price sensitive and less brand loyal than business travelers, so even if volume recovers, revenue will lag, which portends more cost cutting or in-travel sales or on-demand activations (you have to swipe a credit card to get the TV to work in coach on the airline, why not require a customer to swipe a credit card to get the TV to work in the discounted-rate hotel room?) It also suggests that a startup airline with a clean balance sheet, a fleet of fresh planes requiring little maintenance (there’s a desert parking lot loaded with low mileage 737MAX jets), able to draw on a large experienced labor force of laid off travel workers could create significant heartache for incumbents.

At the other end of the extreme are the macro analyses asking The Big Questions. Are cities dead? Is cash dead? Is brick-and-mortar retail dead? These are less useful. The Big Questions are too big. They require far more variables and data than can be acquired, let alone thoughtfully considered, in a coherent analysis. The authors traffic in interesting data, but either lack the courage to draw any conclusion beyond Things Might Change But Nobody Knows (thanks for that, so helpful), or use the data selectively to defend their preferred version of the future.

In the middle are Big Question headlines with narrow questions posed, even if not answered. Analyses on “the future of work” cite specific employer examples to posit what is now possible (e.g., specific roles that gain nothing from being in an office and lose nothing by being distributed) and broad employee survey data to suggest their potential scale (e.g., 25% of employees in such-and-such industry want working from home to be a permanent option on a part- or full-time basis). These are useful analyses when they highlight future challenges in management and supervision, collaboration and communication. Economically, employer and employee alike win when a person chooses to relocate to a lower cost-of-living area for quality of life purposes. But that only works if the physical separation causes minimal, if any, impact to career growth, skill acquisition, productivity and participation, and corporate culture. A company believing it can espouse even moderately aggressive distributed workforce policies must be aware that these are specific problems to be solved.

What I’ve yet to see is an analysis of how the institutions that are benefitting and the institutions that are suffering will influence the micro-level trends and, by extension, influence the answers to The Big Questions.

Consider a large universal bank that employs hundreds of thousands of people in cities round the world. One way its retail bank makes money is by converting deposits into loans. One way its commercial bank makes money is by making mortgage loans to businesses. One way its investment bank makes money is by underwriting debt issued by municipalities. It may look as if the bank can reduce its operating costs by institutionalizing a work-from-home policy for a large portion of its workforce. But doing so is self-destructive to its business model. Fewer employees in office towers means fewer people to patronize the businesses to which the bank lends, fewer public transport and sales tax receipts to the municipalities whose debt they underwrite, and less demand for construction and renovation of mixed use commercial properties. The bank stands to lose a lot more in revenue than it would gain in reduced costs, so as a matter of policy, a universal bank will want its employees back in their offices in full numbers. The bank will set the same expectation to vendors, particularly those supplying labor.

But other companies are benefiting from this change and will want permanency of these new patterns. Oracle provides the cloud infrastructure that Zoom operates on. More Zoom meetings not only means more revenue for Oracle’s cloud business (investors will pay a premium for a growth story in cloud services), it gives their cloud infrastructure business a powerful reference case as they pursue new clients. It comes as no surprise that Oracle’s executive chairman Larry Ellison is a vocal proponent of lasting change.

And, of course, nobody knows what public policy will look like, which will play a huge role in what changes are permanent and what reverts to the previous definition of normal. State and municipal governments are facing significant tax receipt shortfalls as a result of COVID policies. Many have also suffered a depletion of residents and small businesses. They may offer aggressive tax incentives to encourage new business formation or expansion as well as commercial property development. At the same time, there are states that have received an influx of population and cities that have seen residential property price increases. They will be reluctant to see their newly arrived neighbors leave, so they, too, will offer incentives for them to stay.

It isn’t difficult to imagine there will be aggressive new forms of competition. Suppose firm A is adamant about employees returning to the office. If the employee survey data is to be believed, it’s possible that as much as 25% of firm A’s labor force prefers to work from home a majority of the time. Firm B can aggressively use that as a recruiting wedge to not only lure away firm A’s talent, but offer them relocation packages to lower cost-of-living areas, expanding and potentially upgrading their talent pool at a lower price.

Or, suppose that city C imposes punitive taxes on companies employing a distributed workforce. It’s not unprecedented. Several cities already charge a “commuter tax” (also known as a “head tax”) on employers with workers who travel into the city. This would instead be a “can’t-be-bothered-to-commute tax” levied on employers in a city whose workers do not travel into the city. Meanwhile, near-west suburb D of city C entices a WeWork-like firm to develop a property that can house several businesses with partially distributed workforces, offering a smaller physical office space with fully secure physical and digital premises. This would lure midsized employers whose labor force lives largely in the western suburbs, not only reducing their rents but also avoiding the “headless tax” imposed by city C.

The analyses of what will or will not change and why it will or will not change are only going to increase in the coming months. And, because some stand to lose significantly from change while others stand to benefit handsomely, the debate will only intensify. For those without the balance sheet and political clout to write the future, a firmly held opinion about the future isn’t worth very much. But the ability to study, process, absorb, investigate and prove ways of exploiting heretofore unrealizable opportunities is priceless.

Saturday, October 31, 2020

Playing the Cards You're Dealt

Some years ago, I was working with a company automating its customer contract renewal process. It had licensed a workflow technology and contracted a large number of people to code and configure a custom solution around it. This was no small task given the mismatch between a fine granularity of rules on the one hand and a coarse granularity of test cases on the other. The rules were implemented as IFTTT statements in a low-code language that did not allow them to be tested in isolation. The test cases consisted of clients renewing anywhere from one to four different types of contracts, each of which had highly variable terms and interdependencies on one another.
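To make that mismatch concrete, here is a minimal sketch in Python. It is purely illustrative - the rule names, contract fields, and renewal flow are invented for this post, not taken from the client's actual low-code platform - but it shows the shape of the problem: each rule is a trivial if-this-then-that statement, yet the only thing that can be executed and observed is the combined outcome for a client holding several interdependent contracts.

```python
# Purely illustrative sketch of the granularity mismatch: fine-grained
# if-this-then-that rules on one side, coarse end-to-end test cases on
# the other. All names, fields, and rules are invented for the example.

RULES = [
    # (condition, action) pairs - each one is trivial in isolation
    (lambda c: c["type"] == "support" and c["term_months"] >= 24,
     lambda c: c.update(discount=0.10)),
    (lambda c: c["type"] == "license" and c.get("bundled_with") == "support",
     lambda c: c.update(auto_renew=True)),
    (lambda c: c.get("discount", 0) > 0 and c["region"] == "EMEA",
     lambda c: c.update(requires_legal_review=True)),
]

def renew_client(contracts):
    """Apply every rule to every contract. Only the combined outcome is
    observable; the platform offered no way to execute a single rule
    against a single contract in isolation."""
    for contract in contracts:
        for condition, action in RULES:
            if condition(contract):
                action(contract)
    return contracts

# A "test case" at the only granularity available: one client, several
# interdependent contracts, one pass/fail judgment at the very end.
client = [
    {"type": "support", "term_months": 36, "region": "EMEA"},
    {"type": "license", "bundled_with": "support", "region": "EMEA"},
]
result = renew_client(client)
assert result[0]["discount"] == 0.10       # if this fails, which rule broke?
assert result[0]["requires_legal_review"]  # depends on two rules interacting
assert result[1]["auto_renew"]
```

When an assertion at the end of a test like that fails, there is nothing to say which fine-grained rule - or which interaction between rules - is at fault, which is exactly the position the QA team was in.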

At the nexus of this mismatch was the QA team, which consisted almost entirely of staff from an outsourcing firm. The vendor had sold the company on QA capacity at a volume of 7 test scripts executed per person per day. It had staffed 50 people in total on the program team, while the company had staffed four QA leads (one for each contract team). The outsourcing vendor was reporting no fewer than 350 test scripts executed by its staff every day, yet the QA leads were reporting very low test case acceptance and the development team was reporting that the test case failures could not be replicated.

A little bit of investigation into one of the four teams exposed the mismatch. The outsourcing staff of this one team consisted of 10 people, contractually obligated to execute 70 test scripts. The day I joined, the team reported 70 test scripts executed, of which 5 passed and 6 failed.

Eleven being a little short of seventy, I wanted to understand the discrepancy. The answer from the contracted testers was, "we have questions about the remaining 59." The lead QA analyst - an employee, not a contractor - spent the entire day plus overtime investigating and responding to the questions pertaining to the 59. And then the cycle would start all over again. The next day it was 70 executed with 3 passed and 4 failed. The day after it was 70 executed with 1 passed and 9 failed. And the lead QA would spend the day - always an overtime day - responding to the questions from the outsourced team.

Evidently, this cycle had been going on for some time before I arrived.

We investigated the test cases that had been declared passed and failed. Turns out, those tests that were reported as having passed hadn't really passed: the tester had misinterpreted the results and reported a false positive. And those reported as failed hadn't actually failed for the reason stated: the tester had misinterpreted those results as well. On some occasions, the wrong data had been used to test the scenario; in others, the test had failed because a different rule should have executed. In just about every circumstance, the results were false. The outsourced testers were expending effort but yielding no results whatsoever. A brief discussion with the QA lead in each of the other three teams confirmed that they were experiencing exactly the same phenomenon.

After observing this for a week and concluding that no amount of interaction between the QA lead and the outsourced staff was going to improve either the volume of completions or the fidelity of the results, I asked that one team's QA lead to stop working with the outsourced team, and instead to see how many test cases she could disposition herself. The first day, she conclusively dispositioned 40 test scripts (that is, they had a conclusive pass or fail, and if they failed it was for reasons of code and not of data or environment). The second day, she was up to 50. The third, she was just over 50. She was able to produce higher fidelity and higher throughput at lower labor intensity and for lower cost. And she wasn't working overtime to do so.

The outsourced testing capacity was net negative to team productivity. That model employed eleven people to do less than the work of one person.
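Back-of-the-envelope, the comparison looks something like this (a rough sketch using only the daily figures reported above; day three of the solo run is approximated as 50, since it was "just over 50"):

```python
# Rough arithmetic from the account above: the outsourced testing model
# versus the lead QA dispositioning test cases on her own.

outsourced_testers = 10   # contracted staff on the one team
qa_lead = 1               # her day was consumed answering their questions

# Conclusive results (passed + failed) reported each day under the old model:
# 5+6, 3+4, 1+9 - and even those later proved to be false results.
old_model_per_day = [11, 7, 10]

# Conclusive dispositions once the lead worked alone ("just over 50" on day three).
solo_lead_per_day = [40, 50, 50]

old_rate = sum(old_model_per_day) / len(old_model_per_day) / (outsourced_testers + qa_lead)
new_rate = sum(solo_lead_per_day) / len(solo_lead_per_day) / qa_lead

print(f"old model:  ~{old_rate:.1f} conclusive scripts per person per day")
print(f"lead alone: ~{new_rate:.1f} conclusive scripts per person per day")
# Eleven people doing less than the work of one - and that is before
# accounting for the fact that the old model's "conclusive" results
# turned out to be false, which puts its real throughput at roughly zero.
```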

This wasn't the answer that either the outsourcing vendor or the program office wanted. The vendor was selling snake oil - the appearance of testing capacity that simply did not exist in practice - and was about to lose a revenue stream. The program office was embarrassed for having managed to maximize staff utilization rather than outcomes (that is, for relying on effort as a proxy for results).

The reactions of both the vendor and the program office weren't much of a surprise. What was a surprise was that nobody had called bullshit up to that point. Experimenting with change wasn't a big gamble. The program had nothing to lose except another day of frustration rewarded by completely useless outputs from the testing team. So why hadn't anybody audited the verifiable results? Or established a baseline of testing labor productivity without the participation of the outsourcing team?

This wasn't a case of learned helplessness. The QA leads knew they were on the hook for meaningful testing throughput. The program office believed it had a lot of testing capacity that was executing. The vendor believed the capacity it had sold was not being properly engaged. Nobody was just going through the motions, and everybody believed it would work. The trouble was, they were playing the cards they'd been dealt.

Some years later, I was working with a corporate IT department trying to contain increasing annual spend on ERP support. Although the company had implemented SAP at the corporate level and within a number of its subordinate operating companies, some operating companies still used a legacy homespun ERP, and all business units still relied on decades of downstream data warehouses and reporting systems. Needless to say, there were transaction reconciliation and data synchronization problems. The corporate IT function had entered into a contract with a vendor to resolve these problems. In the years following the SAP implementation, vendor support costs had not gone down but had gone up, proportional to the increase in transaction volume. The question the company was asking was why the support labor couldn't respond to more discrepancies, given so many years of experience resolving them.

It didn't take a stroke of genius to realize that the vendor stood to gain from its customer's pain: the greater the volume of discrepancies, the more billing opportunities there were for resolution. Worse still, the vendor benefited from the same type of failure recurring again and again and again. The buyer had unwittingly locked themselves into a one-way contract: their choices were to live with discrepancies or pay more money to the vendor for more labor capacity to correct them. The obvious fix was to change the terms of the contract, rewarding the vendor for resolving the discrepancies at their root cause rather than rewarding the vendor for solving the same problem over and over and over. This they did, and the net result was a massive reduction in recurring errors, and a concomitant reduction in the contract labor necessary to resolve errors.

This was, once again, a problem of playing the cards that had been dealt. For years, management had defined the problem as containing spend on defect / discrepancy resolution. They hadn't seen it as a problem of continuous improvement in which their vendor was a key partner rather than a cost center to be contained.

There are tools that can help liberate us from constraints, such as asking the Five Whys. But such tools are only as effective as the intellectual freedom we're allowed in pursuing them in the first place. If the root question is "why is test throughput so low given the high volume of test capacity and the high rate of test execution", or "how can the support staff resolve defects more quickly to create more capacity", the exercise begins with confirmation bias, in this case that the operating model (the test team composition, the defect containment team mission) is correct. The Five Whys are less likely to lead to an answer that existentially challenges the paradigm in place if the primary question is too narrowly phrased. When that happens, the exercise tends to yield no better than "less bad."

It's all well and good for me to write about how I saw through a QA problem or a support problem, but the fact of the matter is we all fall victim to playing the cards we're dealt at one time or another. A vendor paradigm, a corporate directive, a program constraint, a funding model, an operating condition - any one of these can limit our understanding of the actual problem to be solved.

But reflecting on it is a reminder that we must always be looking for different cards to play. Perhaps now more than ever, as low contact and automated interactions permanently replace high contact and manual ones in all forms of business, we need to be less intellectually constrained before we can be more imaginative.