I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Tuesday, February 28, 2017

Our Once and Future Wisdom: Re-acquiring Lost Institutional Knowledge

Last month we looked at the loss of institutional memory and the reasons for it. This month, we look at our options for re-acquiring it.

The erosion of business knowledge is not a recent phenomenon. Management textbooks dating at least as far back as the 1980s included stories of employees performing tasks whose purpose they didn't really understand. The classic reference case was usually some report people spent hours crafting every month and distributed to dozens of managers and executives, none of whom read it because they didn't know what it was for. Those execs never put a stop to it because they assumed another exec knew why it was important. Then, during the much-anticipated system replacement, some business analyst tracked down the person who wrote the report specs so long ago; after he was done laughing, that person told the business analyst that the crisis that triggered the need for that report had ended many years ago, and he couldn't believe they were still wasting time producing it.

This story always seemed apocryphal - of course that could happen, but people are smart enough that it wouldn't really happen - until I saw it first hand at an investment bank just 6 years ago.

Natural (retirement) and forced attrition (layoffs) have long robbed companies of their knowledge workers. The rise of automation has simply made their loss more acutely painful. Knowledge shows up in the accounts only on the income statement, in the form of the salaries of experienced and tenured employees; unfortunately, the value of their knowledge has no representation on the balance sheet. Extracting greater cash flows through payroll reduction is value-destructive in ways that accountants cannot (or at any rate, do not) measure.

If we have a business that hasn't yet gone full zombie that we want to pull back from the brink, what can we do to re-build business knowledge? There aren't a lot of high cards we can draw, but playing them in the right combination offers us a strategy. None of these are discrete solutions; they are a collection of non-mutually-exclusive tools that we can use to bridge a knowledge gap.

Tool 1: Dolly the Sheep

Companies that are heavily rule-based - think insurance - eagerly moved their business rules into code. Those rules were easy to move into code; they're just as easy to move back into a human-readable format. Hire some developers fluent in a legacy technology, make sure you have an objective way of auditing their extraction of the rule base, and identify a cadre of employees who can understand those rules well enough to more comprehensively catalog and contextualize them. It's cheap (people paid to document code will be less expensive than people paid to create code), it's hygienic (preservation of business information is a good thing), and it makes our business rules accessible to a wide audience spanning business users, managers, business analysts, and quality assurance engineers.
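To make that concrete, here is a minimal sketch of what a first extraction pass might look like, assuming the legacy rules are expressed as IF/THEN statements in source files; the file extension, the regex, and the catalog fields are hypothetical stand-ins rather than a real toolchain:

```python
import re
from pathlib import Path

# Assumed convention: rule-heavy legacy code expresses business rules as
# IF ... THEN ... statements; adjust the pattern and extension to the real dialect.
RULE_PATTERN = re.compile(r"IF\s+(?P<condition>.+?)\s+THEN\s+(?P<action>.+)", re.IGNORECASE)

def extract_rules(source_dir: str) -> list[dict]:
    """Scan legacy source files and emit a first-cut, human-readable rule catalog."""
    catalog = []
    for path in Path(source_dir).rglob("*.cbl"):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            match = RULE_PATTERN.search(line)
            if match:
                catalog.append({
                    "file": str(path),
                    "line": line_no,
                    "condition": match.group("condition").strip(),
                    "action": match.group("action").strip(),
                    "business_context": "",  # to be filled in by the employee cadre
                })
    return catalog
```

The empty business_context field is the point of the exercise: developers can mechanically produce the raw catalog, but only the cadre of employees described above can fill in the context.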

Of course, this is data, not information. A working foundation of facts is better than none, but facts are of limited value without context. And, while it's easy to reverse-engineer facts like rules, it's not so easy to forensically construct the business contexts that encapsulate those rules. A clone of something extinct - our lost business knowledge - runs the risk of suffering severe defects. For example, ghost code - code that is not commented out but will conditionally never be executed - is likely to be confused for real code in a reverse-engineering exercise. The facts are fantastic to have, but facts are not knowledge.
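As a hypothetical illustration of the ghost-code trap, consider a fragment like this one, which a purely mechanical reverse-engineering pass would happily record as a live business rule:

```python
# Hypothetical example of ghost code: syntactically live, conditionally dead.
LEGACY_SURCHARGE_PROGRAM = False  # crisis-era program retired years ago; the
                                  # flag was never removed from the code

def quote_premium(base_premium: float, region: str) -> float:
    premium = base_premium
    if LEGACY_SURCHARGE_PROGRAM and region == "GULF_COAST":
        # This branch never executes, but a rule-extraction pass that only
        # reads the code would record "Gulf Coast policies carry a 12%
        # surcharge" as if it were a live business rule.
        premium *= 1.12
    return premium
```

The branch is syntactically real but conditionally dead, which is exactly why extracted facts still need people who can supply the context around them.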

Tool 2: Seek the Jedi Masters

Somebody (well, somebodies) figured out how to automate the business. There are people behind the systems to which we're bound today. Why not put them back on the payroll? If they're still alive (always a good start), local and accessible, and grateful to the company for the income that put food on their table and their children through college, welcome them home. Techniques like value stream mapping bring them back to a business-operations mindset, allowing the business "why" in their heads to be extracted in a structured and coherent manner.

Of course, this isn't as simple as it sounds. Former colleagues won't come cheap. A knowledge worker who was forced out years ago may not feel inclined to share the wealth of their knowledge. The business will have evolved since the time these knowledge workers left. Corporate policies may also interfere with a re-recruitment campaign: one company I worked with forbade engaging contractors for more than 24 months, while another forbade contracting former employees at all.

You could also hire people who work for a primary competitor: in his book The Competitive Advantage of Nations, Michael Porter pointed out how industries tend to form in clusters, so if you're in an industry that isn't post-consolidation there's a good chance you've got a direct competitor nearby, offering a source of business knowledge you can recruit from. Again, this isn't as easy as it sounds. It's hard enough determining whether the people in our own business really understand the business "why" behind the things that they do, or whether they just know the complex motions they go through. It's even harder to do that with people grounded in another company's domain: if our own business knowledge is in short supply, we won't be able to ask the abstract questions needed to gauge their comprehension of the business; plus, we may speak fundamentally different languages to describe how the business is implemented. If their knowledge is too finely grained - that is, too specific to the context of our competitor - it won't travel: they're a subject matter expert in our competitor's operations, not in the industry. And if our loss of business fluency was the result of corporate blood-letting, it's highly likely that our competitor up the street has done much the same, and will be no richer in domain expertise than we are.

One final word of caution: we have to challenge the "why" the experts give us. Ten years ago, I was leading an inception for a company replacing their fleet maintenance systems. Their existing system was a combination of a custom AS/400-based RPG system (that had started life on a System/34), a fat-client Visual Basic application, and thin-client Java tools, all of which required manual (operator) steps for data integration with one another. The user got to a step in the workflow in the VB application, then transferred data updates to the AS/400 and resumed there, then transferred data updates to the Java application and resumed there, all over a period of days or even weeks and often going back and forth. Their experts genuinely knew their business, but had grown so accustomed to the data transfer steps that those steps ended up baked into the initial value stream maps we produced. It took a lot of challenging the "why" on those specific portions of the value stream before they understood how a simple shared database would eliminate lots of no-value-added inventory control steps.

Still, maintaining a connection with the people who were there at the creation helps us identify the things we most need to know if we're going to evolve the business or pivot away from it. In much the same way as air traffic controllers are taught how to land planes in the event the software fails on them at a critical moment, former knowledge workers can help re-build our knowledge from the ground up.

Tool 3: Buy Before You Try

If you're on your way to becoming a zombie company, why not eat someone else's brain? Re-constructing a lost capability is expensive, so buying a competent operating company - along with its digital assets - is a shortcut. This assumes that you as the buyer can make an informed decision about the competency of the people you're acqui-hiring. It also assumes that the people in the acquired company stick around after the acquisition.

A reverse-acquisition can take one company's girth and bloat and wed it to another company's core nimbleness and agility. But M&A is ego-driven: the CEO or board member who wants to do a deal will see the deal through regardless of the state of the acquirer or target. A few years ago, I worked with a holding company that had bought two competing firms that collected data about banks and sold it on a subscription basis. As their product was becoming digital, the value of the data they sold was plummeting (as the value of most data tends to do when it becomes digital), so we helped them define a strategy to combine the companies and transform them from providers of data to providers of digital tools. Three days into the inception, we were frustrated that the workshops had ended up with incomplete and unsatisfactory levels of detail. We hypothesized that the reason was that the experts weren't all that expert. On day 4 we ran a series of experiments in our workshops to test this hypothesis, and in the process confirmed that the activities they performed in the acquisition, curation, publication and distribution of the data they sold were performed for reasons of rote, not reason. The inception was successful in that it exposed an inability to execute on the strategy in the manner they had hoped, which led to an entirely different approach to execution.

Buying is a shortcut, and as Murphy's Law teaches us, a shortcut is the longest distance between two points.

More modestly, we can simply license technology to replace major portions of legacy systems, and train or hire experts in that technology. This, though, substitutes solution knowledge for business knowledge, and the former isn't necessarily a proxy for the latter: even though commercial ERP systems have largely replaced home-grown ones, those commercial solutions are highly customized to the needs of the business.

Tool 4: Play Calvinball

We're barraged by business media exhorting us to be internal drivers of digital "disruption" because it puts our competitors at a disadvantage, challenging their leadership by forcing them to chase after us. But disruption is also a means of rebuilding lost business knowledge: if we change the rules of the game, we're less restrained by our current assets and procedures. The more we can change, the more we set a new agenda in the competitive landscape. Ideally, we should be playing Calvinball, making up the rules as we go along.

Disruption is a tool, not a solution. Disruption may be constrained by prevailing legislation and regulation, and regulators tend to look at established firms differently from upstarts - if they look at the upstarts at all. In the wake of the 2008 financial crisis, bank lending declined in response to higher capital requirements against risk-weighted assets and tighter lending standards; marketplace lenders skirted balance sheet restrictions and lending regulations simply by not being chartered banks. That allowed marketplace lenders to underwrite loans with much more flexibility than a bank. The door to this type of disruption was closed to banks. As with Calvinball, it is the player with the ball who makes the rules, and banks (like many other regulated businesses) aren't the ones holding the ball.

Plus, when we build on existing business rules rather than replace them, we're not moving away from a dependency on fundamental knowledge that we don't have. Re-imagining how an existing offering is packaged, distributed or even consumed doesn't alleviate the need to understand the core characteristics of that offering.

Making the Best of a Bad Hand

Re-gaining lost business knowledge is a slow, sometimes difficult, and usually expensive proposition. Pushing too hard to re-acquire it is like beginning to learn calculus and non-Euclidean geometry the night before a comprehensive final exam: a grade of D- would be a small miracle. But, since strong business knowledge is key to executing any business strategy pursuing growth or evolution, a grade of D- isn't going to cut it.

Worry less about the slow rate of re-acquisition and think instead about where you want your business fluency to be in 6 months, 12 months, and beyond, and how much more effective your organization will be at those times. That guides the extent to which you employ each of the four techniques described here and how they get you to a greater state of fluency so that you can operationalize the business strategy. For example, contracting legacy-language developers to capture encoded logic and hiring in a couple of retired employees for value stream mapping sessions, all in exchange for donuts and a fat payday for a few months, may be an effective and inexpensive precursor to an acquisition, or provide suitable grounding to initiate disruptive change that re-writes the rules of an industry.

This requires us to prioritize organizational learning alongside operating performance and delivery goals. The latter two are quantifiably measurable and glare at us from our financial statements; the first is not and does not. A commitment to learning is an investment that needs board-level visibility and air cover: without the learning there is no execution, and without the execution the strategy is just an elaborate PowerPoint. Board-level patience isn't infinite, so in exchange for an investment in learning, line management will have to commit to strategic execution - even if it has to commit to execution before it has re-learned as much as it would like.

The alternatives are acquisition (sooner rather than later, at peak value for the book of business the company still commands) or slow obsolescence (and concomitant market irrelevance). Since trading a commitment to learn for a commitment to strategic execution gives the people in the company a fighting chance, it's a fair exchange.

Tuesday, January 31, 2017

Where Has All the Business Knowledge Gone?

I was queuing for a flight late last year when two people standing behind me started talking about how disappointing their trip had been. They were consultants in logistics, and they were lamenting how one of their clients was struggling in the wake of a business process change that another firm - a tech consultancy - had agitated for their mutual client to adopt. The client in question purchased and warehoused perishable items, hundreds of thousands of different SKUs that they distributed to retailers ranging from tiny independents to large global chains. The distribution operation was built on efficiencies: fill delivery trucks to a minimum of 80% capacity and deliver products to customers on routes optimized for time and energy consumption. Cheap distribution keeps product prices down, which makes their retail clients more competitive. The up-start tech consultants pushed for flexibility: more frequent product deliveries made to the same stores throughout the day would keep shelves stocked, so they could better match stock to store. If there's a run on one item it can be replenished much sooner, resulting in fewer lost sales. Unfortunately, more frequent deliveries required more frequent truck dispatch; trucks could only be dispatched more frequently if they spent less time being loaded with products, so the load level of a truck fell to below 50% of capacity; expedited dispatch also meant ad-hoc rather than fixed routes, which resulted in driver confusion and receiving delays that translated into higher energy and labor costs of distribution. The intra-day re-stocking didn't recapture enough of the revenue lost to empty shelves to justify either lower wholesale margins or higher retail prices.
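A back-of-the-envelope sketch shows why the economics turned; the numbers below are entirely made up for illustration (nothing here comes from the client in question), but the shape of the math holds whenever the cost of a dispatch is roughly fixed:

```python
# Illustrative numbers only; none of these figures come from the client above.
COST_PER_ROUTE = 800.0   # driver, fuel and vehicle for one dispatch (assumed)
TRUCK_CAPACITY = 2000    # units per fully loaded truck (assumed)

def cost_per_unit(load_factor: float) -> float:
    """Distribution cost per unit delivered at a given truck load factor."""
    return COST_PER_ROUTE / (TRUCK_CAPACITY * load_factor)

before = cost_per_unit(0.80)  # trucks dispatched at 80%+ of capacity
after = cost_per_unit(0.50)   # more frequent, lightly loaded dispatches

print(f"before: ${before:.2f}/unit, after: ${after:.2f}/unit, "
      f"increase: {after / before - 1:.0%}")
# before: $0.40/unit, after: $0.80/unit -> roughly a 60% increase at these assumed figures
```

And that is before the ad-hoc routing and receiving delays add labor cost on top; the sales recaptured from fuller shelves have to pay for all of it.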

The two standing behind me were exasperated that their client "listened to those [other] consultants!"

Distribution is not a high-value-added function. Distribution can get stuff from one esoteric locale to another, but that isn't the miracle of modern supply chains. The magic they create is doing so for mere pennies. Cheap distribution can make something produced tens of thousands of miles away price competitive with the same something produced a few doors down the street. Distribution is about efficiency, because efficiency translates into price. When you're distributing hundreds of thousands of different SKUs, you're a distributor of commodities, and whether toothpaste or tungsten, the vast majority of commodity purchases are driven by price, not convenience. Capturing incremental lost sales sounds like a good idea until it meets the cold, hard reality of the price sensitivity of existing sales.

This got me to reflect: why would anybody in a distribution business agree to do something so patently counter to the fundamental economics of their business model?

They're not a company I'm doing business with, and I didn't strike up a relationship with the frustrated consultants, so I don't know this specific situation for a fact. But I've seen this pattern at a number of companies now, and I suspect it's the same phenomenon at work.

The short version: companies have forgotten how they function. They've lost institutional knowledge of their own operations.

We've been automating business calculations since the tabulator was introduced in the late 19th century, and business processes since the debut of the business computer in the late 1940s. Early on, business technology was mostly large-scale labor-saving data processing. It wasn't until the 1950s (well, more accurately, the rise of COBOL in the 1960s) that we really started capturing business workflows in code. Although few could appreciate it at the time, this marked the beginning of businesses becoming algorithm companies: all kinds of rule-based decision making such as accounting was moved into code; complex rules for functions like pricing and time-sensitive decisions such as trading quickly followed suit. As general purpose business computers became smaller and cheaper in the '70s and '80s, the reach of computer technology spread to every corner of the organization. As it did, business workflows from order entry to just-in-time manufacturing to fulfillment were automated.

The work that had previously taken people days or weeks could be done in minutes or even seconds. People with intimate knowledge of the business were liberated from computational and administrative burden. The business could grow without adding staff, and suffered fewer mistakes for that growth. Computer technology fueled labor productivity throughout the '80s and '90s as more and more business processes were automated.

Then something happened that went largely unnoticed: those business people who had devised and performed the manual processes that defined the software solutions built in the '70s and '80s retired. It went unnoticed because their knowledge was captured in code, and the developers knew the code. And the new business people hired in to replace the old were trained in how to run the code, so there was no interruption in normal business operations.

Then the original developers disappeared, either because they aged out, got better opportunities with other firms (tech people tend to change jobs frequently), or got caught up in the offshoring thing in the early 2000s. No matter how it happened, the original developers left the scene and were replaced by new people.

At this point, the cracks started to appear.

Business people knew what they did with the code and tech people knew what the code did, but neither knew why. While regular operations didn't suffer, irregular operations caused fits because nobody knew what a measured response to them was. Those fits led to bad decision-making about the software. Among other things, the new business people didn't know the simple protocol their predecessors followed to contain a crisis. While the new people had the tactical knowledge to execute tasks in the software, they didn't know how to use the software in tandem with manual procedures to efficiently respond to an irregular operating situation. On the other side of the knowledge base, the new tech people didn't know why extreme scenarios weren't accommodated in the code. Again, without the meta knowledge of how to procedurally minimize a crisis inside and outside the software, they had to defend why the software wasn't "robust" enough to deal with this crisis. Since anything and everything can be codified, they had no ready answers, nor could they chart a procedural course outside of code.

With management screaming about escalating costs and poor customer service and making assurances to higher-ups that This Will Never Happen Again, the decision was made for them. So the software bloated with new rules, complexity (If This Then That), and rarely invoked features, leaving it very well prepared to respond to the last crisis. Of course, given the nature of irregular operations, it wasn't entirely accommodative to the next crisis. Thus went the progressive cycle of code bloat and fragility.

Once the old code became so cumbersome and brittle that executives were terrified of it, they were compelled to sponsor re-invention-through-technology initiatives. These immediately defaulted into feature parity exercises because nobody on the re-invention team had sufficient business context to imagine the business differently from how it operated (tactically) today. Because the new generation of business people had never been required to master the business rules in the same way that a London taxi driver had to have the Knowledge, business users were beholden to yesterday's status quo as codified in the software that ran the business. In addition, these replatforming exercises were characterized by a shift in authority: the business people executed the rules, they didn't shape them. Tech people were the ones who manipulated the code behind the rules; they were the new shapers. Tech, not business, is the authority in replatforming initiatives.

The depletion of business knowledge and the shift of the curation of that knowledge from business to tech leads to the scenario described above: no resident adult present who can authoritatively explain why flexibility would blow out the economics of a mature distribution utility. While tech people are great at compilers and build pipelines, they're crap at business economics. Without a meta understanding of business operations, a re-invention or re-platforming initiative will be little more than a high-cost, high-intensity exercise that gets the business no further than where it is today.

I've seen plenty of companies where business understanding has been depleted. Re-learning the fundamentals is an expensive proposition. So how do we re-build a lost institutional memory? We'll look at the ways of doing that next month.

Saturday, December 31, 2016

Myths of Replatforming

Replatforming is all the rage these days.

Platforms are conceptually popular with investors: in theory, a platform makes the mundane portions of a business efficient, scalable and adaptable, allowing a company to release the creative talents of its people to pursue growth and innovation. Replatforming makes for a convincing story following an acquisition because it explains to investors how deal synergies will be achieved, sets a tone of equivalency among employees of both the acquired and acquiring firm, describes a vehicle through which the company will reinvent itself with modern technology and practices, and conveys pragmatism in the need to clean house of dilapidated infrastructure. The replatformed business promises innovation at scale and larger cash flows from operations that, combined, position it to grow both organically and through acquisition. Destiny awaits.

In practice, replatforming is messy. Core systems are a complex web of integrated rules and functions spun over generations. They're difficult to disentangle because we have all the wrong people: most of those with the business understanding that went into building those systems have long since left, while the architects and senior engineers who created the outer layers - the web and mobile components - are still on the payroll. Replatforming initiatives become a black hole almost from the start because people don't know how to "eat the elephant": their ways of working are out-dated, they struggle to come to terms with the totality of what needs to change, and they can't envision a future that is much different from the present.

I've had a front row seat to a number of replatforming initiatives over the years, and I've seen several myths that plague them from the inside.

We'll build the platform first, then change the organization once it's live. Employees will see replatforming as an exercise in re-creating software. The existing systems have their shortcomings, but we run a multi-billion dollar business on that code, so we must be pretty good at software. Let's stick to what we know to create the software - because that's what the business needs, right? - and then we'll change process and organization once it's up and running. The fatal flaw in this line of thinking is Conway's Law: software mirrors the communication patterns of the organization that develops it. When the goal of replatforming is to rapidly innovate at scale (as it is usually alleged to be), you have to start with cross-functional, integrated teams with devolved authority that can autonomously deliver end-to-end solutions. Coming from traditional organizational structures of hierarchy, shared services and specialists, that's a lot of change. It creates confusion and disorder that just about everybody is uncomfortable with. It also makes for a slow start on developing the software, which makes middle managers nervous; the more nervous they get, the more inclined they'll be to abandon organizational change. An over-managed, hierarchical, silo'd organization will develop over-engineered, tightly-coupled, brittle software; no after-the-fact restructure will compensate for that because you're bound by Conway's Law. As the saying goes, 'tis always best to begin as one intends to proceed. Replatforming doesn't succeed unless organizational change precedes.

We need a product organization. As business operations become more encoded in algorithm and less executed through manual labor, we need long-term stewardship of our software from both our business and engineering communities. The popular way of doing this is by creating a product organization. Don't bother doing that if you can only define their responsibilities in a self-referential manner (e.g., "the product owner owns the product"), if you disproportionately define the scope of their responsibilities as user experience, if they have no direct accountability for how their outcomes impact the line of the business, or if you're substituting systems knowledge (how things work) for business knowledge (why things work). Not only is this not value-generative, it adds a layer of decision-making intermediation and creates ambiguity in responsibilities for requirements, analysis, design, and prioritization. It's made much worse when product managers are alleged to have authority, but have their decisions reversed by higher-ups or invalidated by people with stronger business knowledge. Per the prior paragraph, we need to create the organization that will both create and perpetuate the platform. Creating a mature product organization is hard. Better to encapsulate product responsibility into the line of the business (where it arguably should be in the first place, and not adjunct to it, but that's another blog for another day) than to create a product organization stuck in perpetual adolescence. The latter will result in systemic distortions in the code, courtesy of Conway's Law.

We need to put all business requests on hold for the next [long period of time] while we replatform. The older a company is and the more acquisitions it has been involved with, the more far reaching and complex its core systems will be. The more depleted it is of business knowledge - the why, not the how - the more mysterious those systems will be. Employees will be predisposed to define a new platform through the lens of feature parity with the old. The lower the caliber of business knowledge (that is, the more that system knowledge substitutes for business knowledge), the higher the degree of feature parity that employees will insist defines the minimum viable product - a position reinforced by traditional software development methods that released software once every few months, not days or hours. Additionally, ambitious replatforming efforts lay bare deficiencies in organization, skills, capability, knowledge, process, and infrastructure. Changing those takes time. These two points are conjoined to form the mistaken belief that business needs will have to take a back seat while people figure all this stuff out, but those business needs would be deferred regardless because there's no way to go live with a partial solution, and we can't very well pursue a moving target. To destroy this myth, start with that "long period of time" the business is being asked to wait. It always seems to be on the order of a year to a year and a half. At the very least double it, because nobody's estimate for something they've never done before is going to be particularly accurate. Think your customers will wait that long - two to three years - for you to get your act together? At best, a fork-lift replacement will get you tomorrow where you needed to be yesterday. From an engineering perspective, progressive strangulation of incumbent systems is almost always a question of will, not possibility. From a business perspective, progressive strangulation is a question of personas and user journeys, not features and functionality.
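For what "progressive strangulation" can look like in practice, here's a minimal sketch of a strangler-style facade; the hostnames and journey paths are hypothetical, and a real implementation would more likely live in a reverse proxy or API gateway than in application code:

```python
# Minimal sketch of a strangler-style facade: route migrated user journeys to
# the new platform, default everything else to the incumbent system.
# Hostnames and journey paths are illustrative only.
LEGACY_BASE = "https://legacy.internal.example.com"
NEW_PLATFORM_BASE = "https://platform.internal.example.com"

# Journeys migrate one at a time; this set grows as the new platform strangles
# the old system, and a cut-over is reversible by editing configuration.
MIGRATED_JOURNEYS = {"/quotes", "/policy-renewal"}

def route(request_path: str) -> str:
    """Return the upstream base URL that should serve this request."""
    if any(request_path.startswith(journey) for journey in MIGRATED_JOURNEYS):
        return NEW_PLATFORM_BASE + request_path
    return LEGACY_BASE + request_path

assert route("/quotes/1234").startswith(NEW_PLATFORM_BASE)
assert route("/claims/5678").startswith(LEGACY_BASE)
```

Every journey that moves behind the facade is a partial go-live, which is exactly the thing this myth insists can't be done.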

We'll build it the way we should have built it if we knew then what we know now. It's tempting to think our best bet is to re-build something with the wisdom our people gained from the experience of building it the first time round. That's a risky bet. It assumes they've drawn the right conclusions from their experiences. That's a lot to hope for, since that requires a lot of introspection, critical analysis, and personal acknowledgement of error and mistake, something most people aren't predisposed to do. It also assumes they've kept current with technology and process. Microservices, containerization, elastic cloud, and continuous delivery are significant departures from where software development was just a decade ago. The people who got the company into the mess it's in aren't the people who will get the company out of it. In their hands, you'll get what you've already got, only worse, like multi-generational in-bred aristocracy. Replatforming requires forward thinking technology, ideas, and execution. Changing culture, capability and mind-set requires a major transfusion; be prepared to lose a lot of blood.

Aging infrastructure, cheap capital and a dearth of innovation have fueled consolidation in a variety of industries. Replatforming will be with us for as long as those conditions exist. The myths don't have to be.

Wednesday, November 30, 2016

The Patrol Method and Objections to Self-Directed Agile Teams

In the previous post, we saw there are quite a few similarities between the Patrol method and self-directed Agile teams. It stands to reason that the resistance, doubt and objections faced by each from sponsors, leaders and members alike will be very similar. If that's the case, one can learn from the other.

These excerpts from the 1950s edition of the Scoutmaster's Handbook will sound familiar if you've ever tried to implement a self-directed Agile team:

"Some don't grasp the possibilities of the patrol method, and subsequently don't see the importance of it." If you can't appreciate the adaptability and agility of an organization characterized by strong capability and strong leadership throughout the ranks - one where people are continuously completing, learning, and adjusting to an extent that they perform best with team-level autonomy rather than top-down hierarchy - you won't see the value in a method designed to bring that about.
"Some lack faith in boys' ability to carry out responsibilities." You have to be willing to trust in people's capacity to both learn and make good decisions. Of course, trust is earned and not given, and while it takes time to develop it takes only seconds to erode. But an organization is a lot more efficient when it gets by on trust, because trust requires far less supervision and administration. Again, you have to be committed to what the right organization can deliver, and not make the organization and it's people hostage to the deliverables themselves.
"Some give up if it doesn't function perfectly right off the bat." Self-directed teams are a departure from traditional command-and-control hierarchy. The initial experience - and the initial results - can be very poor if self-direction is adopted swiftly and suddenly: self-direction requires very different behaviors and these can take a long time to develop. Until they do, teams will experience mission confusion as they come to grips with new expectations, interruption caused by external dependencies and boundaries they now have the responsibility to negotiate and manage, and seizure as people struggle to come to terms with responsibility for an entire solution rather than discreet, small tasks. How the team struggles with Tuckman's stages of group development will be mirrored in its results: at best a roller coaster where results are up one iteration then down the next, at worst a flatline where they struggle to get anything across the line. If we're not cognizant of (or better still, actually measuring) the development of new organizational "muscle memory", then the appearance of chaos within the team twined with few deliverables to show will cause the faint of heart to bail before the team gets through forming, norming and storming, to actually performing.
"Others don't like to part with authority. They found a chance to play and show off their specialties and don't realize they're stifling leadership development." Some people responsible for delivery of large programs or simple projects may be more comfortable concentrating authority rather than distributing it. This is an indication that they're more worried about the success of what they're responsible for than they are building up the people who can successfully deliver. While they may be good executors - a safe pair of hands you can trust to get something across the finish line - they're not good organization builders who can sustain what they create. Curiously enough, while self-directed teams are often accused of "not scaling", it is the characteristics common to command-and-control hierarchies - asymmetric knowledge distribution and concentrated decision-making - that don't scale.
"Still others have the old idea of training by mass instruction too ingrained in their systems to change." The best way of learning is to do something and not just study it. But on-the-job training can be very expensive, especially if somebody does, and re-does, and re-re-does, and still doesn't get a basic grasp of what it is they're doing. There is a certain allure to separating skill acquisition from skill application, if for no other reason than we can measure the effort spent on skill acquisition in the form of number of hours spent in classroom training and number of people who earn certifications. Things like training and certifications are proxies for competency: if we earn the certifications we will know what we're doing which will jump-start our execution. This makes people feel good about progress toward plan. But giving training to an individual and that individual demonstrating competency are entirely different things. The prior is useless without the latter.
"Also, some by temperament aren't suited to this way of training and are happier in a system other than the Patrol method." Quite a few people can't function without a strong hierarchy. Asking them to perform in a system defined by collaborative teams of peers who are self-directing themselves toward reaching a goal is an unreasonable expectation, and potentially damaging, to somebody who simply can't function that way.

From this, we can glean the characteristics a leader committing to the formation of self-directed teams needs to have if they are to succeed.

Humility - Self-directed teams are about their members, not their leaders.

Patience - A self-directed team will suffer setbacks and disappointments, particularly in the early going. The point isn't to obsess about failure, but to make sure every member of the team learns from it and doesn't make the same mistakes again.

Faith in people - If you believe people need to be told what to do, rather than trusting that they can figure out what they need to do, you'll not get far with self-directed teams.

Belief in the wisdom of crowds - You have to believe that the whole is greater than the sum of the parts, and that a hive-mind will come up with a better solution through execution and continuous feedback than one dictated to them by a small cabal of people.

The previous two points suggest that you have to espouse a professional rather than an industrial mindset. If you can do that, you have the mindset for self-directed teams.

Commitment to the method - Above all else, you have to believe that if faithfully applied, the method will create the conditions - generalized skills, strong leaders, and independently-functioning units - under which a program is far more likely to succeed than if it is delivered in a command-and-control style. If you lack confidence in the values behind the method, you'll be quick to abandon practicing the method.

"The troop that is run as a club, with the Scoutmaster as boss, dies when the boss leaves." Hierarchies with central decision-making can be effective, but they are brittle because of their dependence on a handful of key people. If your goal is to build a resilient, evolutionary, adaptive organization, the price of admission is decentralization. Decentralization requires empowerment; empowerment requires atomic leadership and capability. The history of organizational development teaches us that the process of building an organization that can function this way is a very difficult one indeed.

Monday, October 31, 2016

The Patrol Method and Self Directed Agile Teams

As part of my research into method earlier this year, I picked up a 1959 edition of the Scoutmaster's Handbook. The core of the philosophy for a Scout troop was what Robert Baden-Powell, the founder of Scouting, called the Patrol method. The early editions of the Boy Scouts of America's version of the Boy Scout Handbook were mostly written by the same person, William "Green Bar Bill" Hillcourt, and revised over many years.

The Patrol method Hillcourt described in the Scoutmaster's Handbook was essentially a self-directed Agile team. The key characteristics are:

  • Small team size: patrols are 6 to 8 people. Several patrols form a troop, but patrols are autonomous.
  • Pairing: experienced people teach new people on the team.
  • Continuous feedback: the Court of Honor is "...a peer system in which Scouts discuss each other's behaviors and is part of the self-governing aspect of Scouting."
  • Servant leadership: achieved through an emphasis on service to others (an expectation shared by all), as well as stressing that the highest leadership roles are expected to assist those in the troop to train themselves as opposed to telling them what to do. "A Scoutmaster's job is to help boys grow - by encouraging them to learn for themselves."
  • Hands-on over theory: "No meeting should be inside - all activities should be outdoors".
  • Respond to change: "If the planned program doesn't work, be resourceful. Throw some out, if necessary, to suit conditions."
  • Transparency: "Encourage members of the troop committee to attend regularly."
  • Stakeholder management: "When they come, have something definite for them to do."
  • Chickens and pigs: "Keep visitors on the side lines. Most of the time visitors come to see what is happening. Don't let them interrupt the meeting."
  • Generalize skills by rotating pairs and responsibilities through the duty roster instead of allowing people to specialize in tasks.
  • Tool construction: pioneering techniques forge useful tools from available resources that make you more productive and comfortable.
  • Each team owns the plan: troop goals and patrol objectives are set by members of the patrols themselves, not dictated by the adult leadership.
  • Adaptability in technique: "Fortunately, there is no standard way of planning the program of a troop. A group of robots using a standard pattern in exactly the same fashion would pretty soon kill Scouting. Each troop works out its own way..."
  • A code with positive goals: the Scout Oath and Laws provide a value system for conduct, in much the same way that the Agile Manifesto is a value system for software delivery.

There are many more similarities I could draw out between the Patrol method and Agile teams. The point isn't to suggest that the concept of self-directed teams was influenced by the Boy Scouts - it doesn't matter whether it was or not. Nor is it that there are no new leadership philosophies under the sun - servant leadership concepts are at least 2,500 years old at this point.

The point is to learn from what the people championing that method experienced when they applied it: intransigent doubt that the method can work because it turns leadership responsibility over to the team, or that learning-by-doing is inferior (e.g., doesn't provide value for money, or isn't more effective) to training by mass instruction.

If the concepts aren't new, the objections to them aren't, either. The strengths of a self-directed team might be self-evident to the initiated, but they're not an easy sell to those who are not - for reasons that have been with us since time immemorial. We can learn from their setbacks.

In the next post, we'll look at how champions internalized objections to the method, and what they observed happened when the method was compromised for sake of implementation.

Friday, September 30, 2016

Ecosystems and the Energy Source of Last Resort

It's fashionable for a company to proclaim itself an ecosystem. A mobile phone company makes handsets for users and curates an app marketplace for developers. A virtuous cycle follows: an ever-increasing collection of apps motivates an ever-increasing population of consumers. These companies have the benefit of steady cash flows from existing customers and constant growth from new ones attracted by an increasingly complex array of products. There are a number of self-proclaimed commercial ecosystems, ranging from online lending to conglomerates of retail, credit and loyalty.

Markets are kind of like ecosystems in the way participants reinforce one another. Buyers and sellers come together in sufficient numbers to perpetuate a market. As more buyers emerge, more sellers offer their wares in the market, which attracts still more buyers. An increase in the number of buyers triggers more complex and diverse products, making the ecosystem more interesting, if not more robust. To wit: demand for tomatoes triggers cultivation of different varieties, some of which are resistant to disease or insects that others are not, increasing the resiliency of the lycopene trade.

Ecosystems aren't inherently complex: a simple terrarium consisting of a lamp, dirt, water and seeds will yield plants. Commercial ecosystems aren't complex, either. We can stand up marketplaces for mobile phone software or money lending or property investing. In doing so, we hope to encourage people to risk their labor by writing software they hope people will buy, or to risk their capital in the hope it finds a worthy investment. With the right marketing and promotion (i.e., fertilizer) we might attract ever more buyers and ever more sellers, creating a growing and ever more active community.

One thing an ecosystem needs to survive is a constant supply of energy. The sun provides an uninterrupted supply of energy to the Earth. It can be consumed immediately (e.g., through photosynthesis). It can also be stored: liquefied dinosaurs are effectively stored energy that originated with the sun. Energy from the sun can be concentrated in many other forms, and remains accessible to parts of the planet even when they're not directly exposed to it. This allows the formation of more complex life and lifestyles. Some spot on the Earth may suffer drought or fire or some other disaster that wipes out the basic plant life that supports more complex life forms, but the constant energy from el sol means that a devastated area has a source of energy it can draw on to re-develop.

In commercial ecosystems, capital is energy. Markets are highly vulnerable to periodic contractions of liquidity. Both asset classes and tech products fall out of favor, destroying the fortunes of sellers quickly (bank equity values in 2008) or slowly (Blackberry software developers from 2008 onward). Turn off the lamp and the terrarium becomes barren.

Markets require a constant supply of capital in the same way that ecosystems need a constant supply of energy to survive volatility and seizures. In financial markets, there are market makers who guarantee a counterparty to every trade and buyers of last resort who provide liquidity in the event of a sudden seizure of market activity. It's the latter - the Federal Reserve and the European Central Bank buying sovereign and commercial paper as well as lending to banks with the expectation that they will do the same - who act as the constant supply of energy that keeps commercial ecosystems functioning. Markets will surge and markets will plunge, but it is the "energy source of last resort" that sees markets through the peaks and troughs.

Economic cycles - credit or tech - aren't a new phenomenon. When they turn, they expose the fragility of the businesses at their mercy. Late last year, lending marketplaces found themselves with plenty of loans they could write but fewer investors willing to buy them. The solution they turned to was to introduce a buyer of last resort, initially in the form of banks and eventually in the form of investment vehicles they created themselves.

Any self-proclaimed ecosystem without a backstop buyer - that is, without a constant and reliable source of energy - will be at the mercy of commercial cycles. Mr. Market will not hesitate to turn off the terrarium lamp when the cycle tells him to do so. Once off, he is not so willing to turn it on again. But he might not reach for the switch in the first place - and might very well be first to harvest green shoots after a devastation - as long as there is an energy source of last resort.

Wednesday, August 31, 2016

Method, Part II

Last month, we looked at method as a codification of experience born of values and expressed through rules, guidelines, practices, policies, and so forth. This month, we'll take a look at the relationship between method and the things that influence it, and that it influences.

The principal framework is an article by Cliff Jacobson describing the change in method that impacted camping and outdoor activity starting in the 1950s, drawing comparisons to changes in method in software development. When we think of method in software, we generally think big: "Agile versus Waterfall". But there are more subtle changes that happen in method, specifically through the codification of skill into tools.

Plus ça change...

* * *

Method...

... and Values

"Environmental concerns? In those days, there were none. Not that we didn’t care, you understand. We just didn’t see anything wrong with cutting trees and restructuring the soil to suit our needs. Given the primitive equipment of the day, reshaping the land was the most logical way to make outdoor life bearable.
"In 1958 Calvin Rutstrum brought out his first book, The Way of the Wilderness. Suddenly, there was new philosophy afield. Calvin knew the days of trenched tents and bough beds were numbered. His writings challenged readers to think before they cut, to use an air mattress instead of a spruce bed. Wilderness camping and canoeing were in transition."
-- Cliff Jacobson

As values regarding nature changed from "tame the land" to "conservation", the method of camping had to change. Of course, it took a long time for the new values to settle in. And even once it did, it took a long time for practitioners to change what they did. Resistance to change is a powerful thing, and both practitioner and gear lag would have kept practitioners executing to an old value set in the field for a long, long time.

Values changed in software, too. When users of software were largely internal to a company, before software became such a high cost item for non-tech companies, and before software was weaponized, development moved at a much slower and more deliberate pace. Once the values changed, the method of software delivery was also pressured to change. The change in relationship between humanity and the outdoors is similar to the change in relationship between companies and their software.

... and Skills

But this narrative applies both to a wholesale change in method and to the transition from skills-centric to tool-centric method.

"I discovered the joys of camping at the age of 12 in a rustic Scout camp set deep in the Michigan woods. It was 1952, just before the dawn of nylon tents and synthetic clothes. Aluminum canoes were hot off the Grumman forms, though I’d never seen one. Deep down, I believed they’d never replace the glorious wood-ribbed Old Towns and Thompsons."

Early backpackers had to make do with bulky tarps, fashioning poles and tent pegs from branches - sometimes, even sapling trees - in their campsites. The emphasis among the early outdoorsmen was on the skill of adapting the environment to human survival, and achieving Spartan levels of comfort was a symbol of mastery. Being able to fashion poles and pegs from tree limbs was important in the 1940s as tents didn't necessarily come with them. This was not only destructive, it became unnecessary with the evolution of lightweight and portable aluminum poles and stakes. In a relatively short period of time, being good at pioneering became, at best, only useful in an emergency (you need to fashion a tent peg because you discover you've lost some aluminum ones).

Building Agile trackers in spreadsheets and crafting them anew with each project was somewhat akin to fashioning new tent pegs every time you go camping. Creating a new tracker with each project was a waste of money, and having 5 different teams with 5 different trackers was confusing. The advent of cheap commercial trackers made this unnecessary. Still a good skill to have in an emergency - a project tracker so badly polluted with low priority Stories and tasks is an impediment when you want to make a clean and quick start - but fashioning a tracker is no longer itself a core skill.

... and Tools

"The emphasis had shifted from skills to things."

Early tools supporting a method are crude, often handmade and narrow in their usefulness, and several tend to spring up at the same time. The emphasis is on skills.

But with the ever increasing popularity of the activity (trips to the Boundary Waters, or Agile software development) come the tools. Skills take time to learn and master. Tools make the activity at hand easier to perform, and subsequently more accessible and more enjoyable to more people because they're more successful at it. Canoes are made of strong yet lightweight materials so they're more tolerant of misuse while simultaneously easier to portage. Sleeping bags are made of synthetic materials that are water resistant (unlike down) so that somebody who does a sloppy job of packing a canoe pack won't suffer if the bilge water soaks the contents of the bag.

Of course, tools can be a source of efficiency or a source of trouble. A hatchet makes it easy to build safe, small fires out of short cut sticks. But a hatchet can cause grave injury to somebody who doesn't know the proper method for chopping wood with it. Nobody is likely to suffer bodily harm from an over-engineered build script (no matter how many felonious thoughts cross the minds of other people on the team), but an overloaded, single-stage build that reduces build frequency and fails frequently will cause more harm than good.

"Today, high-tech gear and high-powered salesmanship have become a substitute for rock-solid outdoor skills."

As the complexities of a method get codified into the gear, it becomes difficult to separate one from the other. The tools become a proxy for the method because the state-of-practice matures in conjunction with improvements in science (materials or software) and affordability. Today, we create sophisticated, multi-stage pipelines that instantiate their deployment environments and deploy on every commit. 15 years ago, it was amazing to have a build run every few minutes that would both compile source code and run tests. We can't imagine forging our own crude tools to do basic tasks, or even why we'd want to do it.

Newer tools don't lend themselves to older practices. Tightly rolling a modern (down) sleeping bag won't get it into its stuffsack. Managing cloud instances like rack-mounted servers in a physical data center will run up the bills really fast.

This can be a serious point of confusion for middle managers tasked with making their organization "Agile": if we use Jira, Jenkins and JUnit, we must be Agile.

... and People

"I felt quite inadequate, like a peasant in Camelot."

Tools can render entire skill sets irrelevant. The right-brain creativity to fashion a tracker for some specific project was no longer needed when the commercial tracking tools arrived. It became a left-brain activity of making sure all the requirements were entered into the tracker and configuring canned status reports. Suddenly the thing somebody did that was an act of value has been rendered obsolete by the gear.

The information modeler who was capable of telling the right story based on the nature of the task and team is shoved aside by the efficient administrator whose primary job is to maintain team hygiene. It's entirely possible that the hygienist doesn't really understand why they perform the tasks they perform, but they've been told to hustle people to stand-up and make sure people update status on their (virtual) cards. They're also much cheaper than the craftsman they replaced.

This is pretty destabilizing to people. Where Cliff Jacobson was made to feel inadequate by the gear (and the associated cost), the individual can be stripped of their own sense of self-worth by a change in the method. This can happen when the method changes owing to the values (we need to deploy daily and Waterfall won't let us do that). You might have fancied yourself pretty good at software within your organization, but now the boss is telling you that your worldview is out of touch and you're not up to scratch, and you're not only told that you're going to do it differently, but how you're going to do it. That's not likely to elicit warm and welcoming feelings. Just the opposite.

But it can also happen when the change in method is a shift from skills to things. Suddenly anybody can appear to be good at project tracking. That can stir resentment that encourages people to resist the tools and pine for the spreadsheets.

The reverse - a sudden shift from tools to skills - has no less an impact. There are development stacks that are entirely tool driven. When the boss comes in and announces that all vendor dependencies in the code and process gotta go, the tool dependency no longer compensates for weak skills. The person accustomed to going glamping may not much care for back country backpacking.

... and Basics

"Chemical fire-starters take the place of correct fire making; indestructible canoes are the solution to hitting rocks; bizzard-proof tents become the answer to ones inability to stormproof conventional designs; GPS positioning has replaced a map and a compass. And the what-if-you-get-your-down-bag-wet attitude attracts new converts every year. In the end, only the manufacturers win."

Cliff Jacobson argues that tools are a poor substitute for skills. Where they support the value system - Leave No Trace camping - they're welcome. But where they are simply gadgets for convenience or separate the individual from the experience, they're not. They're also predatory, exploiting a person's laziness, or fear of being unable to master a skill, or feeling of inadequacy in dealing with challenging situations that might arise.

To some extent, the impulses that spurred the software craftsmanship movement are likely similar to those of Messrs. Rutstrum and Jacobson:

"'I’ve canoed and camped for nigh on seventy years and have never got my down bag wet,' he bellered. 'People who get things wet on trips don’t need new gear. They need to learn how to camp and canoe!'"

Pack correctly and paddle competently and you'll never sleep in a soggy bag. We don't need armies of people mindlessly executing test scripts if we build in quality in the first place.

... and the future of method.

Method is a mirror, not a driver. It reflects values and experience, it doesn't create them. Values shift as our priorities change. Experience changes as we learn what new technologies allow, and sometimes re-learn discipline long lost. Method reflects this; it doesn't inform or define this. The values and experience are there, or they are not.

Method is never a destination. It's an echo.