Thursday, May 31, 2012

Rules versus Results

Assessments are based, not on whether the decisions made are any good, but on whether they were made in accordance with what is deemed to be an appropriate process. We assume, not only that good procedure will give rise to good outcome, but also that the ability to articulate the procedure is the key to good outcomes.
-- John Kay, writing in the Financial Times
A common error in both the management and governance of IT is an over-reliance on rules, process, and "best practices" that purport to give us a means for modeling and controlling delivery. We see this in different ways.
  • Project managers construct elaborately detailed project plans that show business needs decomposed into tasks which are to be performed by specialists in a highly orchestrated effort. And detailed they are, often down to the specific task that will be performed by the specific "resource" (never "person", it's always "resource") on a specific day. This forms a script for delivery. The tech investor without a tech background cannot effectively challenge a model like this, and might even take great comfort in the level of detail and specificity. They have this broken down into precise detail; they must really know what they're doing.
  • Companies invest in "process improvement" initiatives, with a focus on establishing a common methodology. The belief is that if we have a methodology - if we write unit tests and have our requirements written up as Stories and if we have a continuous build - we'll get better results. Methodology becomes an implicit guarantor of success. If we become Agile, we'll get more done with smaller teams.
  • "Best practices" are held out as the basis of IT governance. Libraries of explicit "best practices" prescriptively define how we should organize and separate responsibilities, manage data centers, contract with vendors and suppliers, and construct solutions. If there is a widely accepted codification of "best practices", and we can show that we're in compliance with those practices, then there's nothing else to be done: you can't get better than "best". We'll get people certified in ITIL - then we know we'll be compliant with best practices.



* * *

To see how stultifying such behaviour can be, imagine the application of this emphasis on process over outcome in fields other than politics or business. Suppose we were to insist that Wayne Rooney explain his movements on the field; ask Mozart to justify his choice of key, or Van Gogh to explain his selection of colours. We would end up with very articulate footballers, composers and painters, and very bad football, music and art.
-- John Kay
It is tempting to try to derive universal cause-and-effect truths from the performance of specific teams. Because this team writes unit tests, they have higher code quality. Or, because we ask users to review security policies and procedures at the time their network credentials expire, we have fewer security issues. Does unit testing lead to higher quality? Does our insistence on policy result in tighter security? It may be that we have highly skilled developers who are collaborative by nature, writing relatively simple code that is not subject to much change. It may be that our network has never been the target of a cyber attack. A "rule" that dictates that we write unit tests or that we flog people with security policy will have variable impact depending on the context.
There are no such things as best practices. There are such things as practices that teams have found useful within their own contexts.
@mikeroberts
Suppose one team has low automated unit test coverage but high quality, while another team has high automated unit test coverage but low quality. A rule requiring high unit test coverage is no longer indicative of outcome. The good idea of writing unit tests is compromised by examples that contradict the lofty expectations for going to the trouble of writing them.

It isn't just the rules that are compromised: rule makers and rule enforcers are as well. When a team compliant with rules posts a disappointing result, or a team ignorant of rules outperforms expectation, rule makers and rule enforcers are left to make ceteris paribus arguments from an unprovable counterfactual: had we not been following the rule that we always write unit tests, quality would have been much lower. Maybe it would. Maybe it would not.

Rules are not a solid foundation for management or governance, particularly in technology. For one thing, they lag innovation, rather than lead it: prescriptive IT security guidelines circa 2007 were ignorant of the risks posed by social media. Since technology is a business of invention and innovation, rules will always be out of date.

For another, rules create the appearance of control while actually subverting it. Rules appear to be explicit done-or-not-done statements. But rules shift the burden of proof of compliance to the regulator (who must show the regulated isn't following the rule), and away from the regulated (who gets to choose the most favourable interpretation of the rules that apply). Tax law that explicitly defines how income or sales transactions are to be taxed encourages the individual to seek out the most favourable tax treatment: e.g., people within easy driving distance of a lower-cost sales tax jurisdiction will go there to make large purchases. The individual's tax obligation is minimized, but at a cost to society, which is starved for public revenues.

In IT terms, the regulated (that is, the individual responsible for getting something done) holds the power to determine whether a task is completed or a governance box can be ticked: I was told to complete this technical task; I have completed it to a point where nobody can tell me I am not done with it. It is up to the regulator (e.g., the business as the consumer of a deliverable, or a steering committee member as an overseer) to prove that it is not. The individual's effort is minimized, but at a cost to the greater good of the software under development, which is starved for attention.

When the totality of the completed technical tasks does not produce the functionality we wanted, or the checkboxes are ticked but governance fails to recognize an existential problem, our results fall short of our expectations despite the fact that we are in full compliance with the rules.



* * *

Instead, we claim to believe that there is an objective method by which all right thinking people would, with sufficient diligence and intelligence, arrive at a good answer to any complex problem. But there is no such method.
-- John Kay
Just because rules-based mechanisms are futile does not mean that all our decisions - project plans, delivery process, governance - are best made ad hoc.

Clearly, not every IT situation is unique, and we can learn and apply across contexts. But we do ourselves a disservice when we hold out those lessons to be dogma. Better that we recognize that rigorous execution and good discipline are good lifestyle decisions, which lead to good hygiene. Good hygiene prevents bad outcomes more than it enables good ones. Not smoking is one way to prevent lung cancer. Not smoking is not, however, the sole determinant of good respiratory health.

Principles give us an important intermediary between prescriptive rules and flying by the seat of our pants. Principles tend to be less verbose than rules, which makes them more accessible to the people subject to them. They encourage behaviours that reinforce good lifestyle decisions. And although there is more leeway in the interpretation of principles than of rules, it is the regulator, not the regulated, who has the power to interpret results. Compliance with a principle is a question of intent, which is determined by the regulator. If our tax law is a principle that "everybody must pay their fair share", it is the regulator who decides what constitutes a "fair share", and does so in a broader, societal context. Similarly, a business person determines whether or not software satisfies acceptance criteria, while a steering committee member assesses the competency of people in delivery roles. Management and governance are not wantonly misled by a developer deciding that they have completed a task, or an auditor confirming that VMO staffing criteria are met.

"We always write unit tests" is a rule. "We have appropriate test coverage" is a principle. "Appropriate" is in the eye of the beholder. Coming to some conclusion of what is "appropriate" requires context of the problem at hand. Do we run appreciable risks by not having unit test coverage? Are we gilding the lily by piling on unit test after unit test?

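To make the distinction concrete, consider a small, hypothetical sketch (Python, using the standard unittest module; the shipping_cost function and its tests are invented for illustration, not drawn from any real project). The first test ticks the coverage box while saying almost nothing about behaviour; the other two exercise the logic that actually carries risk. A coverage rule counts all three equally; a principle asks which of them we actually need.

```python
import unittest


def shipping_cost(weight_kg, express=False):
    """Hypothetical pricing logic: flat rate plus weight surcharge, with an express multiplier."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    cost = 5.00 + 1.25 * weight_kg
    return cost * 1.5 if express else cost


class TestShippingCost(unittest.TestCase):
    def test_returns_a_number(self):
        # Inflates coverage, but asserts almost nothing about behaviour.
        self.assertIsInstance(shipping_cost(1), float)

    def test_rejects_non_positive_weight(self):
        # Exercises the edge case that actually carries risk.
        with self.assertRaises(ValueError):
            shipping_cost(0)

    def test_express_applies_multiplier(self):
        # Pins down the pricing behaviour a customer would notice if it broke:
        # (5.00 + 1.25 * 2) * 1.5 == 11.25
        self.assertAlmostEqual(shipping_cost(2, express=True), 11.25)


if __name__ == "__main__":
    unittest.main()
```
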
We are better served if we manage and govern by result, not rule. Compliance with a rule is at best an indicator; it is not a determinant. Ultimately, we want people to produce outstanding results on the field, not to be dogmatic about how they go about it. Rules, processes, and "best practices" - whether in the form of an explicit task order that tells people what they are to do day in and day out, or a collection of habits that people follow - do not compensate for a fundamental lack of capability.

Adjudication of principles requires a higher degree of capability and situational awareness by everybody in a team. But then we would expect the team with the best players to triumph over a team with adequate players in an adequate system.