I consult, write, and speak on running better technology businesses (tech firms and IT captives) and the things that make it possible: good governance behaviors (activist investing in IT), what matters most (results, not effort), how we organize (restructure from the technologically abstract to the business concrete), how we execute and manage (replacing industrial with professional), how we plan (debunking the myth of control), and how we pay the bills (capital-intensive financing and budgeting in an agile world). I am increasingly interested in robustness over optimization.

I work for ThoughtWorks, the global leader in software delivery and consulting.

Sunday, December 03, 2006

It might make the car go faster, but does it make the car more competitive?

Frank Williams, the legendary principal of the Williams F1 team, has a single question he asks of any proposed change or innovation: “Does it make the car go faster?” This is intuitively appealing, especially for Formula 1 teams, which are in the business of winning races. We can similarly define a “litmus test” for business decisions: does any given project, initiative or decision make more money for the business?

While it’s great to have an overriding sense of priority, this isn’t sufficient. We can engineer a car that will sit on pole and lead the race, but if the engine melts or the suspension buckles or the wings fall off after 20 laps, being fast simply isn’t enough. In Formula 1, reliability is just as important as speed for winning races. And the results reflect this: in its November 2006 issue, F1 Racing magazine published an analysis of race results between 2000 and 2006.1 It concluded that Kimi Räikkönen lost two F1 drivers’ championships because of reliability problems with his McLaren-Mercedes; all other things being equal, in 2003 and again in 2005, mechanical retirements cost Kimi a championship. Clearly, going fast is one thing; going fast while meeting requirements for quality is another.

The “it makes the car go faster” question is the classic “they get results” statement heard when consuming business services or solutions. Simply making the car “go faster” at best assumes everything is in compliance with expectations (regulatory, quality, security, etc.); at worst it dismisses, if not outright ignores, the importance of these issues in the rush to achieve bottom-line results. In F1, it is important to say that teams are not in the business of “speed,” but of “winning races.” This latter definition is far more inclusive, providing more comprehensive consideration of what it takes to succeed given operating realities.

Just as the FIA is increasing regulation, businesses are subjected to both increasing regulation and expectation (e.g., “is consumer data secure?”). This will affect our business “litmus test:” we can do things that drive revenue or increase profitability, but at what cost to our competitiveness relative to the business environment in which we operate? For example, we can produce a new consumer website to drive revenue, but if there’s a 90% chance of identity theft resulting from its use, the solution is a failure. Similarly, we can cut salaries 10%, but if turnover jumps to 50%, we will have a long-term cost problem. To be competitive in business, then, just as in F1, we must have a more complete definition of “competitiveness” than simply, “does it make us more money.”

Carrying on with the F1 example, suppose the FIA changes regulations so that engines must be more “green,” or consume less energy. Should this happen, the question “does it make the car go faster” must be considered alongside not just reliability but emissions constraints, giving us a multi-dimensional problem. Success in this environment is far more difficult than simply “going really fast.”

It is important that we use the word “constraints,” as opposed to “considerations,” to describe the environment. “Constraints” promotes the value of one dimension over the others, in this case “speed” over “reliability” and “emissions.” Emissions and reliability can only lose the race for us (e.g., by putting us in clear violation of sporting regulations and disqualified, or by retiring us from the race due to insufficient tolerance of stress), whereas speed can win the race for us by putting us, and keeping us, at the front of the grid.

The business imperative, then, regardless of what it means to have a “complete solution,” remains the same: we’re in business to make money, not to comply with regulations. However, we respect the fact that we must comply with the regulations, expectations and constraints of the business environment, and we place value on our capability to manage to the complete definition of our environment. We therefore want simple yet consistent models through which we can expose, analyze, and improve that which we do, such that we maximize results given the constraints of our operating environment.

This is why IT governance is such an important capability: it gives us greater mastery over ourselves, our environment and how we interact with our environment. To that end, the equivalent question we must ask is “does it make the car more competitive?” This is less appealing, as it lacks the precision and simplicity of “does it make the car go faster.” It also makes our objective a bit more vague, with “regulation” and “expectation” soft qualifiers that introduce opinion into our assessments of “compliance with expectations.” But it is a more accurate definition of success.

And therein lies the governance gap. We’re not especially good at governance now. For one thing, our bias is toward the appearance of results (delivering something under budget, getting something in production, steering systems through transactional acceptance, etc.) over solution completeness. For another, day after day we deal with the self-inflicted wounds of security violations, spiraling maintenance costs, production failures, and so forth. The intensity of the competitive environment – both regulatory and expectation – is only going to increase. We need greater mastery over not just what we do but how we do it, such that we’re aware of and developing our capabilities, and actually delivering, complete solutions.

An improved governance capability has the following characteristics:

  • We balance our results orientation with a genuine concern for the means by which those results are achieved.
  • We embrace a governance model that gives a comprehensive definition of “complete solution” the peer status – but not disproportionate priority – relative to “bottom line results.”
  • We have the capability to answer both “effectiveness” and “completeness” questions such that governance is far more a matter of fact than opinion.
  • We do the things necessary so that the team can put itself and the car (e.g., the business) on the limit without self-inflicting failure in the form of disqualification or inadequacy.
  • We recognize that our ability to govern, as well as our ability to execute, are each maturing aspects of our collective capability; we execute to both as means by which to continuously improve and, therefore, maximize profitability given a changing operating environment.

By so doing, we constantly improve our execution and awareness of our execution, and sharpen our focus in pursuit of what matters most: not running the fastest race we can, but winning the race.

And that’s why we’re in business.


1 The race results analysis appears in the “Looking Back in Anger” column of the Pitpass section of the November 2006 edition of F1 Racing magazine. F1 Racing consistently presents as complete a picture of an industry as you’ll find anywhere, including driver, engineering, supplier, regulatory and management dimensions. They communicate in a matter-of-fact writing style supplemented with unbelievable photographic images. If you have even a passing interest in F1, read a few editions and abstract the lessons learned; you’ll find a take-away for your business or industry.

Wednesday, November 22, 2006

The Leading Indicator of IT Relevance

An article in the 16-22 November edition of BRW magazine by David James, entitled “Listen and Earn,” reports on research by Mark Ritson, an associate professor at the Melbourne Business School. His research shows a high degree of correlation (70%+) between customer loyalty and revenue growth. Specifically, the more positive customers are about a company or product to would-be customers, the greater the growth; similarly, the more negative they are, the lower the growth. It is, according to Dr. Ritson, twice as accurate as the next best measure.1

This is unambiguous guidance for business executives and managers, especially relevant at a time when growth is the business imperative.2 We can apply the same rule and draw similarly powerful conclusions for IT.

Whilst businesses are growing, IT budgets aren’t keeping pace. Gartner Research reports that on average, IT budgets lag revenue growth by about 61%; it’s as high as 63% in financial services.2 Just as the bottom-line number lags, so do the satisfaction indicators: Gartner goes on to report that 60% of CEOs see IT as an inhibitor of business imperatives,2 and Forrester Research reports that the same proportion of business sponsors of IT projects are dissatisfied with the fitness or timeliness of solutions delivered.3

If we interpret IT budgets as IT’s revenue, it comes as no surprise that they’re lagging: business dissatisfaction with IT reduces the enthusiasm to spend. Budget size is not the goal of IT; it is, however, an indicator of how relevant IT is to the business. If it is to move away from being perceived as a “tolerated nuisance” of doing business in this day and age, IT must be a driver of breakaway solutions: the things that create substantial market advantage or process efficiency for the business. IT won’t drive these unless solution satisfaction comes into alignment, bringing with it an increase in confidence in the capability of IT.

To have this information, we need the data. Fortunately, collecting custsat data need not be invasive or high ceremony: an 8-to-10-question survey consistently proctored (e.g., quarterly) will provide sufficient, frequent data. And experience shows that what gets measured gets managed, meaning there is quick impact: although making the data visible early on causes some heartache, the numbers tend to trend upward because of their visibility.
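
As a minimal sketch of how lightweight this can be – assuming a hypothetical 1-to-5 scoring scale and invented quarterly figures – the roll-up and trendline are a few lines of Python:

```python
from statistics import mean

def quarterly_scores(responses_by_quarter):
    """Average each quarter's survey responses (hypothetical 1-5 scale)."""
    return {q: round(mean(scores), 2) for q, scores in responses_by_quarter.items()}

def trend(scores):
    """Direction of the most recent quarter-on-quarter movement."""
    ordered = list(scores.values())
    if len(ordered) < 2:
        return "insufficient data"
    delta = ordered[-1] - ordered[-2]
    return "up" if delta > 0 else "down" if delta < 0 else "flat"

# Hypothetical per-respondent averages from an 8-question survey
surveys = {
    "Q1": [2.9, 3.1, 2.8, 3.0],
    "Q2": [3.2, 3.4, 3.1, 3.3],
    "Q3": [3.6, 3.5, 3.7, 3.4],
}
scores = quarterly_scores(surveys)
print(scores, trend(scores))
```

Nothing here is prescriptive; the point is that the collection and analysis overhead is trivial next to the value of a visible, quarterly trendline.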

This, then, makes the case for having customer satisfaction as a component of IT governance, not only because it is integral to answering the second governance question, “are solutions being delivered in accordance with expectations,” but specifically because it is a forward-looking indicator of IT’s relevance to the business.



1 I highly recommend reading the full article in the 16-22 November issue of BRW magazine, it’s worth the AU$3.30.
2 Gartner has produced a number of research reports in this area, including “IT Spending Lags Behind Revenue Growth in Most Industries” in August 2006 and “What’s on the Minds of CEOs and the Implications for IT” on 17 January 2005.
3 From Forrester’s 2005 United States Technology User Benchmark Study.

Monday, October 23, 2006

The Governance Gap

The demand (and need) for IT governance is increasing faster than the discipline is maturing. Core to the problem is that there is no consensus as to what is meant by "governance" in an IT context. If anything, there is outright confusion, evidenced by the polyglot of consulting and services and the number of project management products being positioned as governance "solutions." This, in turn, means there are few coherent approaches (let alone solutions) to satisfy the need.

In and of itself this isn't a problem: that governance needs to be highly tailored to an IT organisation is simply an indication of practice immaturity. But there is a significant downside: practice immaturity means an absence of best practices and a high rate of solution mis-fit. This can be seen in many current IT governance structures. Often little more than large, cumbersome reporting exercises grafted on top of operations, they rely on poorly-modeled data-gathering mechanisms that inadequately interrogate what's actually happening on the ground. Instead of providing "windows on operations," they're CYA exercises that create arms-length relationships between field decision-makers and directors. At their worst, they're inhibitors to productivity and sow seeds of mistrust.

For starters, IT governance needs a clear definition. Fundamentally, IT governance is a results-oriented exercise that can be summed up in one question: what is being achieved, given operating realities (commercial, regulatory, risk, etc.)?

This is a more active than passive definition. Governance is, in fact, an active, engaged, results-oriented discipline. Some would argue that governance is more accurately defined as providing "guidance" to people throughout an organization as to how decisions should be taken and work should be done, but this is an incomplete definition. If it were meant as "guidance," it would be called "guidance." With governance comes responsibility: if bad corporate stewardship can result in criminal penalty, it is, indeed, a results-oriented practice. That the obligation of "good governance" carries with it the need to know "how work is being performed" is simply an expansion - but not a replacement - of the definition of governance. To wit: SOX requires the CEO and CFO to certify that financial reports reflect reality; that is, that the bottom line is what they assert it to be. That the "how" is important to this act of executive certification is an extension of that basic performance-oriented question. IT governance is no different.

The über-question - what has been achieved given operating realities - breaks down into two distinct dimensions:

  1. Are we getting value for money? That is, are we maximizing return on our technology investment? This allows us to assess the effectiveness of our decisions.
  2. Are solutions being delivered in accordance with expectations? That is, are execution and delivery in compliance with corporate policy, be it security, quality, design, etc.? This allows us to assess the completeness of our decisions and results.

The first dimension is results-oriented. It's important to note that it's not concerned with the question "are we meeting plan" but "are we getting bang for the buck." IT has traditionally asked this question after the fact, measuring the results of operations / projects / programmes post-flight, as in-flight reporting is often little more than reporting to plan (and laden with opinion). Post-flight, however, is no longer sufficient: while corporate revenues are rising, cost containment is embedded in corporate culture; a recent report showed IT budgets lagging revenue growth by more than 60%. There really is a need to do more with less, and to do so on a large, organization-wide basis. In addition, the operating environment changes quarterly, monthly, weekly, daily; as a result, "meeting plan" can be more destructive than helpful. Finally, it is important to assess which IT investments will yield the greatest business impact going forward. This means the "results" question needs to be asked on an ongoing basis and framed in a portfolio context.

The second dimension is process-oriented, concerned with whether results achieved are consistent with organisational objectives and expectations (that is, are decisions being made with a full scope of consideration?) This dimension - so often the focus of "governance" and so often ignored or outright chided ("what difference does it make how it's done, just that it's done?") - carries equal status to the results question. The "how" question has historically been the source of bureaucratic burden of little operational value: the broken governance programme requiring teams to fill out monthly "compliance reports" rife with survey-borne opinion rather than operational fact. The reality is, "how" stuff is done is just as important as "what" is done. If SOX is evidence that this can't be left to trust in accounting and finance, it is not a stretch to say that it can't be left to trust in IT operations, either. Clearly, IT solutions that are vulnerable to security violation or are long-term financial drags on the balance sheet due to architectural flaws are serious matters: jobs are on the line for this stuff. As the stakes rise - e.g., as the threshold for internal investment in IT projects gets higher and comes under closer scrutiny, as the cost of failing to safeguard information grows similarly high, etc. - this can't be ignored.

Formalized governance is coming to IT. If it is imposed from the outside, IT will suffer under the weight of measurement and collection mis-fit, relegating IT to the role of "passenger" in the organization for many years to come. IT has the opportunity to incubate and rapidly professionalize a governance capability, aligning governance with execution through fact-based management that communicates performance not in an IT context, but in a business context. If successful, it will transform IT from being passenger to driver of organizational objectives.

Monday, September 18, 2006

Management Trends in Financial Services Application Development

IT in Financial Services is maturing into a business-value measured and driven capability. To meet demands for both rapid business innovation and high quality, it is moving toward highly disciplined, low-ceremony processes that give responsibility and ownership to, and enable maximum performance from, highly capable people.

Application development in the more volatile parts of Financial Services is typically characterized as being "on fire" more than it is considered a "centre of excellence." This is accepted as the m.o. largely because in these parts of the business, IT has simply ceded control to business operations, such as the trading desks of securities firms. Traders, being belligerent, noisy and aggressive, typically call the shots with IT. No doubt, they bring business urgency: millions of dollars (if not more) are on the line with each passing second. However, having ceded control, IT only manages to the moment, not to an overall capability to manage business urgency. This means a lot of mistakes and rework, and subsequently inefficiency and cost, precisely in the parts of the business that would benefit the most from disciplined responsiveness.

Financial Services IT increasingly recognizes that a lot of its inability to respond is self-inflicted. The drive toward Continuous Integration is an example: how much time is wasted and how much delay is introduced by doing a build by hand, something that's done once a week (if not more frequently) in support of a production or QA release? Build can be transparent, and just about as close to something happening in zero time as you can get in IT. Because it brings a tremendous quality improvement for relatively little cost of implementing the practices and the tools (CruiseControl being free), there has recently been a significant groundswell of adoption of automated builds in Financial Services development.
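
The shape of such an automated build is simple. Here is a minimal sketch in Python, with all hooks (revision lookup, build invocation) as hypothetical stand-ins rather than any particular tool's API:

```python
def build_if_changed(last_rev, get_rev, run_build):
    """Core of a continuous-integration loop: trigger a build only when
    the repository revision has moved since we last looked."""
    rev = get_rev()
    if rev != last_rev:
        run_build()
    return rev

# Demo with stand-in hooks; a real setup would poll the VCS and invoke
# the build tool via a CI server such as CruiseControl
builds = []
rev = build_if_changed(None, lambda: "r42", lambda: builds.append("built r42"))
print(rev, builds)
```

A scheduler calls this every few minutes; the build itself then happens with no hands on keyboards, which is what makes it "close to zero time."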

There are three specific management-oriented trends suggesting that ad-hoc, purely reactive activities are giving way to a more controlled, consistent way of working.

An emphasis on structuring, capturing and managing requirements is the first trend. Teams writing trading applications typically have little discipline in requirements definition and management. Priority decisions are off the cuff, meaning a lot of information isn't captured and expectations are simply not met. There is a growing trend toward structurally simple, highly accessible, easily manageable and ubiquitously understandable expression of software requirements. This provides a durable communication channel between the business and IT as to what needs to be done; it survives business shocks of priority change and staff turnover, and it does so in a lightweight, non-ceremonial, non-cumbersome way. To that end, a common language and structure for requirements expression - the Story - is beginning to take root, specifically because it effectively communicates the essentials: the role, need, justification and acceptance criteria driving a requirement from a business, not a technical, perspective. That Stories can be managed with open-source repositories such as XPlanner, and that these scale from the team to a global level, makes professionalizing requirements capture a low-disruption and high-value-added change.

The pursuit of uniform operational transparency is the second trend. It's increasingly less acceptable to manage by "I feel good about this" and increasingly more important to express the course, speed and direction of a project in a fact-based manner. Short delivery windows (e.g., iterative delivery) coupled with clear business requirements (the prior point) provide controlled responsiveness to change. Together, what's in, what's not, what's priority and what's bumped are unambiguously communicated; in addition, measures of team performance - targets exceeded or missed, capacity in excess or short supply - are factual, exposed and nearly indisputable. Particularly in the most volatile of business environments they provide an accurate picture of what's actually happening on the ground, both professionalizing delivery management and preventing a team from devolving into a free-for-all. Ultimately, the combination provides a great deal more information as to what everybody is doing for the business. It also uniquely and uniformly scales up to a programme or a department, giving global status reporting that exposes hotspots and happy states well in advance of delivery dates. This approach to management provides unparalleled transparency: there is true uniformity across and within a large programme of work because it doesn't allow substitution of opinion ("we're about xx% done") for fact ("xx% of requirements are QA complete.") While it takes longer to onboard, the trend toward iterative delivery and status reporting at both project and programme levels is becoming established in Financial Services IT.
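
The fact-over-opinion point can be made concrete with a trivial sketch (Python, with hypothetical story identifiers): status is derived from boolean completion states, never estimated.

```python
def percent_qa_complete(stories):
    """Fact-based status: a story counts only when it is QA complete."""
    done = sum(1 for s in stories if s["qa_complete"])
    return round(100 * done / len(stories))

# Hypothetical iteration backlog: completion is boolean, never "about xx%"
stories = [
    {"id": "S-1", "qa_complete": True},
    {"id": "S-2", "qa_complete": True},
    {"id": "S-3", "qa_complete": False},
    {"id": "S-4", "qa_complete": False},
]
print(f"{percent_qa_complete(stories)}% of requirements are QA complete")
```

Because each input is a yes/no fact, the roll-up is indisputable, and it aggregates identically at team, project and programme levels.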

Greater rigor in identifying, recruiting and retaining the best and brightest is the third trend. Financial services companies including investment banks, insurance companies and funds managers are all posting record numbers. Simultaneously, venture capital and angel money are back and bigger than ever, and the overall IT economy is healthier and more structurally sound than it was in the late unpleasantness (2001-2003). This is creating tremendous pressure on IT organizations to recruit and retain the high-performance people who thrive in dense-and-dynamic environments. To this end, financial services companies are increasingly scrutinizing the ways in which they hire, looking to identify not just good executors, but people who (a) understand how to satisfy the requirements of the business domain and (b) know how to maintain operational discipline so that they're not over-run by the business domain. This requires more than just good coders, analysts and managers; it requires people with high degrees of situational awareness and professionalism. Finding the right people takes more than technical and personality questions: a company must be seen to be a destination employer to attract them, build the screening processes to identify them, and maintain the environment to retain them. The feedback from IT to HR is already affecting recruiting practices, and it will continue to drive them in this direction as demand increases.

Of late, there has been a metrics trend sweeping through IT organizations. This has turned out to be not so much a current trend as an echo of the future, a foreshadowing of things to come. There's still demand for performance, output and compliance metrics in IT, but the realization is that the call for metrics can't be answered until the first two trends come to fruition. Once portable, consistent and durable requirements capture and fact-based project management have a firm footing, IT will be ready to address metrics again. Reliable metrics will still require integrity of completion (i.e., that every piece of code is certified to functional and technical tests prior to being called complete), but improved analytics and project management will lead the introduction of business-oriented metrics to IT.

The bottom line: Financial Services IT is moving toward low-ceremony, disciplined processes that give autonomy to enable maximum performance from highly capable people. Strength in fundamentals will ultimately allow IT to both measure performance and drive results in business impact terms.

Saturday, August 26, 2006

The Pile-On Index

When faced with extreme uncertainty in a problem – where perhaps the only certainty is that the solution will end up bigger than it appears to be today – it’s tempting for a team or its management to be aggressive with workload, to take on stretch goals with the intent of “getting ahead” or “getting on top of the problem.” Because situational uncertainty creates insecurity, the various stakeholders – executives, managers and executors – will seek out certainty of situational control. Just as there will be those who seek control, there will be those willing to feign control.

This is not leadership; in fact, it’s a management anti-pattern. Committing to a solution too soon, or being a “serial committer” of several solution paths over the life of a project, is at best a waste of time and energy, at worst poor stewardship: it often leads to instability (e.g., team burn-out, staff turnover), high operating costs, poor performance, and no results. It is also bad management: when faced with a high degree of problem uncertainty, it is impossible to know how much progress is actually being made toward creating a solution. Aggressive pursuit of completion is, therefore, pointless.

This can be called out by indexing the degree to which work “piles on” a project. This can be expressed as the sum of work in hangover (that is, work that is signed up for but not completed in the time-box) plus the expansion of scope over the same time period (new requirements, or depth of understanding that increases work estimates). “Pile-on” has a compounding effect: teams have memory; with each passing iteration of goals not achieved and discovery of new work, the team appears never to get “on top of” the problem. The project appears to be under-achieving and therefore losing more ground to an ever-expanding solution definition.
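
That sum is easy to express. Here is a minimal sketch in Python, with units (story points) and figures purely hypothetical:

```python
def pile_on_index(iterations):
    """Cumulative pile-on: hangover (work committed but not completed,
    when positive) plus scope added, compounding iteration on iteration."""
    total, series = 0, []
    for it in iterations:
        hangover = max(0, it["committed"] - it["completed"])
        total += hangover + it["scope_added"]
        series.append(total)
    return series

# A hypothetical "futility" path: stretch commitments, delivery to
# capacity, and scope expanding every iteration
futility = [
    {"committed": 12, "completed": 10, "scope_added": 3},
    {"committed": 12, "completed": 10, "scope_added": 4},
    {"committed": 13, "completed": 10, "scope_added": 5},
]
print(pile_on_index(futility))  # the rising series is the index to plot
```

The compounding is in the running total: each iteration's shortfall and scope growth is carried forward, which is exactly the "team memory" effect described above.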

We can get a sense for the effect of this by plotting a “pile-on” index.

Consider a team working in a highly volatile domain, such as an R&D exercise. In these situations it is difficult to establish a well-defined initial shared vision between business and IT, and scope management is difficult as requirements will enter, exit and re-enter the solution domain in very short periods of time. The solution is therefore something of a “moving target.” In this situation, if the team:

  • commits above its capacity to deliver (e.g., sets stretch goals)

  • delivers to capacity (hits none of its stretch goals)

  • experiences scope expansion iteration-on-iteration

it will more rapidly exhibit “work pile-on.”

In this example, while scope is expanding and creating a “scope deficit,” over-committing amplifies the degree to which work is “piling-on” the team. Instead of working at a sustainable pace to better understand the problem domain, the team (by team decision or mandate) is self-inflicting a greater degree of hopelessness. Going in mad pursuit of a “solution” when there is no clear understanding of the “problem” in the first place has the opposite effect of its intent: not only does the team not “turn the corner,” the project gives every appearance of sliding further and further into an abyss.

This is also where team memory comes into play: the final data point, showing an up-tick, is not itself representative of the corner being turned: a single point is not a trend, and the project trendline is still negative. Until that flattens out, work is still trending toward greater “pile-on.”

This index appears to hold outside of this example. Consider two other scenarios:
  • Deferred Discovery: a team defers requirements discovery to be completed during the first few iterations of delivery, but commits and executes within its capacity. This might be a situation where scope is more important than time or cost – additional requirements are known to be coming (and are hopefully factored into the release and project plans), and cadence and predictability are more important to project success.

  • Aggressive: a team working in a shrinking requirements domain, making commitments that reflect capacity but delivering slightly ahead of requirements. This might be a situation where the delivery date is more important than scope – in this case, all parties agree to clearly focus on the highest-value requirements. Shared vision and scope management are more important to project success.


Using data from representative projects, we can plot the “pile-on” index of all three situations.

The plotlines in the second graph show a relatively neutral “pile-on” index for the Aggressive and Deferred Discovery scenarios: both are working to relatively predictable paths. The Aggressive team, trending slightly ahead, is in a positive working environment where it is able to track to an early completion. The Deferred Discovery team, working at a predictable pace, may re-define its project time-box (by adding one or two more iterations) to fulfill scope. Both situations are manageable and predictable, and demonstrate teams in control of their solutions. By comparison, the “futility” path (copied from above) shows no mastery of a solution domain, and in fact shows an inability to manage in uncertain circumstances.

While working toward an uncertain (and expanding) domain can be frustrating, that frustration is magnified when stretch goals are set or imposed and not met. Feigning control in a situation of tremendous uncertainty creates more harm than good. The better management practice is for a team to acknowledge the situational uncertainty and collectively and incrementally develop a better understanding of both problem and solution. This minimizes frustration and maximizes the energy, creativity and raw bandwidth applied to drive out a solution.


Sunday, July 23, 2006

Nothing Happens in Zero Time

One of the benefits of Agile / best practices is integrity of completion: everything from requirements gathering to development to build to acceptance testing that needs to be done to drive a particular unit of business functionality to completion is encapsulated by its definition as a story. That is, we drive more finely grained units of business functionality to a state where that functionality can be utilized; that provides a boolean state of completion: 'tis done or not. This is diametrically opposed to the way in which functionality has been delivered since Eisenhower was president and Churchill was prime minister for the second time round: we have restructured business needs into technical silos of analysis, development, build, curse, test, negotiate and live-with-it. By doing so, we mortgage our future by borrowing against time-boxes in the hope that there will be sufficient bandwidth to complete inaccurately encapsulated and measured activity.

The benefits of the Agile approach are documented elsewhere; obviously the return on technology investment is much greater, as is the transparency of knowing exactly where you are at any given time. Transparency is worth underscoring specifically because of the greater degree of fact (as opposed to opinion) in status reporting: requirements are in fact done or they're not done: "done" means they're ready to go live; "not done" means they're not ready to go live. There's no middle ground, no "I'm xx% complete," no mystery 6-week-window-at-the-end-of-the-project-in-which-we'll-marshal-test-and-deploy-and-hope. Above all, we rely far less on hope. In and of itself this is a rich area of discussion which we'll continue to dissect, but it's important to acknowledge a subtle admission in all of this that must not go un-noticed: nothing happens in zero time.

When making work estimates we tend to assume that 100% of the effort required to get something done rests with developing (coding) software. In the process we overlook - and devalue - requirements gathering, unit testing, marshaling/building, integration, QA, UAT and releasing to production. To some extent, in the silo/waterfall world we collect requirements and assume the time / effort / cost to build code in support of two sets of functionality is marginally longer than the time / effort / cost to build just one. This isn't true, as each requirement is more an "exception" than a "rule." Each introduces complexity to the business solution; each is code composed by different people, which by its nature presents different challenges. The point is, non-development technical activity doesn't happen of its own volition, and in many cases increases exponentially with the number of requirements piled on, again because each requirement is really an aberration (introducing quirks, problems and issues) more than a uniform evolution of the platform.

In the same way, our "windows" for silo'd activity - separate time-boxes specifically for requirements gathering, build, QA / UAT, etc. - tend to be characterized by activity pile-on without refactoring the underlying time-box:

  • In the abstract, ever notice how so many projects have a 6 week window at the end of the project for UAT? It's never 5 weeks or 7 weeks, it's 6. Teams tend to borrow against that 6 weeks all through requirements and development, over-mortgaging it to a point where when it's called - when the project is due - there's insufficient capital (time) to cover the positions. That's because we borrow on hope that we can cover our positions, not on fact.

  • By way of specific example, in many projects, sometime during development somebody is going to migrate data; this data needs to be maintained. It often ends up a "ghost" task, sucking time away from people that they otherwise have allocated to complete (value-added) development. The same can often be said for deployment and / or environments: somebody has to own environments and own the process for putting software into production.

The point is that critical albeit non-development-specific activity ends up "piling on" during our different windows: the lead developer is now not just realizing the software but carrying a team, managing an environment and a deploy process, and keeping a data set clean. Consequently, delivery defaults to a hero-based model during the days/hours leading up to QA, UAT or production events. This, in turn, introduces a lot of delivery risk around that person's bandwidth: it completely collapses in the event of resignation or burn-out or if that person becomes a victim of the regional transportation authority (i.e., kisses a bus). Clearly it's horrific for the individual; what makes it a tragedy is that it's completely unnecessary business risk.

Consider the criticality of the underlying application and the cost of the risk being introduced by the hero-based model. Suppose failure of a software delivery means a commercial product doesn't get launched and revenue won't come in. Suppose further the burn rate on the development team is $200k / month and the monthly revenue is on the order of $10mm / month. The probability of delivery failure multiplied by the business impact of that failure (total costs incurred, lost revenue, etc.) is the expected loss: in effect, the maximum amount of insurance the business is providing to itself that it will complete development. From here, it isn't hard to construct a risk-reduction formula to identify the maximum the business is willing to invest to reduce the probability of failure. This is a time-sensitive calculation, so the sooner we're working this math the more likely a smaller increment in cost (people, resources, scope) will create a greater reduction in risk.
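As a sketch of that arithmetic: the $10mm / month revenue figure comes from the example above, while the failure probabilities (30% as things stand, 10% after investing in risk reduction) are illustrative assumptions, not data.

```python
def expected_loss(p_failure, monthly_revenue, months_at_risk):
    """Probability of delivery failure x business impact of that failure."""
    return p_failure * monthly_revenue * months_at_risk

def max_risk_reduction_spend(p_now, p_after, monthly_revenue, months_at_risk):
    """The most it is rational to invest to move failure probability
    from p_now down to p_after: the reduction in expected loss."""
    return (expected_loss(p_now, monthly_revenue, months_at_risk)
            - expected_loss(p_after, monthly_revenue, months_at_risk))

MONTHLY_REVENUE = 10_000_000   # revenue at stake per month, from the example
p_now, p_after = 0.30, 0.10    # assumed failure probabilities (illustrative)

# With 3 months of revenue at stake, cutting failure risk from 30% to 10%
# justifies spending up to $6mm -- dwarfing the $200k / month team burn rate.
ceiling = max_risk_reduction_spend(p_now, p_after, MONTHLY_REVENUE, 3)
print(f"${ceiling:,.0f}")  # → $6,000,000
```

The time sensitivity shows up in `months_at_risk`: as the delivery date nears, the window in which spending can still reduce risk shrinks, and so does the ceiling.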

Put this in a spreadsheet for any project you have in-flight. Looking at the total commercial picture - business impact including IT commitment - consider two stakeholder perspectives:

  • As the business decision maker, what would you want to know to take an informed decision at different points in the delivery lifecycle? There's less time to be responsive to changes in the business environment as the target delivery day approaches, and there's nothing worse than a negative change in the business environment that is self-inflicted.
  • Then take the IT perspective and ask yourself (a) whether it's better to personally guarantee the success of the team - especially if you don't have the ability to execute that delivery - when you suspect the guarantee to be false or hollow, or (b) whether it's better for you to be transparent on behalf of the entire team consistently through the delivery lifecycle.

If you want IT and business to be in partnership, you'd better be prepared to have this conversation, specifically in commercial terms. Those who will be politically embarrassed by a failure to deliver - the product manager, the CEO, somebody who is measured on these results - will have the authority to take necessary action to rectify the problem provided they have sufficient time to do something about it. This means they've got to know - in terms they understand (e.g., business terms) - that there's a problem in the first place. Fact-based management enables this.

Of course, there might be fear that being the bearer of bad news - e.g., that success isn't guaranteed, that there are risks - might reflect poorly on a manager. In fact, the opposite is true: calling out risk shows you're master of your domain because you're identifying and mitigating delivery risk. Hoping that things magically work themselves out exposes an inability to understand, appreciate and manage success; acknowledging the universe of activity and effort - and subsequently what could go wrong - shows clear understanding of the solution domain. I know the manager to whom I would entrust my investment decisions.

This comes back to the initial intent of this posting. In the above example of piled-on technical tasks of data management and environment issues, there's no specific visibility into what needs to be done because non-development activities are ghost tasks. In addition to everything else, this creates tremendous tension in the business-IT relationship: "When will you be done?" can't be answered authoritatively because there's no authoritative catalogue of what needs to be done and what purpose each task serves in delivering business value. Similarly, time spent pursuing things outside of coding drives the business to wonder, "Just what do you spend all your time doing?"

These are defused with fact-based management; fact-based management inherently requires acknowledgement that nothing happens in zero time.

Tuesday, July 18, 2006

Responsiveness is more than Efficiency

Being efficient - eliminating waste - does not inherently make one more effective in responding to changes in the business environment. It certainly engenders responsiveness: since I have a more precise sense of where I'm at, I can more precisely determine what I can implement in reaction, and when. But there's a difference between (a) being able to respond and (b) being aware of the need to respond and formulating an appropriate response. The latter part - awareness and solution shaping - allows an organisation to effectively capitalize on operational responsiveness.

To be responsive to our environment, we need a high degree of "situational awareness." This means we need to be very aggressive in looking outward, developing an opportunity radar and knowing how to read / interpret / shape / prioritize what it tells us. Efficiency is, by definition, inward looking: all well and good making those Model Ts for pennies until you determine that the market doesn't want to buy Model Ts any more.

To mature our situational awareness, we need to focus on business requirements management. (Of course, arguably we're not generally very good at managing requirements, but for now, let's focus on our aspiration.) There's an ideal state of Agile maturity where requirements are captured as expressions of business functionality that provide value, with each requirement globally stored and prioritized (and re-assessed) in near-real-time. If we have transparency of operations and integrity of completion supplementing well-formed and prioritized requirements, we're not only looking outward but we have aligned the entire organisation - not just development, not just IT - in so doing.

This uniformity is important. It doesn't take much to make a development team hyper-efficient, at least relative to its peers in just about any silo'd organisation. But this is every bit as disruptive (if not more so) as having an inefficient delivery organisation, only now you create a queue of unused functionality: feedback loops lose timeliness, and organisational memory is malformed or lost as people move on or simply leave. This, in turn, simply relocates the organisational (stationary) inertia and organisational waste by creating a stockpile of largely unused capital assets.

In sum, a hyper-efficient development capability buys nothing if you don't have the business capability to properly consume it: it's not sustainable, it's disruptive, it's wasteful. It's also potentially damaging: being hyper-efficient without situational awareness makes you an over-caffeinated, high-strung, hypersensitive development shop with a massive inferiority complex in a destructive relationship with your customer/business partner.

So efficiency is a goal, but let's not lose the wood for the trees. How requirements are shaped, communicated and prioritised provides IT a critical external view that operational efficiency alone does not. Effective requirements management enables you to do more than just respond: it matures your situational awareness, letting you take informed decisions about what and how to respond. It also brings the business and IT into the same structure of execution, creating alignment and partnership (as opposed to a subservient relationship of one to the other) in a social system of peers. That, in turn, engenders organisational cadence.

Monday, July 17, 2006

Fact-Based Management

We should be able to draw a line from delivery execution - development, implementation, infrastructure, operations - through project, programme and department level reporting. We can only do this if the work being performed is properly encapsulated, traceable and prioritized, and if the state of work (complete or incomplete) is substantially a statement of fact, as opposed to opinion.

This latter point is critical. In fact-based management, we tighten the relationship between execution and tracking by emphasizing integrity of completion. While there is still the chance of reporting the state of work as complete when in fact it is not, the probability of doing so is much lower.

In software development integrity of completion is achieved through automated testing, continuous integration, pairing and collaboration. Combined with iterative tracking, we achieve a high degree of transparency. These core practices are synergistic and mutually reinforcing; neglecting any one of them reduces the benefit of the others.

Fact-based planning - bringing the 4 core variables (time, people/resources, features and quality) into balance - aligns strategy with reality. It is both more transparent and highly adaptive; by definition it engenders responsiveness to changes in the business environment. This has the dual benefit of:
  • reducing organizational inertia of long-running, long-return investments for which sunk costs are substituted for business case compliance; and

  • reducing the uncertainty principle in business - by knowing to a greater degree of accuracy where we're at, we can take better, more confident decisions about where we're going.
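As a minimal sketch of what "fact-based" means in planning terms: the projected completion falls out of measured velocity and remaining scope rather than being asserted top-down. All the numbers here are illustrative assumptions.

```python
import math

def projected_iterations(remaining_points, observed_velocity):
    """Iterations remaining at the velocity we've measured, not the one we hope for."""
    return math.ceil(remaining_points / observed_velocity)

remaining = 120   # story points not yet driven to "done"
velocity = 20     # points actually completed per iteration, from tracking data
print(projected_iterations(remaining, velocity))  # → 6
```

If the business needs delivery sooner than the facts project, the answer is to rebalance the four variables - time, people/resources, features, quality - in view of those facts, not to assert a date.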


The management anti-patterns - that is, when execution and management are unaligned - become similarly obvious:

  • We do not engage in opinion-based planning. Asserting that things will be completed by a certain date through an exercise of top-down planning doesn't give sufficient respect to the domain and the changes it can (and most likely will) introduce. It also tends to ignore the fact that nothing happens in zero time. While this might qualify as a form of prayer, it is not management.

  • We do not forego the practices that create integrity of completion in pursuit of declaring something completed. Asserting that something is "done" without subjecting it to the scrutiny of review in combination with technical and requirements testing frameworks mortgages the future, increasing the likelihood that we create a brownfield that will have to be remediated at a later date.

In sum, aligning execution with management creates an environment where there are no passengers in delivery. All parties own, are aligned with, and are driving to the plan, and all parties benefit from transparency.

And we don't go in mad pursuit of cramming 10 lbs of stuff into a 5 lb bag.

Monday, July 03, 2006

The Agile Manager.Init()

agile (lowercase-a)
adjective
1. Being responsive to changes in the business environment.

Agile (capital-A)
proper noun
1. An umbrella term for a collection of related methodologies including Scrum, Crystal, and Extreme Programming
2. The disciplined execution of a set of best practices in software development.

I'm creating this blog to encourage discussion of management practices that engender or inhibit responsiveness to changes in the business environment. The content will be substantially from an IT perspective, with the realization that the business environment neither begins nor ends with IT.