Wednesday, December 17, 2008

The Agile PMO: Consistent Project Gatekeepers

In the last installment we took a look at the gap between what the PMO reports out and what's actually happening in a project team. To begin to understand the nature of this gap, we'll first take a look at what we use for project gatekeepers.

We need to make a clear distinction in an IT project between the means and the ends. We often confuse the two, because what we see day in and day out is that we're paying for the means of production, when in the end we're really acquiring an asset. Unfortunately, this tends to skew our thinking about how we execute, organize, measure our progress, and assess our exposure.

Traditional IT projects are mass economy-of-scale exercises: once development begins, armies of developers are unleashed. So in traditional IT we stage large volumes of work to keep the largest and most expensive pool of people – developers – busy in the hopes of maximizing their productive effort. To minimize the chance that development is misdirected (e.g., due to poor requirements) or wasted (e.g., due to poor technical structures), we create checkpoints, or gatekeepers, throughout the project. Satisfy the gatekeeper, so the thinking goes, and we minimize the risk. In traditional IT, our gatekeepers are several waves of requirements and specification documents, then software, then test results, then a production event.

This may give us lots of gatekeepers, but they're inconsistent in both the effort they require and the results they certify.

Clearly, a small team delivering documentation is nowhere near as significant an event as a large team delivering executable code. But of bigger concern is the latency between the time when requirements are captured and the time they're available as working code in an environment. We don't know for a fact that our documentation-centric gatekeepers have truly been satisfied until we have a functioning asset. A dozen people can reach a dozen different conclusions after reading the same documentation; the proof of the quality and completeness of documentation is in the delivered software. Inadequacies in documentation may not become apparent until QA or, if we're lucky, during development. In effect, there's very little other than opinion to prevent us from developing a toxic asset: bad initial requirements are transformed into flawed derivative artifacts (specifications, code, even tests) as they pass through different stages. And, of course, we not only pass along quality problems, we risk introducing additional quality problems unique to each stage (flawed technical specifications, poor tests). This just adds insult to injury: we've not only put ourselves at risk of creating a useless asset, but our interim progress reports are laden with false positives.

One solution often attempted is phased delivery of use cases: the traditional IT steps are still performed, only we make interim deliveries of code to a QA environment and execute functional tests against them. The theory goes that functional success is assured by test cases passing, which, in turn, indicates some measure of “earned value” for the total amount spent. This assumes that the software released to QA on this interim basis is of high functional and technical quality. If it is of low quality – again, think of all the problems that build up when people work in technical or component silos and all that toxicity we’re building up through the “soft” gatekeepers of project documentation – the blowback to the teams in the form of a large number of defects raised will interfere with and ultimately derail development. When this happens, it obliterates the economies of scale we were hoping to achieve. Phased delivery of use cases does less to expose problems in a solvable way early in the lifecycle than it does pile work onto development teams that are already overloaded. It adds noise to the development cycle and confuses decision makers as to what is going on, and why it’s happening in the first place. This may fail a doomed project sooner, but not by much. The real tragedy is that the idea of incremental delivery will be discredited in the minds of the people involved in the project.

By comparison, Agile maintains a steady pace of progress by having all of our functional efforts simultaneously focused on achieving the same result. An Agile team is not an exercise in scale. It maintains a more consistent (or at any rate, less volatile) level of effort over the life of a project. Our gatekeepers are consistent, rooted in certification of the code, not certification of things that describe what will be coded. Either we have delivered the requirements defined or we have not. They either satisfy our technical and functional quality gatekeepers or they do not. They are found acceptable by the business or they are not. We know this with each iteration – every 2 weeks or so – not months or even years after requirements have been penned. Quite simply, because we derive our certification exclusively from the delivered asset and not from things that describe the asset, we’re not confusing the means for the ends.
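
To make that concrete, here's a minimal sketch in Python of what an iteration-level, results-based gatekeeper might look like. The story names and fields are invented for illustration; the point is simply that nothing counts as delivered until the working code passes its acceptance tests and is accepted by the business:

```python
# A minimal sketch of an iteration-level, results-based gatekeeper.
# The Story fields and the sample data are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Story:
    """A unit of business-facing work delivered within an iteration."""
    name: str
    acceptance_tests_passed: bool
    accepted_by_business: bool

def iteration_gate(stories: List[Story]) -> bool:
    """Satisfied only by the delivered asset: every story needs passing
    acceptance tests and business sign-off. Documents don't count."""
    return all(s.acceptance_tests_passed and s.accepted_by_business
               for s in stories)

iteration = [
    Story("Customer can search orders", True, True),
    Story("Customer can cancel an order", True, False),  # not yet accepted
]
print("Gate satisfied:", iteration_gate(iteration))  # -> Gate satisfied: False
```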

Just because Agile teams are not exercises in scale doesn't mean they don't scale. To take on a large application, we divide the project into business-focused teams instead of technically-focused teams. "Work completed" is more clearly understood, because we report in terms of business needs satisfied (results) and not technical tasks completed (effort). Reporting progress in a large development program is therefore much more concrete to everybody involved.
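
By way of illustration, here's a rough sketch of what results-based progress reporting could look like. The team names and numbers are made up, but notice that the report speaks in stories accepted, not tasks completed:

```python
# Sketch: progress reported as business results (stories accepted), not
# technical effort (tasks or hours). Team names and numbers are made up.
planned_vs_accepted = {
    "Order Management": (34, 21),   # (stories planned, stories accepted)
    "Billing":          (28, 25),
    "Customer Portal":  (40, 12),
}

for team, (planned, accepted) in planned_vs_accepted.items():
    pct = 100.0 * accepted / planned
    print(f"{team:<16} {accepted:>3}/{planned:<3} stories accepted ({pct:5.1f}%)")
```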

However, this doesn’t mean that an Agile project won’t fail. It may. But if it does, it’s far less likely to be a spectacular failure. By paying attention to results as opposed to effort, we spot both trouble and opportunity a lot sooner in an Agile project. This means we can take smaller and less expensive corrective action (reprioritization, technology change, team change, etc.) much earlier. More importantly, we’ll see the impact of those actions on our bottom line results much sooner, too. This is far better than being surprised into making a large and expensive correction late in the lifecycle.

So what does this mean for the PMO? It means that we have to change what it is we’re measuring – the means by which we can declare “victory” at any gatekeeper – if we’re going to change what it is we’re managing. We don’t want our gatekeepers to be rooted in effort; we want them rooted in results. In IT projects, the results that matter are the code and its demonstrable attributes (performance, technical quality, functional quality, etc.), not assurances about the code. We want to see results-based gatekeepers satisfied from the very early stages of the project, and we want them satisfied very frequently. We can do this across the portfolio to reduce execution risk, and with it reduce the probability that we'll get blindsided.
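
One possible shape for such gatekeepers is a small set of checks evaluated against the measured, demonstrable attributes of the delivered code. The metric names and thresholds below are assumptions for illustration, not recommended values:

```python
# Sketch of results-based gatekeepers: a project passes only on
# demonstrable attributes of the delivered code. Metric names and
# thresholds are assumptions for illustration, not recommended values.
GATES = {
    "acceptance_tests_passing_pct": lambda v: v >= 100.0,
    "unit_test_coverage_pct":       lambda v: v >= 80.0,
    "p95_response_time_ms":         lambda v: v <= 500.0,
    "open_critical_defects":        lambda v: v == 0,
}

def failed_gates(measured: dict) -> list:
    """Return the gates the measured asset does not satisfy."""
    return [name for name, passes in GATES.items()
            if not passes(measured[name])]

measured = {
    "acceptance_tests_passing_pct": 100.0,
    "unit_test_coverage_pct": 73.5,          # below threshold
    "p95_response_time_ms": 410.0,
    "open_critical_defects": 0,
}
print("Failed gates:", failed_gates(measured) or "none")
```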

Changing our gatekeepers is important, but it’s only the first step. In the next installments we’ll take a deeper look at how we organize and execute development, and the impact that has on the confidence with which we can measure progress. We also need to be aware of how much work we might unintentionally create for people: setting up these gatekeepers sounds great, but we need to avoid imposing a “metrics tax” on the teams. So we’ll also take a look at how we can make collection non-burdensome to both the teams and the PMO, and get closer to real-time project metrics.
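
As a preview of that discussion, one way collection could be made non-burdensome – a sketch under my own assumptions, not a prescription – is to harvest metrics from artifacts the team already produces, such as JUnit-style test reports, rather than asking anyone to fill in a status spreadsheet:

```python
# Sketch: harvesting test results from artifacts the team already
# produces (JUnit-style XML reports). The report location is an
# assumption for illustration.
import glob
import xml.etree.ElementTree as ET

def test_totals(report_glob="build/test-reports/*.xml"):
    """Aggregate pass/fail counts from JUnit-style report files."""
    total = failed = 0
    for path in glob.glob(report_glob):
        suite = ET.parse(path).getroot()
        total += int(suite.get("tests", 0))
        failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return total, failed

total, failed = test_totals()
print(f"{total - failed}/{total} tests passing" if total else "no reports found")
```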