There are two types of project failure. Failure due to inherent risk is OK: it’s the other side of the risk/reward trade-off.
Our real problem is all the unnecessary failure that surrounds our projects.
Projects fail. They cost more than expected, run longer than expected, deliver less than expected. A whole industry has sprung up to collect statistics on project failure and prescribe remedies for the ills that cause this failure. Welcome to the Analysts of Doom.
It’s easy to find reports calling out the latest number. 75% of projects fail. 90% of projects fail. The numbers are always scary. And they always miss something: projects are supposed to fail.
Projects are, by definition, one-off activities. They entail some element of research, of learning, of doing new stuff. (At least new to this organisation.) They all contain risks.
That’s what projects are about. We engage with risk in order to achieve rewards. Sometimes the risk wins, and our project fails. If none of our projects ever fail, then we should be asking whether we’re taking on enough risk.
Perhaps we’re leaving potential rewards on the table?
Of course, there are degrees of risk. Blue-sky research lies at one end of the spectrum: only a tiny proportion of research projects leads to successful new products.
At the other end are the bread-and-butter activities that differ only slightly from one product to the next. There’s not a huge amount of risk attached to, say, setting up an event micro-site that’s just like the dozens we do every quarter.
Organisations succeed, then, when they take on a balanced portfolio of these risks, one that’s tuned to their objectives, ability to manage risk, etc.
My favourite example is the oil industry. Only something like one in nine exploratory wells discovers new oil. That's roughly a 90% failure rate. And that rate has remained pretty constant for decades: as exploration techniques have improved, people have gone looking for oil in ever-tougher environments.
Yet no one runs around crying about the failure rate of oil exploration projects. They accept they’re in a high-risk business, and get on with it.
Web development. Software development. Digital marketing. They're all much the same in this respect. If you want to do this stuff, you need to accept you're engaging with a degree of risk. You're going to experience some failures. If you're not, you're probably not pushing the limits hard enough.
Of course, it’s not that simple…
The real issue here is that there are two types of failure. The first type is inherent to what we’re doing. There’s a natural degree of risk attached to, say, new product development, so we have to expect a commensurate failure rate. That’s the failure we have to live with.
The second type is unnecessary failure. The failure we create when we get basic stuff wrong in our project initiation and execution. This adds to the inherent failure rate, creating the statistics those analysts love to decry. (But couldn’t live without, for they generate their revenue by selling remedies for failure.)
Projects fail unnecessarily for a host of boring, predictable, old reasons:
We do too many projects that don’t link to organisational objectives and priorities. These drain resources from other, more important, projects, causing a cascade of failures.
We set up projects with overly vague and fuzzy requirements. In research, you have to live with fuzziness. In those bread-and-butter projects, fuzziness means you waste time and resources chasing unfocused objectives.
Project sponsors don’t put enough time and support into their projects. Without executive cover, projects get tangled up in organisational politics. They struggle to keep in touch with organisational priorities.
We don’t manage risk. Too many teams create a risk register, then file and forget it. Recording a risk in a spreadsheet doesn’t eliminate it. And often we don’t even record known risks – some risks are too politically sensitive to even acknowledge. So what chance is there of managing them?
We communicate poorly. Partly that’s because communication is hard. Partly it’s because people use vagueness to manipulate outcomes. Partly it’s because organisations create an environment where people are scared to call it like they see it.
We try to operate with too few people, insufficient time, inadequate skills. Again, this is partly because estimating is hard. But it’s largely because we negotiate poorly, so we end up with unrealistic deadlines and under-resourced teams.
The project management literature has been cataloguing these reasons for decades. By and large, we know what we need to do to counter them. We’re just very bad at doing it.
I suspect one reason we’re so bad is that we mix up the two failure types. The strategies for managing inherent failure are very different to those for managing unnecessary failure. For a start, the former needs domain expertise while the latter needs project management expertise. When we confuse the two, we end up managing both of them poorly.
Because of this mixing, I have no idea what the unnecessary failure rate is on our projects. None of the analysts separates it from the inherent failure rate that is natural to our industry. I’m sure it’s too high. (I would say that: I have services to sell too.)
But I do hope I’ll never be working in an industry where no project ever fails. That’d be just too boring.