There are two types of project failure. Failure due to inherent risk is OK: it's the other side of the risk/reward trade-off.  

Our real problem is all the unnecessary failure that surrounds our projects.

Projects fail. They cost more than expected, run longer than expected, deliver less than expected. A whole industry has sprung up to collect statistics on project failure and prescribe remedies for the ills that cause this failure. Welcome to the Analysts of Doom.

It’s easy to find reports calling out the latest number. 75% of projects fail. 90% of projects fail. The numbers are always scary. And they always miss something: projects are supposed to fail.

Projects are, by definition, one-off activities.  They entail some element of research, of learning, of doing new stuff. (At least, new to the organisation doing them.) They all contain risks.

That’s what projects are about. We engage with risk in order to achieve rewards. Sometimes the risk wins, and our project fails. If none of our projects ever fail, then we should be asking whether we’re taking on enough risk.  

Perhaps we’re leaving potential rewards on the table?

Of course, there are degrees of risk. Blue-sky research lies at one end of the spectrum: only a tiny proportion of research projects leads to a successful new product.  

At the other end are the bread-and-butter activities that differ only slightly from one product to the next. There’s not a huge amount of risk attached to, say, setting up an event micro-site that’s just like the dozens we do every quarter.

Organisations succeed, then, when they take on a balanced portfolio of these risks, one that's tuned to their objectives and their capacity to manage risk.

My favourite example is the oil industry. Only something like one in nine exploratory wells discovers new oil. That’s a failure rate of nearly 90%. And that rate has remained pretty constant for decades – as exploration techniques have improved, people have gone looking for oil in ever-tougher environments.

Yet no one runs around crying about the failure rate of oil exploration projects. They accept they're in a high-risk business, and get on with it.

Web development.  Software development.  Digital marketing.  They’re all pretty much the same. If you want to do this stuff, you need to accept you’re engaging with a degree of risk.  You're going to experience some failures.  If you’re not, you’re probably not pushing the limits hard enough.

Of course, it’s not that simple…

The real issue here is that there are two types of failure. The first type is inherent to what we’re doing. There’s a natural degree of risk attached to, say, new product development, so we have to expect a commensurate failure rate. That’s the failure we have to live with.

The second type is unnecessary failure. The failure we create when we get basic stuff wrong in our project initiation and execution. This adds to the inherent failure rate, creating the statistics those analysts love to decry. (But couldn’t live without, for they generate their revenue by selling remedies for failure.)

Projects fail unnecessarily for a host of boring, predictable, old reasons:

  • We do too many projects that don’t link to organisational objectives and priorities. These drain resources from other, more important, projects, causing a cascade of failures.

  • We set up projects with overly vague and fuzzy requirements. In research, you have to live with fuzziness. In those bread-and-butter projects, fuzziness means you waste time and resources chasing unfocused objectives.

  • Project sponsors don't put enough time and support into their projects. Without executive cover, projects get tangled up in organisational politics.  They struggle to keep in touch with organisational priorities.

  • We don’t manage risk. Too many teams create a risk register, then file and forget it.  Recording a risk in a spreadsheet doesn’t eliminate it. And often we don't even record known risks – some are too politically sensitive to acknowledge. So what chance is there of managing them?

  • We communicate poorly. Partly that’s because communication is hard. Partly it’s because people use vagueness to manipulate outcomes. Partly it’s because organisations create an environment where people are scared to call it like they see it.

  • We try to operate with too few people, insufficient time, inadequate skills. Again, this is partly because estimating is hard. But it’s largely because we negotiate poorly, so we end up with unrealistic deadlines and teams.

The project management literature has been cataloguing these reasons for decades. By and large, we know what we need to do to counter them. We’re just very bad at doing it.

I suspect one reason we’re so bad is that we mix up the two failure types. The strategies for managing inherent failure are very different to those for managing unnecessary failure. For a start, the former needs domain expertise while the latter needs project management expertise.  When we confuse the two, we end up managing both of them poorly.

Because of this mixing, I have no idea what the unnecessary failure rate is on our projects. None of the analysts separates it from the inherent failure rate that is natural to our industry. I’m sure it's too high. (I would say that: I have services to sell too.)

But I do hope I’ll never be working in an industry where no project ever fails. That’d be just too boring.


Published 25 June, 2013 by Graham Oakes

Graham Oakes helps people untangle complex technology, processes, relationships and governance. He is a contributor to Econsultancy.



Comments (4)


Brent Summers

It's true, the project failure rate is quite high... especially if you're following the PMI standards. But some projects achieve their objectives and simply run over budget or schedule. It's hard to label those as "complete" failures. And if we let that get the best of us, we probably would never have reached the moon.

I think one additional reason many projects fail is that goals aren't clear enough. It's critical to *define* success in order to achieve it. Whether your process is waterfall or agile, if you're setting SMART goals you're far more likely to get a win.

about 5 years ago


David Hobbs

Thanks Graham for breaking down the two types of failure. With respect to unnecessary failure, I would say that sometimes the problem is not enough domain expertise and too much "generic" project management. For example, strong and focused requirements will require domain expertise and not just any business analyst from any domain.

about 5 years ago


Soupbowl AB

A project that was high risk from the beginning and had clear success criteria could theoretically be stopped at a point when it was clear that it would fail to meet goals. Unfortunately in corporate life this rarely happens. Time and time again, executives bow to the political imperative of being seen to succeed, and meaningless projects continue on to the bitter end, often delivering nothing of value, but eating up valuable resources. I loved your oil analogy! Can you imagine the oil industry continuing right the way through to whatever is their "detailed implementation" phase in those nine out of ten cases where there's either no oil or the costs to get at it radically outweigh the value? The point is to fail fast so that you can move on and eventually succeed. Failing slowly is the real enemy.

about 5 years ago



Here is my experience:

Project 1
We worked very hard on a new project, and by any industry average I would say we were way faster and well below the typical budget for this kind of project. This was confirmed several times when I spoke with peers from other companies working in the same line of business.
Since management had no idea about industry standards, they took the results and claimed the project was over budget and had missed its deadlines. They simply did not believe the project engineers, who always told a different story. At that point I also learned that 'all engineers are sandbagging'. So regardless of what an engineer claimed, he was sandbagging and not working hard enough. I estimated about one year (not sandbagging) and it took about one year, but they wanted more like eight months.

Project 2
The requirements for the second project were much higher, but the first project's result was taken as the baseline. In addition, since we had 'learned', we were supposed to be even faster and come in below the first project's time and budget, which was meant to balance out the increased requirements. Again, nobody listened when we said the requirements were much greater than before (lots of new stuff), and again the 'sandbagging' argument was brought up. (I was actually ready to quit... but I also like a challenge :) ) The same thing happened as before: a new project manager was brought in for the third project, and so on. By industry standards I think we were still below budget and within the timeline, but industry standards were never considered.
I estimated about two years (not sandbagging) and it took about two years, but they thought we could do it in the same one-year timeframe.

Project 3
Finally we learned to sandbag :). We also learned to talk the manager lingo (say what they want to hear), and we did more in house, so this time it really was much more like the second project. With all that, I estimated about one year and it took about 11 months. This time we were about on the same page with management, since by now they had learned that they'd better listen to the domain experts.

Well, the point I am making is that the first two projects were complete successes by my measure, whereas they were failures by management's absolutely unrealistic measure.

about 5 years ago

