There are two types of project failure. You need to manage them differently.
Projects fail. Lots of them. We don't really know how many, as the statistics vary widely. And most of the statistics are pretty dubious.
They’re gathered using questionable sampling methods. They define failure ambiguously.
And their authors generally have a vested interest in talking up failure rates. They sell solutions to avoid project failure; if there weren't many failures, they wouldn't have a job. (I call it the project failure industry: they need to create the perception of failure.)
Nonetheless, I think it’s fair to say that lots of projects do fail.
The best time to review a project is probably months ago, when all seemed well.
It doesn't look good. The project has missed a major milestone. The team is working seven-day weeks. The project manager is off with stress-related illness. Quality has gone out the window.
And as for our customer? Not at all happy.
We need to find out what’s really going on. It’s time to schedule a project review.
Are you doing what's best, or simply seeking the protection of the herd?
It’s an enticing idea: 'Best practice'. It suggests a clearly defined path to success; a recipe for perfectly honed websites, trouble-free projects, delighted users; a silver bullet.
But what is 'best practice'?
I don’t mean best practice for UX design, or best practice for SEO, or best practice for project execution. I mean the concept of 'best practice' itself.
Where do best practices come from? How do we recognise them? How do we adopt them?
If your organisation is going to do any sort of substantive innovation, it needs to take on risk. That needs a different style of project management.
Why do you do projects?
For most organisations, the answer is usually something to do with change. “We need to add new features to our product. We need to improve the user experience in our online shop. We need to upgrade the technology running our site”.
That’s kind of odd, when you think about it.
Managing complexity is tough. Especially if you can't agree on where the complexity lies.
Sometimes decisions are easy.
You have the data you need. You know what you want to achieve. You know how things work – if we do this, then that will happen. So you connect the dots, make the decision, and all is well.
Make the same decision often enough, and you’ll define a standard, even a 'best practice'.
When you're buying, focus on opening up lines of communication, not on attempting to appear objective.
We’ve all been there, sat around a table talking to a series of vendors about how they’ll deliver our new site, or campaign, or brand.
By the end of the day, we’ll be confused, up to our eyeballs in jargon, unable to remember quite who said what.
People talk a lot about agile processes, but it’s the agile decisions that really count.
How does your organisation make decisions?
This is a dangerous question. I’ve asked it a lot recently, and got some unprintable replies.
Most of us want to work on projects that make a difference. So we have to deal with risk.
Project managers talk a lot about risk. They prepare risk registers. They analyse impacts and probabilities. They define strategies to mitigate risks.
Then they do nothing.
Centralisation is a power game. Treat it as a strategy for learning, and it might be more useful.
How does your web team work?
We’ve all experimented with various forms. When content management systems were new, devolved teams were all the rage. Give this wonderful tool to everyone in the company, and they’ll each edit their own little bit of the site.
What a wonderful site that was…
We play mind games with success. That's OK if you’re trying to avoid blame, but not if you're trying to avoid failure.
My American friends are still talking about HealthCare.gov. Why did no-one see failure coming? Why, when so many people on the project could see issues, did no-one act to improve things? Another high-profile project failure enters the lexicon.
Great questions. Why don’t we discuss the issues that are so manifest on our projects?
Our projects would end better if we accepted the fundamental fuzziness in their beginnings.
Projects often end badly. They go out with a skyful of fireworks, finger-pointing and blame. Why doesn’t the site perform adequately? Where’s all the functionality we expected? Where has all the time, and budget, gone?
Some people say this is because we start projects so poorly.
Product and service development is all about risk. We take on a range of market, design and technical risks in order to gain rewards – new products, better conversion rates, increased market share, improved margins, etc.
Sometimes, however, the risks win. Projects fail for a whole host of reasons: our aspirations run ahead of the technology; we fail to find a commercially feasible solution to design challenges; our competition beats us to the punch. The list is almost endless.
And too many items on that list are entirely manageable. Poor internal communication. Ill-defined scope. Failure to engage key stakeholders. Unrealistic estimates. The project management literature has been calling out such risks for decades.
We know how to solve these problems. We just don’t do it.
That’s the real failure on many of our projects: we fail to see and manage the basic stuff.