
Organisations like to pretend that they're objective. It's not that simple.

Technology is tricky stuff. We respond emotionally to it. It changes the power balance between people, provoking political reactions. Vendors obfuscate about what their technology really does.

Most organisations recoil from this. They place a premium on “objective” decision-making, on measuring their options against some careful breakdown of functionality. By weighing each technology option against clear criteria, they reckon they’ll end up with the objectively “best” solution.

So we end up with the evaluation spreadsheet. Each row lists some feature or sub-feature that someone has determined the technology must have. These features are all weighted according to their importance.

The columns then reflect all the options. Fill each cell with a number assessing how well this option delivers this feature, and you calculate a score for each option. Select the one with the highest weighted score, and you have the best technology for your needs.
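The calculation such a spreadsheet performs is just a weighted sum. A minimal sketch in Python — the feature names, weights and scores below are made up for illustration, not drawn from any real procurement:

```python
# Sketch of the weighted-score evaluation described above.
# All features, weights and scores are illustrative placeholders.

features = {            # feature -> weight (its agreed importance)
    "search": 5,
    "reporting": 3,
    "api_access": 2,
}

options = {             # option -> {feature -> score out of 10}
    "Vendor A": {"search": 7, "reporting": 9, "api_access": 4},
    "Vendor B": {"search": 8, "reporting": 5, "api_access": 9},
}

def weighted_score(scores, weights):
    """Sum of score * weight across all features."""
    return sum(scores[f] * w for f, w in weights.items())

totals = {name: weighted_score(scores, features)
          for name, scores in options.items()}

best = max(totals, key=totals.get)   # the "objectively best" option
```

Note that everything contentious — which features appear and what weight each carries — sits in the input data, not in the arithmetic.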

Problem is, this rarely works.

For a start, the features and weightings aren’t objective. Someone gathers requirements, filtering them as they go. People argue about weightings. Ultimately, the person with the most power decides.  We’ve just shifted the politics into the structure of the spreadsheet.

The assessment process is no better. I can’t tell you how many times I’ve seen people enter their numbers then adjust them to make their preferred technology come out in front.

They’ve already decided – the assessment is simply a rationalisation for their decision.

The underlying problem is that this process ignores the way people make decisions. By hiding the “irrationality” behind a veneer of objectivity, we actually make it harder to help people make good decisions.

Research into the way experts make decisions shows that they rarely think through a set of options and assess them against objective “decision criteria”.  

In situations where they need to integrate a lot of information, deal with uncertainty, and balance the concerns of diverse perspectives, experts go through the following stages:

  1. They imagine themselves into the situation.
  2. They identify a single option that will most likely meet their needs.
  3. They test this option, mentally, against the situation.
  4. If this option works well enough, they don’t waste time on further analysis. They select it and get into action.
  5. If the option doesn’t work, they adapt and adjust it in their minds. If they can find a way to make it work, they use it.
  6. If they can’t find a way to make it work, they look for other options.
  7. They may go through this loop several times, using the selection, testing and adaptation of options to improve their understanding of the situation. They evolve a workable solution.
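This serial test-and-adapt loop can be sketched in code. This is an illustrative model only: `generate_option`, `simulate` and `adapt` are hypothetical stand-ins for the expert’s intuition, and the numeric demo at the end exists purely to show the control flow:

```python
# Sketch of the serial "test one option, adapt it, stop when it's good
# enough" loop described above. Not a real decision-making library.

def recognition_primed_decision(situation, generate_option, simulate, adapt,
                                max_options=5, max_adaptations=3):
    """Evaluate options one at a time, adapting each mentally, and act on
    the first workable one rather than scoring every option side by side."""
    for _ in range(max_options):
        option = generate_option(situation)
        if option is None:
            return None                      # run out of candidate options
        for _ in range(max_adaptations):
            if simulate(option, situation):  # "works well enough" -> act
                return option
            option = adapt(option, situation)
    return None

# Toy demonstration: find a number divisible by the target (4), adapting
# each candidate by nudging it upwards before giving up on it.
candidates = iter([7, 9, 12])
choice = recognition_primed_decision(
    situation=4,
    generate_option=lambda s: next(candidates, None),
    simulate=lambda opt, s: opt % s == 0,
    adapt=lambda opt, s: opt + 1,
)
```

The key contrast with the spreadsheet is that nothing here ever compares all the options against each other: the loop commits to the first option it can make work.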

Experts can do this remarkably quickly. Firefighters and other emergency-service workers go through this loop in life-and-death situations in fractions of a second.

They may subsequently retrofit their decision onto some objective framework to explain what they did, but that’s not how they made the decision in the first place.

OK, so we’re rarely dealing with life-or-death decisions on our technology selection panels. But pretending we make decisions by careful analysis of options against predefined selection criteria has negative effects.  For example:

  • We spend time defining decision frameworks.  If people don't use them, this is waste.  Even worse, the frameworks push people to focus on the minutiae of individual features, making it harder for them to see the big picture.
  • We hide the real decision criteria.  People make up the numbers to give the outcome they want.  But we treat the numbers as if they had some objective reality.
  • We make it easier for vendors to game the system.  They can play the features and scores game better than we can.
  • We lose sight of the complexity of how people use technology.  It gets hidden in the spreadsheet.  We give ourselves no firm basis to accommodate change as we begin to implement the technology.

Marketers understand this dynamic well.  Make it easy for people to imagine themselves into a situation – a new car, a new jacket, a holiday – and they’re halfway to a sale.  Long lists of attributes come later, and often only if obfuscation is necessary.  (Think of phone pricing plans – they’re designed to hide differences between vendors, not promote rational decision-making.)

What does this mean for us, when we need to buy systems, select agencies, or otherwise decide about technology?

For a start, we should make it easy to imagine ourselves into the situation we’re trying to address.  What would it be like to use this system, work with this agency, etc?  Write scenarios rather than feature lists, and ask vendors to explain how their technology fits each scenario.

Second, we need to give ourselves hands-on time with each option, so we can experience how they really work. Vendor-driven presentations aren’t enough.  Pilots are ideal, or hands-on workshops where we can try the options for ourselves during the course of the procurement.

We’re not firefighters. Lives don’t depend on the split-second timing of our decisions.  So it makes sense to balance scenario-based decision-making with some thought about features, functionality and the trade-offs between the different options.  This can help us fill some of the gaps and blind spots in our thinking.

But we shouldn’t let “objectivity” chase away an honest understanding of the way we make decisions.  When we use scenarios and adaptive evaluation of the options, we work to our strengths.  Start from there, and use decision support tools to extend your capabilities.  Don't let the support tools drive the entire process.

Published 18 February, 2013 by Graham Oakes

Graham Oakes helps people untangle complex technology, processes, relationships and governance. He is a contributor to Econsultancy.

Comments (3)

Mark Bolitho, New Business Director - Ecommerce at more2

Hi Graham

This is incredibly spooky - I have posted on a very similar theme myself this morning:

http://econsultancy.com/uk/blog/62155-is-the-rfp-broken-time-for-a-scenario-based-approach-to-choosing-an-ecommerce-solution

I wholeheartedly agree with you that the way some/most people go about selecting new tech is maybe not the best.

Scenarios aren't a new thing, but they are most certainly undervalued and underused, in my experience.

Nice post, thanks.

over 3 years ago

Jason Ball

Good points Graham

The myth of the rational buyer is as widespread as it is inaccurate. Even in B2B, emotions largely rule decisions (even if they're hidden behind layers of post-rationalisation).

One thing I'd add is the human element. People buy people. Whether it's a multi-million pound piece of tech or, for that matter, an agency's services. They want a picture of what it will be like to work with company X on a day to day basis. They want to know what happens when things go wrong (because they understand that things do go wrong every now and then). These factors generally trump product and service features (which these days are reasonably close to parity in many categories).

It's one of the reasons that helpful, human, engaging content works so well. It gives buyers an insight into what a brand will be like to do business with. It helps them, albeit in a limited way, road-test the relationship before getting into the messy stuff of contacting a salesperson or unleashing procurement.

Thanks.

over 3 years ago

Matt

So what is the best vehicle for buying?

Is it through online research, customer engagement days, or larger exhibitions and conferences such as NRF and RBTE? One would think it's the latter, due to the customer being able to compare a larger collection of solutions at the same time?

over 3 years ago
