Organisations like to pretend that they’re objective. It’s not that simple.
Technology is tricky stuff. We respond emotionally to it. It changes the power balance between people, provoking political reactions. Vendors obfuscate about what their technology really does.
Most organisations recoil from this. They place a premium on “objective” decision-making, on measuring their options against some careful breakdown of functionality. By weighing each technology option against clear criteria, they reckon they’ll end up with the objectively “best” solution.
So we end up with the evaluation spreadsheet. Each row lists some feature or sub-feature that someone has determined the technology must have. These features are all weighted according to their importance.
The columns then reflect all the options. Fill each cell with a number assessing how well this option delivers this feature, and you calculate a score for each option. Select the one with the highest weighted score, and you have the best technology for your needs.
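The weighted-score mechanics of such a spreadsheet can be sketched in a few lines of Python. The feature names, weights, and scores below are invented purely for illustration:

```python
# A minimal sketch of the evaluation spreadsheet: weights per feature,
# scores per option, highest weighted total "wins". All values are made up.

features = {            # feature -> weight (agreed importance)
    "search": 5,
    "workflow": 3,
    "reporting": 2,
}

options = {             # option -> score per feature (1-5)
    "Vendor A": {"search": 4, "workflow": 2, "reporting": 5},
    "Vendor B": {"search": 3, "workflow": 5, "reporting": 4},
}

def weighted_score(scores):
    # Sum of weight * score across all features
    return sum(features[f] * scores[f] for f in features)

totals = {name: weighted_score(scores) for name, scores in options.items()}
best = max(totals, key=totals.get)   # the "objectively best" option
```

Note how sensitive the outcome is: nudge one weight or one score and `best` flips, which is exactly the lever people pull when they adjust the numbers to favour a preferred option.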
Problem is, this rarely works.
For a start, the features and weightings aren’t objective. Someone gathers requirements, filtering them as they go. People argue about weightings. Ultimately, the person with the most power decides. We’ve just shifted the politics into the structure of the spreadsheet.
The assessment process is no better. I can’t tell you how many times I’ve seen people enter their numbers then adjust them to make their preferred technology come out in front.
They’ve already decided – the assessment is simply a rationalisation for their decision.
The underlying problem is that this process ignores the way people actually make decisions. By hiding the “irrationality” behind a veneer of objectivity, we make it harder to help people make good decisions.
Research into the way experts make decisions shows that they rarely think through a set of options and assess them against objective “decision criteria”.
In situations where they need to integrate a lot of information, deal with uncertainty, and balance the concerns of diverse perspectives, experts go through the following stages:
- They imagine themselves into the situation.
- They identify a single option that will most likely meet their needs.
- They test this option, mentally, against the situation.
- If this option works well enough, they don’t waste time on further analysis. They select it and get into action.
- If the option doesn’t work, they adapt and adjust it in their minds. If they can find a way to make it work, they use it.
- If they can’t find a way to make it work, they look for other options.
- They may go through this loop several times, using the selection and testing and adaptation of options to improve their understanding of the situation. They evolve a workable solution.
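The loop above can be sketched in code. This is only an illustration of the process, not a real algorithm: the `works` and `adapt` helpers are hypothetical stand-ins for expert judgement, and the budget example is invented.

```python
# A hedged sketch of the expert's loop: test one promising option at a
# time, adapt it if needed, and stop as soon as something works.

def choose(situation, candidates, works, adapt):
    for option in candidates:          # candidates in order of promise
        if works(option, situation):
            return option              # good enough: stop analysing, act
        adapted = adapt(option, situation)
        if adapted is not None and works(adapted, situation):
            return adapted             # an adjusted version will do
    return None                        # nothing workable: rethink the situation

# Toy illustration: pick a price that fits a budget, adapting via a discount.
fits = lambda price, budget: price <= budget
discount = lambda price, budget: price * 0.8 if price * 0.8 <= budget else None

choice = choose(100, [120, 90], fits, discount)   # 120 adapted to 96, which fits
```

The key contrast with the spreadsheet is that options are examined one at a time and evaluation stops at the first workable answer, rather than exhaustively scoring every option against every criterion.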
Experts can do this remarkably quickly. Fire fighters and other emergency service workers go through this loop in life-and-death situations in fractions of a second.
They may subsequently retrofit their decision onto some objective framework to explain what they did, but that’s not how they made the decision in the first place.
OK, so we’re rarely dealing with life-or-death decisions on our technology selection panels. But pretending we make decisions by careful analysis of options against predefined selection criteria has negative effects. For example:
- We spend time defining decision frameworks. If people don’t use them, this is waste. Even worse, the frameworks push people to focus on the minutiae of individual features, making it harder for them to see the big picture.
- We hide the real decision criteria. People make up the numbers to give the outcome they want. But we treat the numbers as if they had some objective reality.
- We make it easier for vendors to game the system. They can play the features and scores game better than we can.
- We lose sight of the complexity of how people use technology. It gets hidden in the spreadsheet. We give ourselves no firm basis to accommodate change as we begin to implement the technology.
Marketers understand this dynamic well. Make it easy for people to imagine themselves into a situation – a new car, a new jacket, a holiday – and they’re halfway to a sale. Long lists of attributes come later, and often only if obfuscation is necessary. (Think of phone pricing plans – they’re designed to hide differences between vendors, not promote rational decision-making.)
What does this mean for us, when we need to buy systems, select agencies, or otherwise decide about technology?
For a start, we should make it easy to imagine ourselves into the situation we’re trying to address. What would it be like to use this system, work with this agency, etc? Write scenarios rather than feature lists, and ask vendors to explain how their technology fits each scenario.
Second, we need to give ourselves hands-on time with each option, so we can experience how they really work. Vendor-driven presentations aren’t enough. Pilots are ideal, or hands-on workshops where we can try the options for ourselves during the course of the procurement.
We’re not fire fighters. Lives don’t depend on the split-second timing of our decisions. So it makes sense to balance scenario-based decision-making with some thought about features, functionality and weighing up the different options. This can help us fill some of the gaps and blind spots in our thinking.
But we shouldn’t let “objectivity” chase away an honest understanding of the way we make decisions. When we use scenarios and adaptive evaluation of the options, we work to our strengths. Start from there, and use decision support tools to extend your capabilities. Don’t let the support tools drive the entire process.