
The 21st century marketer needs an extensive toolkit. As well as the ‘standard’ skills of creativity, organisation and management, these days they also need to be web literate, social media savvy and equipped with basic data science skills. 

Amongst all of these areas of technological competence, one that is growing in importance, but is perhaps still misunderstood, is website testing.

Testing is the new intuition in site development and optimisation. Rather than relying on hunches, the modern web marketer will test potential changes to their site before deploying them, thus, we are led to believe, ensuring their efficacy.

However, if all changes are now tested, how come we don’t all have perfect sites? If testing only tells us the truth, how come we still sometimes go down dead ends? 

The answer lies not necessarily in the tests, but in the ways that they’re applied. We’ve seen thousands of testing processes run across a huge variety of sites and what’s struck us is that the issues that led to unsuccessful tests were common across industries.

Good tests and bad tests

Perhaps the single most common reason tests fail is how they were conceived in the first place.

We divide tests into two types. The first is the data driven test, where you use data to understand user behaviour on the site and then form a hypothesis to test. The second is what we call a UX driven test, where someone has an idea and decides to test it. 

We knew anecdotally that UX driven tests were generally less successful but, in the name of best practice, we decided to test this assumption. What we found surprised even us. 

Data driven tests had a true positive impact 77% of the time, a pretty decent return you could argue. UX driven testing, however, fared less well – delivering a true positive impact just 10% of the time.

Therefore, data driven tests are 7.7 times more effective than UX tests.

Why do tests go wrong?

Why is this? One reason is to do with organisational dynamics, in particular the dominance of HIPPOs (highest paid person’s opinions).

Too often, decisions about what to change on a site are based not on a rigorous analysis of the data, but on what the highest paid person in the room thinks ought to be changed. Site owners need to recognise that data, and not subjective opinion, is what drives successful change.

Opinions, however, are by no means the only problem. The most obvious problem that many face is a lack of meaningful data. Successful testing requires access to huge amounts of well-structured information in order to ensure quality results – if you put trash in, you get trash out.

Many site owners lack this quality and scale of data because of the tools they're using and, as a result, will be stuck with misleading test outputs.

Even when you have the data, inaccuracies can still sneak into tests through poor statistical models. For example, adding additional variations to an AB test can double or triple the time required to get results, and can often produce a less valid outcome unless an appropriate adjustment is made to the 'win criteria'.
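To make that 'win criteria' adjustment concrete, here is a minimal Python sketch (not from the original article) of one common, conservative approach to an A/B/n test with multiple variations: a two-proportion z-test of each variant against the control, with a Bonferroni-corrected significance threshold. All figures in the example are invented.

```python
from scipy import stats

def abn_test_significant(successes, trials, alpha=0.05):
    """Compare each variant against the control (index 0) using a
    two-proportion z-test, tightening the 'win criteria' for the
    number of comparisons (Bonferroni correction)."""
    n_comparisons = len(successes) - 1        # variants compared to the control
    adjusted_alpha = alpha / n_comparisons    # stricter threshold per comparison
    p_control = successes[0] / trials[0]
    results = []
    for i in range(1, len(successes)):
        p_variant = successes[i] / trials[i]
        pooled = (successes[0] + successes[i]) / (trials[0] + trials[i])
        se = (pooled * (1 - pooled) * (1 / trials[0] + 1 / trials[i])) ** 0.5
        z = (p_variant - p_control) / se
        p_value = 2 * (1 - stats.norm.cdf(abs(z)))
        results.append((i, round(p_value, 4), p_value < adjusted_alpha))
    return results

# Invented example: a control and two variants, 4,000 visitors each.
print(abn_test_significant(successes=[200, 230, 250], trials=[4000, 4000, 4000]))
```

The more variants you add, the stricter each individual comparison has to be, which is one reason extra variations stretch out test duration.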

The only way to ensure that testing results are valid is by following rigorous, scientific methods – even if you put in the right ingredients, a poor recipe will produce a terrible dish!

Finally, tests are often undermined by poor processes or systems. The legacy analytics platforms that many sites have in place are simply not configured to deliver valuable insight. 

They come from an era where reporting, and not understanding, was the objective of analytics, and so they can fail to deliver when pushed towards a testing role.

Even where well configured testing tools are in place, these are rarely integrated with other systems like data capture and analytics, opening up more opportunities for valid results to fall through the gaps.

Making testing work

So, how can the diligent web marketer ensure testing success? We think there are three key steps that every test needs to follow in order to maximise the likelihood of success.

1. Good tests are based on diligent analysis

Ignore the HIPPO and analyse the data to develop an optimisation hypothesis that the numbers indicate is likely to be successful. 

UX driven tests can deliver positive results (if only 10% of the time), but that approach means that you’re going to be wasting the vast majority of your testing investment.

2. Prioritise your testing

Good tests take time and resource, so make sure you’re testing the most important things first. What’s the scale of expected impact? Is changing the colour of a button going to have a 10% impact? 

Probably not. Focus on testing something that’s going to disrupt the user journey as this could have a significant impact either way. 

As part of this prioritisation, you need to take into account the length of your test and the amount of time the changes will take. A longer test will deliver better results, but it will also consume resource and potentially delay positive changes. 

Use a testing duration calculator to optimise your testing length. 
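As an illustration of the arithmetic such a calculator performs, here is a hedged Python sketch using the standard sample-size approximation for detecting an absolute difference between two proportions. The baseline conversion rate, minimum detectable effect and traffic figures are placeholders, not numbers from this article.

```python
from math import ceil
from scipy.stats import norm

def test_duration_days(baseline_rate, min_detectable_effect, daily_visitors,
                       n_variants=2, alpha=0.05, power=0.8):
    """Rough estimate of how many days a split test needs to run.

    Uses the two-sided sample-size approximation for comparing two
    proportions, then spreads the required sample across the variants
    and the available daily traffic.
    """
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    z_alpha = norm.ppf(1 - alpha / 2)          # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)                   # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n_per_variant = ((z_alpha + z_beta) ** 2 * variance) / (min_detectable_effect ** 2)
    return ceil(n_per_variant * n_variants / daily_visitors)

# Placeholder numbers: 3% baseline conversion, hoping to detect a 0.5 point
# absolute lift, with 10,000 visitors a day split between control and variant.
print(test_duration_days(0.03, 0.005, 10_000))
```

Smaller expected effects and lower baseline conversion rates push the required sample, and therefore the test duration, up sharply.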

You also need to consider how long changes will take to implement – changing an entire page can be a lengthy development task and that’s before you even consider things like cross-browser testing. 

Think about prioritising quick-hit wins rather than systemic changes, to maximise positive outcomes.

3. Think about ROI

Focus on the tests that increase revenue, not just pageviews (unless you’re an ad-funded publisher). Clickthrough and traffic are great, but you need to prove a link to revenue for your tests to be delivering real value.
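As a small hypothetical illustration (the numbers below are invented, not from the article), a variant can win on clickthrough while actually reducing revenue per visitor, which is why the revenue link matters.

```python
# Hypothetical results: the variant wins on clickthrough but loses on revenue,
# so judging it on clicks alone would pick the wrong winner.
control = {"visitors": 10_000, "clicks": 800, "revenue": 15_000.0}
variant = {"visitors": 10_000, "clicks": 950, "revenue": 14_200.0}

for name, arm in (("control", control), ("variant", variant)):
    ctr = arm["clicks"] / arm["visitors"]    # clickthrough rate
    rpv = arm["revenue"] / arm["visitors"]   # revenue per visitor
    print(f"{name}: clickthrough {ctr:.1%}, revenue per visitor {rpv:.2f}")
```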

Once you have these processes in place, the final part of the testing mix is to ensure that you have the technical infrastructure to hypothesise, test and deploy in one seamless cycle. You want to pilot a test using a 50:50 split and then roll it out as an always-on campaign without the need to rebuild or recode.
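One generic way to achieve that pilot-then-rollout cycle without recoding is deterministic bucketing on a hash of the visitor ID. The sketch below is an illustrative Python example under that assumption, not a description of any particular testing tool.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, rollout_fraction: float) -> str:
    """Deterministically bucket a visitor into 'variant' or 'control'.

    The same visitor always lands in the same bucket for a given experiment,
    so raising rollout_fraction from 0.5 (the 50:50 pilot) to 1.0 (always on)
    keeps existing variant users exactly where they were.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # uniform value in [0, 1]
    return "variant" if bucket < rollout_fraction else "control"

# Pilot with a 50:50 split...
print(assign_variant("visitor-123", "new-checkout", 0.5))
# ...then roll out as an always-on change without rebuilding or recoding.
print(assign_variant("visitor-123", "new-checkout", 1.0))
```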

Test your tests now

The rise of data-led online marketing means that testing is only going to become a more vital part of every web marketer’s job. Moreover, the 21st century marketer knows that a 7.7 times increase in positive outcomes is a win that can’t be ignored.

With that in mind, it’s worth ensuring that the testing process you have in place is robust, rigorous and process oriented. 

Additionally, having a technology toolkit that allows you to benefit from that huge success uplift is vital.  Without these assets aligned, testing could become a recipe for time wasting and failure rather than optimisation and success.


Published 21 December, 2012 by Ian McCaig

Ian McCaig is Founder at Qubit and a contributor to Econsultancy.


Comments (15)

James Gurd, Owner at Digital Juggler

Hi Ian,

Thanks for the post, interesting to read your findings from existing tests.

Can I ask how you are defining 'effective'? You say:

"Data driven tests had a true positive impact 77% of the time, a pretty decent return you could argue. UX driven testing, however, fared less well – delivering a true positive impact just 10% of times. Therefore, data driven tests are 7.7 times more effective than UX tests."

However, to me it just means that more of the data driven tests produced a positive response, not that they were more effective. Did the data driven tests produce a greater uplift in conversion/ROI etc than the UX tests?

I don't mean to be pedantic, just would like further clarification on the test results out of genuine interest.

I think the methodology to approaching tests you outline is sensible - prioritisation is key as you need to focus efforts on tests that can have a significant impact on performance.

People need to be careful that they don't exclude opinions and 'gut feel', which can be useful when developing hypotheses and test scenarios. I agree that the HIPPO influence needs to be mitigated but sometimes opinions are accurate, even when not backed up with data. You have to know how to factor opinions into the planning but use data to validate them.

Data is essential as it gives you the reality of what is happening but internal knowledge can also be powerful if used sensibly. In my experience when you blend all inputs together and consider the information logically (analytics data, voice-of-customer, gut feel, internal knowledge/experience etc), you get the best test plans.

Thanks
james

almost 4 years ago


Peter Zmijewski

Nice sharing... Keep up the good work.

almost 4 years ago


Petar Subotic, Worker at Company

I strongly disagree with the comparison of "UX tests" vs. data driven tests. UX tests (which is what I assume you are referring to) are also based on data, sometimes different data than you get from your metrics report, such as user feedback, consolidated session recording analysis, competitive analysis etc.

Furthermore, I strongly agree with James Gurd's comment regarding your suggestion to disregard gut feel and intuition, as doing something new and completely unsupported is often the best way to break through local maxima – which are bound to happen sooner or later in one iteration of a solution.

almost 4 years ago


Paul Van Cotthem

Rigorous testing is a fine way to test new features or incremental improvements for a site.

But do not take people or HIPPOs out of the equation. For it is they who most often come up with ideas for improvements, or even radical redesigns.

New ideas, which real people come up with, can then be tested for validity and ROI.

True breakthrough ideas seldom spring from analysing data generated by testing the current design or process flow of a website.

almost 4 years ago

Dean Marsden, Digital Marketing Executive at Koozai Ltd

User testing for UX can be a good starting point but I'd agree that using raw data from website visitors is a great way of building up 'real world' results, even from the most basic of tests.

I always use a multitude of conversion and visitor data to analyse any tests, rather than simply reviewing the results of an A/B test.

almost 4 years ago


Colin

Thanks for the article but the title is somewhat disingenuous. The UX testing you mention is really "expert" review testing where the "expert" is obviously the wrong person. Real lab-based UX testing with real customers would result in a far better ratio of success than 7 to 1.
Having said that, once in production any changes derived from UX testing should be subject to data driven scrutiny.
Rgds

almost 4 years ago

Paul Postance, Profit Optimisation Consultant

Good article Ian.

To clarify, it sounds like you're not talking about a difference in ideology but using ABn/MVT to empirically judge the difference between top-down change concepts (the UX ideas) and bottom-up change concepts (data driven).

I've found that combining qual and quant data inside a methodology geared around tangible outcomes is the best way to drive specific value, rather than try to separate the two - although, one of the main benefits of split testing is that it allows you to step away from the emotion of any decision, so any disagreement can (usually) be solved by running a given test.

The outcomes will then speak for themselves. I've seen examples of tests both confirming and totally refuting 'best practice'. This shows that wherever the idea originated from, if there is a robust measurement process value can be achieved.

almost 4 years ago

dan barker, E-Business Consultant at Dan Barker

Here are 3 silly thoughts on this:

1. There is not much data included to back up the headline. There is no explanation of methodology, no sample size info, and no description of the projects/tests. As smaller sites (with less budget to buy in expertise) are more likely to perform what you refer to as 'ux driven' tests, I'd say it may just be a skill/experience/knowledge issue as much as a pure methodology issue. (I don't know - there's no data so tough to say :)

2. To say that 'data driven tests are 7.7x more effective' is not true according to the data presented. The data says that they are 7.7x more likely to be effective, but it says nothing about the scale of the effectiveness (or even what the definition of effective is). As a silly example, it's possible that the 10% of tests were dramatically more effective than the 77% of tests.

3. Every UX project I have been involved in over the last few years has involved data, and - of course - every data based test involves some sort of alternation of user experience.

Thanks for the thought provoking post - it would be great to see the data. I look forward to the next!

dan

almost 4 years ago


Depesh Mandalia, Head of Digital Marketing at Lost My Name

I agree the title could be misleading to junior testers/optimisers - it could read that data driven testing should be done over UX/experience/gut-feel/business priority based testing which as others have mentioned could be detrimental.

As James/Paul mention you need a blend of both.

almost 4 years ago

Ian McCaig, CMO & Founder at Qubit

Thank you for all of the comments and debate on this topic. It is definitely a subject where people's views are quite different, but it is good to see that we all broadly align that you need a blend of UX expertise and empirical fact to drive the positive results that all businesses want from testing. In reality there are a number of ways in which this research could have been conducted, but we felt creating two segments of tests (data driven vs opinion led) was the most effective to administer to give a view on test effectiveness. We also decided to take the overall uplift out of our research and just look at the outcome, e.g. was it positive or not, but each test did contain a concrete uplift over a sustained period of time. In our experience most 'high' uplifts tend to decrease over time, and therefore this methodology looks to factor that in.

In reality there is no one set of best practices for delivering ROI, and AB/MV testing needs to be assessed on a case by case basis, but what is certain is that it is getting more challenging to deliver positive ROI as site wide issues become fewer and more segmented testing is required to drive incremental improvements for specific user groups. Developing efficient processes internally can also help improve the output, as more tests can be administered, which creates a much stronger learning curve for everyone involved.

almost 4 years ago

Chris Gibbins, Director of User Experience & Optimisation at Biglight

Hi Ian. Thanks for writing about such an interesting subject and for some good points on the importance of prioritisation.
However, I'm intrigued why you call tests based on only opinions/hunches "UX Driven Testing"? The whole field of User Experience (UX) is grounded in the principles of User Centred Design (UCD) and Human Computer Interaction (HCI) and is absolutely data driven!
"Opinion led" testing is a far less confusing choice of words, as in your comment above.

almost 4 years ago

Avatar-blank-50x50

Pedro da Silva, Digital Marketing Manager at HSBC

Hi Ian,

Great topic.

I think your headline should be 'data driven tests are 7.7 times more effective than random ideas'.

By your definition of UX as "The second is what we call a UX driven test, where someone has an idea and decides to test it."

I would say this is a gross misunderstanding of what UX is.

While I completely agree and value the data driven approach, it's not always possible to get reliable data. There are scenarios involving new design builds where past data is not going to be predictive of new designs and taking a behavioural, methodical but qualitative approach to provide alternatives will always outperform.
Even better take those designs and test them thoroughly to present the best outcome.

Cheers,
Pedro

almost 4 years ago

Ian McCaig, CMO & Founder at Qubit

Hi Pedro and Chris - Thanks for your comments. I think I should just clarify what I meant by the two groups upon which this analysis was completed - data driven vs opinion led. With all the data driven tests the hypothesis was generated based on quantitative data sets including web analytics, page interaction and large scale onsite feedback, so there was some grounded analysis completed to generate the insight. With the opinion led testing the hypothesis could have come from usability testing, internal beliefs or focus groups about how to improve the website.

I completely understand there are times when data is simply not available, and a new website build is a great example of this; in these cases other robust approaches like usability testing are preferred, or sometimes experts from within the business share their ideas to build the hypothesis. However, we believe that if you want to get the best results it is important that any testing is done based on a grounded set of data, unless of course something is just broken, in which case testing should be used to identify the best outcome from a number of alternatives.

almost 4 years ago

Chris Gibbins, Director of User Experience & Optimisation at Biglight

Hi Ian - thanks for clarifying.
I think we'll agree to disagree on the subject of what's "opinion led" then. Usability testing for example is about user interaction and what users do... not opinions! And it's an extremely useful technique for live websites as well as new builds.

Regarding A/B/n and Multivariate testing, from my experience online marketers would be taking a big risk if UX was NOT a significant factor and driver in their testing process. We've helped many companies who have previously struggled to get good results from CRO and testing even though they already had sophisticated tools in place for gathering all sorts of quant and qual data. And although some of them had struggled with dodgy testing tools, most of the problems they faced were not technology related. The two most common factors were: 1) not having a structured process, and 2) not having the expertise in UX and website optimisation (in-house or agency) to make sure that evidence-based testing was actually carried out regularly, iteratively and to a high level of quality.

almost 4 years ago


Janet Salvoni, Head of User Insights at RedEye

Ian, you quite rightly point out in your opening paragraph the need for the 21st Century Marketer to have an extensive toolkit. I totally agree, but this must also include an accurate usage of terminology. A 'UX driven' test is by definition one that is informed by DATA based on the behaviours and experiences of real users, gathered through a range of techniques including usability testing, analytics analysis and user surveys. It is misleading to suggest that 'UX driven' testing falls into the same unscientific box as tests based on random ideas.

almost 4 years ago
