When seeking to optimise a website, what is it that defines whether or not a test has been successful?

It would be easy to fall into the trap of thinking that a test is only successful if it results in a positive uplift of some sort (e.g. higher conversion rate), but in fact the truth is far more complex.

Our recent report, The Past, Present and Future of Website Optimisation, published in partnership with Qubit, examines at a more granular level the characteristics of a successful test.

The report details real, on-the-ground experiences of marketers from a variety of different industries.

It is based on in-depth interviews with senior-level executives working within ecommerce, online and marketing departments, from companies including Topshop, Ann Summers, Shop Direct, Schuh, Best Western Hotels, and LV.

For many of the executives interviewed for this report, the success of an individual test, or even a suite of tests, is not always predicated on immediate cash return.

Here’s a summary of some of the factors that make up a successful test. These points are discussed in more detail in the full report.


Clear and incontrovertible results

Respondents agreed that a successful test was one whose outcome was incontrovertible.

Daniel Sale, head of digital marketing at Investec Private Bank, said:

If I get clarity on the impact of anything we’ve changed, that’s a big tick.

To achieve clarity you also have to understand how the way you test affects outcomes.

Small sample sizes reduce effectiveness, as does failing to repeat tests or running tests that cannot be reproduced.
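To make the sample-size point concrete, here is a rough sketch using the standard normal-approximation formula for a two-sided, two-proportion test. The 3% baseline conversion rate and 10% relative uplift are illustrative assumptions, not figures from the report:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative
    uplift in conversion rate (two-sided two-proportion z-test)."""
    p1 = baseline
    p2 = baseline * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A 3% baseline and a hoped-for 10% relative uplift already demand
# tens of thousands of visitors per variant.
print(sample_size_per_variant(0.03, 0.10))
```

Run a test on too few visitors and the result is noise, however encouraging the headline number looks.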

Equally, understanding results in context is critical. 

Best Western Hotels’ head of digital Daniel Morley makes the vital observation that a 30% increase in conversion does not directly translate into a 30% increase in the P&L.
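Morley's point can be sketched with some purely hypothetical figures (none of these numbers come from Best Western): because some costs do not scale with orders, a 30% lift in conversion can produce a very different percentage change in profit.

```python
# Illustrative (hypothetical) figures only: fixed costs mean a
# conversion uplift does not pass straight through to the P&L.
visitors = 100_000
conversion = 0.02           # baseline conversion rate
order_value = 50.0          # average order value
gross_margin = 0.40         # margin on each order
fixed_costs = 30_000.0      # costs that don't scale with orders

def profit(conv):
    revenue = visitors * conv * order_value
    return revenue * gross_margin - fixed_costs

before, after = profit(conversion), profit(conversion * 1.30)
print(f"profit uplift: {(after - before) / before:.0%}")
```

With these made-up numbers the 30% conversion uplift shows up as a far larger swing in profit; with heavy variable costs the distortion runs the other way. Either way, the two percentages are not the same thing.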

Neutral and negative are still positives

There’s an important distinction to be made between false results and neutral or negative ones; the latter deliver insights of equal value to the organisation as positive results.
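One reason false results crop up at all: at the conventional 5% significance threshold, a busy testing programme is very likely to produce a chance "winner" sooner or later. A quick illustrative calculation (the 20-test figure is an assumption, not from the report):

```python
# If each test of two identical variants has a 5% chance of a false
# "winner", the chance of at least one false winner across 20 such
# tests is 1 minus the chance that all 20 come back clean.
alpha = 0.05
tests = 20
p_at_least_one_false_winner = 1 - (1 - alpha) ** tests
print(f"{p_at_least_one_false_winner:.0%}")  # roughly two-in-three odds
```

That is why an incontrovertible result, and repeatability, matter as much as the direction of the result itself.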

The very reason we run tests is because we don’t know for certain whether our hypotheses will deliver positive business results.

Negative results might disprove an otherwise rational assumption based on other data, and at the very least they stop the business from embarking on a potentially damaging strategy.

This clearly delivers strong value to the organisation.

Chris Howard, head of digital at Shop Direct:

If it is going to fail, fail fast. It’s about the interaction with the wider business. The big change has been the mindset around testing over the last year. We see a positive benefit from around a third of tests. 

A further third will be neutral and a third will perform below the original control. 

We’ve done a lot of work with the team to be comfortable seeing test results that are worse than what went before.

Known unknowns

Some of the biggest successes interviewees alluded to were where the company was able to identify changes that needed to be made as they answered customer challenges that could not be divined through standard analytics. 

Once revealed, these ‘unknown unknowns’ can have a significant impact on customer experience.

For example, Ann Summers was able to achieve a 2.23% increase in conversion on its homepage after testing the messaging around delivery options.

Underneath the navigation it had previously promoted free delivery on a spend over £30 or free express delivery, but testing revealed that customers were actually more interested in discreet packaging and hassle-free returns.


Impact on the bottom line

Ultimately all tests need to be measured against their impact on the bottom line.

But despite the common focus on conversion rates, the executives interviewed for the report agreed that potential business efficiencies, opportunity costs and organisational benefits, however much harder to quantify or longer term, were of equal value.

LV’s group ecommerce director, Paul Wishman, notes that the company takes into account other factors beyond increasing profits, in particular customer satisfaction:

It should go hand in hand with a 360-degree view of the metrics stacking up. Tier one are revenue and cost per acquisition, but we need a clear line of sight on tiers two and three. 

There’s also a recognition that if you don’t get your customer statistics bang on and you’re not considering competitor benchmarking then you’ll be blindsided and the bar will be raised.

Inevitably though, he agrees that what drives focus is the financial bottom line.

Pragmatism is certainly key as it is clear from interviewees that optimisation of the customer experience is desirable but not a silver bullet to magically drive sales skywards.

Download the full report for more insights: The Past, Present and Future of Website Optimisation.


Published 9 March, 2015 by David Moth

David Moth is Editor and Head of Social at Econsultancy.



Comments (1)


Deri Jones, CEO at SciVisum Ltd

It's great to see test-based optimisation get attention - it's what my team do daily, after all (albeit bringing in the 360° perspective of: "what impact are my technology's imperfections having on the outcome, due to the user experience effect").

So having made it clear that I'm a fan: I would also like to point out that a 'test + optimise in small steps' approach does not mean you end up with a website in the top 20% or even 50% of what it could be.

Its limitations are clear when you take the analogy of climbing a mountain: you are very unlikely to get very high up the mountain if your criterion at every fork in the path is to choose the one that obviously goes up.

That strategy will 100% get you to the top of something: most likely to the top of some foothill, or some minor rise: but probably won't get you to the top half of the mountain.

So there needs to be wider thinking and planning than just 'test+ optimise in small steps'.

Like the question eCommerce Directors ought to be asking themselves at regular intervals: is it time we planned a year ahead to change platform altogether? There are a number of reasons for and against in any specific situation: it's a complex call.

Of course, an equally ineffective (but often used) strategy that attempts to add in the wider view is 'let's copy what the competition just did' - e.g. 'let's copy the same platform they have!'

