When seeking to optimise a website, what is it that defines whether or not a test has been successful?

It would be easy to fall into the trap of thinking that a test is only successful if it results in a positive uplift of some sort (e.g. higher conversion rate), but in fact the truth is far more complex.

Our recent report, The Past, Present and Future of Website Optimisation, published in partnership with Qubit, examines at a more granular level the characteristics of a successful test.

The report details real, on-the-ground experiences of marketers from a variety of different industries.

It is based on in-depth interviews with senior-level executives working within ecommerce, online and marketing departments, from companies including Topshop, Ann Summers, Shop Direct, Schuh, Best Western Hotels, and LV.

For many of the executives interviewed for this report, the success of an individual test, or even a suite of tests, is not always predicated on immediate cash return.

Here’s a summary of some of the factors that make up a successful test. These points are discussed in more detail in the full report.

Clarity

Respondents agreed that a successful test was one whose outcome was incontrovertible.

Daniel Sale, head of digital marketing at Investec Private Bank, said:

If I get clarity on the impact of anything we’ve changed, that’s a big tick.

To achieve clarity you also have to understand how the way you test affects outcomes.

Small sample sizes reduce effectiveness, as does failing to repeat tests or running tests that cannot be repeated.

Equally, understanding results in context is critical.

Best Western Hotels’ head of digital Daniel Morley makes the vital observation that a 30% increase in conversion does not directly translate into a 30% increase in the P&L.

Neutral and negative are still positives

There’s an important distinction to be made between false results and neutral or negative ones, which can deliver insights just as valuable to the organisation as positive results.

The very reason we run tests is because we don’t know for certain whether our hypotheses will deliver positive business results.

Negative results might disprove an otherwise rational assumption based on other data, and at the very least they stop the business from embarking on a potentially damaging strategy.

This clearly delivers strong value to the organisation.

Chris Howard, head of digital at Shop Direct, said:

If it is going to fail, fail fast. It’s about the interaction with the wider business. The big change has been the mindset around testing over the last year. We see a positive benefit from around a third of tests.

A further third will be neutral and a third will perform below the original control.

We’ve done a lot of work with the team to be comfortable seeing test results that are worse than what went before.

Unknown unknowns

Some of the biggest successes interviewees alluded to came when the company identified changes that needed to be made by answering customer challenges that could not be divined through standard analytics.

Once revealed, these ‘unknown unknowns’ can have a significant impact on customer experience.

For example, Ann Summers was able to achieve a 2.23% increase in conversion on its homepage after testing the messaging around delivery options.

Underneath the navigation it had previously promoted free delivery on a spend over £30 or free express delivery, but testing revealed that customers were actually more interested in discreet packaging and hassle-free returns.

Results

Ultimately all tests need to be measured against their impact on the bottom line.

But despite the common focus on conversion rates, the executives interviewed for the report agreed that business efficiencies, opportunity costs and organisational benefits, which are far harder to quantify or only pay off over the long term, were of equal value.

LV’s group ecommerce director, Paul Wishman, notes that the company takes into account factors beyond increasing profits, in particular customer satisfaction:

It should go hand in hand with a 360-degree view of the metrics stacking up. Tier one are revenue and cost per acquisition, but we need a clear line of sight on tiers two and three.

There’s also a recognition that if you don’t get your customer statistics bang on and you’re not considering competitor benchmarking then you’ll be blindsided and the bar will be raised.

Inevitably though, he agrees that what drives focus is the financial bottom line.

Pragmatism is certainly key: it is clear from interviewees that optimising the customer experience is desirable, but it is not a silver bullet that will magically drive sales skywards.

Download the full report for more insights: The Past, Present and Future of Website Optimisation.