Over the past five years, A/B (or MV) Testing has grown rapidly in popularity amongst digital professionals.

Access to affordable, easy-to-use tools backed by robust mathematics has made it relatively straightforward to incrementally improve website performance through live content experimentation.
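As a rough illustration of the mathematics involved, the sketch below shows one common significance calculation for an A/B result, a two-proportion z-test with hypothetical visitor and conversion counts. Commercial tools layer far more on top (sample-size planning, sequential monitoring and so on), so treat this as a sketch rather than a description of how any particular platform works.

```python
# Illustrative sketch only: a two-proportion z-test, one common way to judge
# whether a variant's conversion rate differs from the control's.
# The visitor and conversion counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

visitors_a, conversions_a = 10_000, 400   # control (A): 4.0% conversion
visitors_b, conversions_b = 10_000, 460   # variant (B): 4.6% conversion

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled conversion rate and standard error under the null hypothesis (no difference)
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

print(f"Relative uplift: {(p_b - p_a) / p_a:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```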

Today, the question is no longer whether or not to adopt tools like Optimizely or Maxymiser, but how to run an efficient and systemised programme that a) maximises conversions, b) has a positive ROI and c) avoids performance improvements plateauing after the initial ‘low-hanging fruit’ wins.

In this article we examine how User Insight (gathered through online usability testing) improves A/B Testing programmes, drawing on real-world implementations to illustrate five best-practice recommendations:

  1. Use customer-struggle insight to prioritise test plans.
  2. Develop root-cause hypotheses.
  3. Improve variant quality.
  4. Tackle tough problems.
  5. Get a view on competitors.

*A/B Testing means both A/B and MV Testing for the remainder of this document.

1. Use customer-struggle insight to prioritise test plans

It’s prudent to use multiple sources of insight to prioritise what to A/B Test next. Yet too many teams fall back on their hunches to decide.

On the face of it, prioritising A/B Tests should be straightforward. A quick search reveals how CRO practitioners recommend approaching it; the advice usually boils down to using data to identify the highest-value pages that are easiest to change.
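As a rough sketch of that data-led approach, here is one way a team might rank candidate tests with a PIE-style score (Potential for improvement, Importance of the page, Ease of implementation); the pages and ratings below are hypothetical.

```python
# Illustrative only: ranking candidate A/B Tests with a simple PIE-style score
# (Potential for improvement, Importance of the page, Ease of implementation).
# The pages and 1-10 ratings below are hypothetical.

candidates = [
    # (page, potential, importance, ease)
    ("Checkout delivery options", 8, 9, 6),
    ("Product page images", 7, 8, 4),
    ("Homepage hero banner", 4, 6, 9),
]

def pie_score(potential, importance, ease):
    """Average of the three 1-10 ratings; higher means test sooner."""
    return (potential + importance + ease) / 3

for page, p, i, e in sorted(candidates, key=lambda c: pie_score(*c[1:]), reverse=True):
    print(f"{page}: PIE score {pie_score(p, i, e):.1f}")
```

The arithmetic is trivial; the hard part is where the Potential and Importance ratings come from.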

But, in reality, many teams tend to rely on hunches – especially those of influential executives – to prioritise their test plan.

This can lead to a plateauing of results after some initial wins (where the first “no-brainer” hunches proved right) because perceived rather than actual customer pain points are being tackled.

A straightforward way to avoid this is to identify actual customer struggle by using UX Testing to observe target customers on the key journeys.

Insight from this testing helps teams in three ways:

  1. to identify the most impactful real-world problems experienced by users that may not be evident from mining site data alone
  2. to counter a hunch-driven approach with compelling evidence – showing videos of customers struggling will convince even the most ardent executive
  3. to quickly develop root-cause hypotheses (see 2 below).

Real-world example

A high-end women’s clothes retailer had been using Optimizely for more than 12 months. Their first few A/B Tests, designed to address the hunches of the ecommerce team, resulted in decent uplifts (peaking at 5%).

The success of the initial tests led the team to continue relying on their own hunches to prioritise future A/B Tests. Over the following months the results were less impressive, even though the volume of testing increased.

Then, following two rounds of cross-device UX Testing on their key journeys, they re-prioritised their test plan to address the points of actual customer struggle that the testing revealed, such as:

  • Confusing returns policy wording that eroded trust.
  • No support for smartphone pinch-and-zoom on product images.
  • Unclear delivery options.

Left to their own hunches, the team would never have prioritised A/B Tests in these areas, and their results would have continued to plateau.

2. Develop root-cause hypotheses

Insight from UX Testing helps optimisation teams develop robust root-cause hypotheses, so that the test variants they design address an underlying issue they understand.

If teams do not understand the root-cause of a conversion problem, they are often tempted to rely on guesswork or best practice to design variants for A/B Tests.

This can limit the overall success of an A/B Testing programme, and can even lead to false positives: where an uplift is stumbled upon without the (more lucrative) underlying issue being addressed.

Real-world example

One online retailer with an increasing bounce rate on product category landing pages surmised that the lack of product filtering options was causing users to abandon.

Based on this hypothesis, the team developed design variants with more granular filtering and ran A/B Tests – leading to some improvement, but not addressing the root cause of the increasing bounce rates.

By observing customers landing on product category pages in a round of UX Testing, the team quickly identified the root cause of the abandonment.

In this case, the actual root cause was sorting options rather than filtering. The team then successfully reduced bounce rates in another round of A/B Testing.

3. Improve variant quality

Even with robust root-cause hypotheses, the success of any A/B Test depends on the quality of the design variants – how well do they address the root-cause problem?

An easy way to improve the quality of variants is for teams to gather User Insight on mock-ups or prototypes and improve them during the design phase, before they are A/B Tested.

There’s no need to wait for finished designs, and testing can be undertaken rapidly, with the design team iterating on the results. This also guards against unwittingly introducing new UX issues.

Being confident that the design variants are of the best achievable quality maximises the likelihood of A/B Testing success.

Real-world example


AO.com identified that the absence of videos on product pages was limiting the conversion opportunity.

As AO designed a “B” product page that included manufacturer product videos, it ran UX Tests to validate the variant quality with customers before running live A/B Tests.

The testing revealed that the manufacturers’ videos (highly stylised TV adverts) were of little value to users who wanted to see products in context. The AO.com team then developed a variant with videos that showed the product in a real environment (e.g. a kitchen) and used this style of video in the A/B Tests.

This resulted in a dramatic 8% uplift in online sales, an improvement that would not have been achieved had UX Testing not revealed why the team’s initial design was sub-optimal.

4. Tackle tough problems

Some complex conversion opportunities require a greater depth of insight before they can even be considered for A/B Testing.

Redesigning a menu structure is the most common example of where more extensive UX Testing is prudent.

Menus can be complex and extensive – running Card Sorting and Tree Testing ahead of any live testing can save teams from designing and running what can prove to be very complex A/B Tests.

There are times when running A/B Tests (or, more precisely, many A/B Tests) in the wild is simply not feasible. This often applies when compliance or consistency of experience is important: for example, the logged-in account area of a bank or utility company.

Real-world example


British Gas wanted to improve online bills for customers and developed four variants into visual prototypes. But, knowing that call centre staff would struggle to answer customer queries if they first had to deduce which variant was being served (had all four been tested in the wild), the team ran an extensive round of UX Testing to determine a single variant to A/B Test.

After gathering insight from over 250 customers, British Gas achieved a significant improvement with the winning variant from UX Testing.

5. Get a view on competitors

Running comparative UX Tests against the competition benchmarks both the experience and common buying objections, informing future A/B Tests.

Online consumers’ journeys are not linear. They rarely follow the pattern: 1) search Google, 2) click a PPC ad, 3) buy from that company.

Customers will compare multiple sites in multiple tabs (on desktop and tablet at least) while considering many factors, including product, price, delivery options, returns and site trustworthiness.

Understanding what matters to potential customers as they compare closely competing sites is a good way to generate new A/B Test ideas that go beyond reducing customer struggle on a specific site and delve deeper into customer psychology.

Real-world example

A UK insurance provider wanted to understand why, compared with previous quarters, more visitors were getting an online quote but never transacting on its site.

To find out, the company ran UX Tests in which users compared its site with those of its closest competitors, discovering that recent improvements to the competitors’ online offerings were behind the abandonment.

Armed with this user insight, they ran a series of successful A/B Tests that reduced abandonment.

Summary

These best-practice recommendations demonstrate how embedding UX Testing can make A/B Testing programmes more successful and efficient: team decision-making is improved through insight, eradicating hunches and involving customers at every stage.