MVT. It sounds exciting. It sounds intelligent. It certainly sounds like there is much more to it than plain old A/B testing.

We are a conversion optimisation agency and we have never run an MVT. Why? Let me explain.

MVT: Testing with no hypothesis

What is MVT?

In short, MVT is where you test more than one variation of more than one element of a web page simultaneously (e.g. three different headlines and three different button colours), and you let your MVT tool create numerous test variations covering every possible combination of headline and button colour.

It is often the case that businesses have 16, 32 or even 76 versions of a page being served to visitors for any one MVT test.
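To make that combinatorial explosion concrete, here is a minimal Python sketch. The page elements and variation values are purely illustrative assumptions, not taken from any particular tool:

```python
from itertools import product

# Hypothetical page elements and their variations (illustrative only)
elements = {
    "headline": ["Save time", "Save money", "Start your free trial"],
    "button_colour": ["green", "orange", "blue"],
    "hero_image": ["photo", "illustration"],
}

# A full-factorial MVT serves one page version for every possible combination
variants = [dict(zip(elements.keys(), combo)) for combo in product(*elements.values())]

print(len(variants))  # 3 x 3 x 2 = 18 page versions competing for the same traffic
print(variants[0])    # e.g. {'headline': 'Save time', 'button_colour': 'green', 'hero_image': 'photo'}
```

Add one more element with three variations and you are suddenly splitting your traffic across 54 versions of the page.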

The main alternative to running multivariate tests is running straight A/B or A/B/n tests.

Why is MVT popular?

Mainstream promotion

Google was one of the first to provide a tool that allowed website owners to run these types of tests, back in 2008.

Since then, one of the industry’s biggest and most well-known testing tools has built a business on being an “enterprise MVT tool”. MVT also sticks in the mind more easily, as three-letter acronyms tend to do, partly because the human mind likes ‘The Rule of Three’.

The term is used to describe testing in general

Often, when we speak with senior decision makers, they use MVT as the catch-all term for their optimisation strategy, even if they are mainly running A/B tests.

It sounds intelligent and complex

On the surface, MVT sounds like there is some intelligence and science behind it. The prevailing thought is: ‘This isn’t just basic A/B testing; we are testing multiple variations at the same time. It must be good.’

What is the biggest problem with MVT?

MVT lacks a crucial ingredient when it comes to running a test – a reason why. Why are we doing this test? What is our hypothesis? What are we aiming to learn from running this test? Why have we chosen to make these changes?

Running multivariate tests sidelines the skill and experience of the people planning and creating the test hypothesis and creative execution. Instead, the work is handed to the tool, which serves any number of combinations to visitors and eventually tells us which of the many variations has performed best.

MVT: No why behind your tests

Many companies invest a significant amount of budget each year in enterprise tools, with a much smaller budget invested in people and skills. This is sad.

It indicates that testing is seen as being more about the technology than about what it truly should be: a process driven by a multi-disciplinary team who create insight-driven test hypotheses across the full spectrum of testing.

What do I mean by the full spectrum of testing? Everything from simple, quick iterative tests all the way through to testing business models and value propositions.

Why MVT should be renamed NHT

Anyone who is testing should have a hypothesis behind each test. Why are we running this test? What behaviour are we expecting to change? What impact are we expecting it to have?

In its very basic form, this is how a hypothesis should be structured:

  • By changing [something] to [something else] we expect to see [this behaviour change] which will result in [the impact on our primary/secondary metric].

As businesses mature in conversion optimisation, they recognise that this basic hypothesis structure is missing one critical element: the observations and insights which led to the hypothesis being created in the first place.

This is a more intelligent structure for your hypothesis:

  • Based on [making these qualitative/quantitative observations and based on prior experience/test learnings], by changing [something] to [something else] we expect to see [this behaviour change] which will result in [the impact on our primary/secondary metric].

So there we have it: the intelligent, insight-driven hypothesis structure you should be using.
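If it helps to make this tangible, here is a minimal sketch of how that structure could be captured as a record in a testing backlog. The field names and the example values are my own invention for illustration, not an industry standard:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One insight-driven test hypothesis, mirroring the structure above."""
    observations: str        # qualitative/quantitative insight or prior test learning
    change_from: str         # [something]
    change_to: str           # [something else]
    expected_behaviour: str  # [this behaviour change]
    expected_impact: str     # [impact on the primary/secondary metric]

example = Hypothesis(
    observations="Session recordings show visitors hesitating at the delivery-cost step",
    change_from="Delivery cost revealed only at checkout",
    change_to="Delivery cost shown on the product page",
    expected_behaviour="Fewer drop-offs between basket and checkout",
    expected_impact="Higher checkout completion rate (primary metric)",
)
```

Every field has to be filled in before the test is built, which is exactly the discipline MVT lets you skip.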

Let’s go back to MVT and evaluate how this compares. A hypothesis structure for MVT could read something like this:

  • By creating [lots of variations of our control page], changing [a wide range of page elements such as our headline, image, copy and call to action colour] to [other random variations of headlines, images, copy and call to action colours], we expect that our testing tool [will create a variation for each permutation and serve these over weeks or probably months], which will result in [at least one of the variations out-performing the original, in which case we have a success and can then produce a detailed analysis report].

It doesn’t quite follow. It doesn’t have intelligence. It lacks any real form of data and customer insight. Plus it will probably take three months to get anywhere near statistical significance.

This is why MVT should be renamed NHT. No Hypothesis Testing.

Four reasons we do A/B testing rather than MVT

Since I first started my business back in 2004, we have never run an MVT. We almost exclusively run A/B tests, and here are four reasons why:

1. When each of your test hypotheses is driven by intelligent user research and prior testing, and has a clear purpose of positively altering user behaviour, you can confidently create one test variation against a control with the expectation that it will deliver an increase in the primary performance metric.

2. Tests reach statistical significance far more quickly than if you were running five or more variations at one time. Time is money (the rough sample-size sketch after this list shows how quickly the required traffic grows).

Each day is an opportunity to learn something meaningful about your business’s visitors and customers. Each day is an opportunity to create new ways of increasing the revenue and profit your visitors deliver for your business.

A/B testing allows you to run back-to-back tests covering the full spectrum of testing to build and maintain testing momentum, rather than relying on one big MVT running for weeks or months – with the often faint hope that one of the multiple variations out-performs your control.

3. A/B tests allow you to draw meaningful insights from the test outcomes themselves, whereas MVT doesn’t allow you to draw conclusions on which elements impacted your customers and which were just extra noise that had no impact.

Don’t underestimate the value of the learnings and customer understanding you can gain from “simple” A/B testing; they will allow you to make your testing programme more efficient, more progressive and can have big positive implications on the wider business.

4. With A/B testing everyone involved knows the reason they are doing what they are doing:

  • They have the hypothesis.
  • They may have seen the research and data.
  • They know what the goal of the test is.
  • They know that they are not just testing on a whim.
  • There is a ‘why’ behind the work they are doing.

MVT turns this process into a sometimes complex technical exercise just to get all the elements and variations set up, QA’d and ready to go live.

MVT isn’t, and never has been, synonymous with agility. In fact, the technical complexity of setting up and QA’ing an MVT can often be one of the major bottlenecks in a company’s testing strategy.
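To see why extra variations stretch test duration, here is a rough, back-of-the-envelope Python sketch. The baseline conversion rate, expected uplift and daily traffic are illustrative assumptions, and the calculation uses a standard two-proportion sample-size approximation rather than any particular tool’s statistics engine:

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Rough per-variant sample size for a two-proportion test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative assumptions: 3% baseline conversion, a 10% relative uplift to detect,
# and 5,000 visitors per day entering the test.
baseline, relative_uplift, daily_visitors = 0.03, 0.10, 5_000
n = visitors_per_variant(baseline, baseline * (1 + relative_uplift))

for num_variants in (2, 8, 18):  # an A/B test vs. a small and a large MVT
    total = num_variants * n
    print(f"{num_variants} variants: ~{total:,} visitors (~{total / daily_visitors:.0f} days)")
```

Under these assumptions an A/B test concludes in roughly three weeks, while an 18-variant MVT needs the better part of six months, and that is before any multiple-comparison correction, which would stretch it further still.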

So what next for MVT?

MVT needs to go into a quiet room with its big brother CRO and have a long, hard look at itself. MVT needs to realise that its time has come and gone.

Now is the time to get back to what testing and optimisation should be all about: developing intelligent hypotheses and running clean A/B tests which conclude quickly, deliver insights and learnings, and help grow businesses.

MVT should look at its bigger brother CRO

For more on this topic, check out these A/B testing success stories: