Multivariate tests, whilst marvellous things, are becoming "quick and dirty". The ease of deployment, WYSIWYG variant creation, and on-demand "live" results means that these supposedly scientific tests are being created, executed and reported on in a fashion at odds with their scientific underpinnings.
In this post, I'll try to go through what makes MVT a scientific methodology, the pitfalls of quick testing, and how to get the best out of your tests.
Multivariate testing is all about Maths
Your website is a mathematical model. Run with me on this analogy; it holds up, I promise.
You have a series of variables, X1, X2, X3 and so on up to X9million, which denote things on your website that can change: bits of text, colours, positions, whether a picture of a person is smiling or not, and so on.
So you know that some function of those variables, f(X1, X2, X3, ...) = Y1, your conversion rate.
Multivariate testing comes from a field of statistics known as multivariate analysis, designed to evaluate the importance of a variable on a result, so that you can get from a set of hundreds of variables in your model to just a handful. MVT goes one step further, in that not only can we measure significance of a variable, but also test multiple values of it.
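To make that concrete, here's a minimal Python sketch of how variables and their values multiply into a full factorial test. The page elements and variant values are invented purely for illustration:

```python
from itertools import product

# Hypothetical page elements (X1, X2, X3) and the values each can take
variables = {
    "headline":      ["Sign up free", "Get started"],
    "button_text":   ["Register", "Continue"],
    "button_colour": ["green", "orange", "red"],
}

# A full factorial MVT tests every combination of every variant
combinations = list(product(*variables.values()))
print(len(combinations))  # 2 * 2 * 3 = 12 distinct page versions
```

Three small variables already give you twelve versions of the page to split your traffic across, which is why the maths matters more than the WYSIWYG editor.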
That's the most boring maths-y part of the post, and I promise it won't get more maths-y than that. It might get more boring, but that's up to you.
You should treat every MVT as a scientific experiment. Before you create your next test, sit down and write out a specification document, which should cover:
- Why are you testing?
- What tests have been conducted previously that have relevance to this one?
- What is the objective of the test?
- What are your success criteria, i.e. how will the winner be declared?
- The tool that you're using.
- The variants created, and their reason for creation.
- Any segmentations and limiters in place.
- The time the test took to run.
- The ultimate sample size.
- The measured rate of required activity.
- The confidence rate you will accept.
- What was your "hunch" before running the test?
- Did the test agree with your hunch?
- Why do you think the winning variant won?
- How could the winning variant be improved further?
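If it helps to enforce the discipline, you can keep that specification as a structured record rather than a loose document. A minimal sketch in Python; the field names are entirely hypothetical and simply mirror the checklist above:

```python
from dataclasses import dataclass, field

@dataclass
class TestSpec:
    """Hypothetical spec record mirroring the checklist above."""
    why: str                  # why are you testing?
    objective: str            # what is the objective of the test?
    success_criteria: str     # how will the winner be declared?
    tool: str                 # the tool you're using
    variants: dict            # variant name -> reason for creating it
    segments: list = field(default_factory=list)  # segmentations and limiters
    confidence_required: float = 0.95             # confidence rate you will accept
    hunch: str = ""           # your hunch before running the test
    # Filled in after the test has run:
    duration_days: int = 0
    sample_size: int = 0
    measured_rate: float = 0.0
    outcome_notes: str = ""   # did it agree with your hunch? why did the winner win?

spec = TestSpec(
    why="Checkout drop-off is high",
    objective="Lift completed checkouts",
    success_criteria="Highest completion rate at 95% confidence",
    tool="(your MVT tool here)",
    variants={"Continue button": "Softer wording than 'Register'"},
    hunch="'Continue' will beat 'Register'",
)
```

The split between fields you fill in before the test and fields you fill in after is the point: a spec written up front stops you rationalising the result afterwards.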
Done? Great! You're ready to run your test. However, before you do, bear in mind…
(be warned, I rant a bit here)
There are always external factors
In any test, there are always external factors which will affect your results. Your tests aren't being performed in isolation: you have marketing campaigns, PR, new content, sales and promotions, all of which will draw different types of visitors to your site.
Yes, it's just like the Heisenberg Uncertainty Principle, only for like websites and stuff.
You can't do anything about this; your marketing team is not going to down tools for a week whilst you run a test.
You can partially protect yourself (if your MVT tool allows it) by segmenting your test to entries from a particular keyword or campaign, or on a cookie value if you've previously performed an RFM segmentation or similar, but you still won't get the full picture.
This might sound like there's no point doing multivariate testing, since you can never be completely confident that something is working or not. That's not what I'm saying, of course, but it makes one thing clear: whatever MVT tool you're using, it doesn't know all the facts.
Interpretation is always needed
Since your multivariate testing tool doesn't know everything, whatever result it gives you requires some further analysis. You can't take it as read that a particular piece of content performs well entirely on its own.
Sometimes you do have to put your cod psychology hat on and think about what it was about that particular variation that made it work. Was there any merchandising near it that complemented it in terms of tone of voice or imagery, for example?
You're either confident, or you're not
So, you're running your test, and hey, within an hour you've got a 2,000% increase in conversion because you've changed the word 'Register' to 'Continue'. Hooray, let's all go home for tea and cucumber sandwiches!
Multivariate tests take time, they really do. If you're going to be thorough, and run a full factorial test (running every content variant against every other content variant), then you're in for the long haul.
Google provides a handy MV test duration calculator. Be prepared to sigh.
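If you want a rough feel for the numbers before reaching for the calculator, the back-of-envelope arithmetic looks something like this. The 100-conversions-per-combination threshold is a common rule of thumb, not a guarantee of significance, and the traffic figures are invented:

```python
import math

def estimate_duration_days(combinations, daily_visitors, baseline_rate,
                           conversions_per_combination=100):
    """Rough duration estimate for a full factorial test.

    Assumes traffic is split evenly across combinations and that you
    want a fixed number of conversions per combination before trusting
    the numbers (a rule of thumb, not a statistical guarantee).
    """
    visitors_needed = combinations * conversions_per_combination / baseline_rate
    return math.ceil(visitors_needed / daily_visitors)

# 12 combinations, 5,000 visitors/day to the page, 2% baseline conversion
print(estimate_duration_days(12, 5000, 0.02))  # 12 days
```

Notice how the duration scales linearly with the number of combinations: double your variants and you double the wait, which is exactly why full factorial tests are a long haul.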
The primary danger with multivariate testing is that you're often shown results live. Some testing engines will even call a winner when they think they have statistical significance with 50 or so conversions.
50?! A sample size of 50! You can't even make a spurious claim in a TV ad for hairspray with a sample size of 50!
Just like a hairspray ad, volume is everything.
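To see why 50 conversions proves very little, compare the uncertainty around a measured rate at that volume with the same rate at 100 times the traffic. A quick sketch using the normal approximation (the figures are illustrative, and this approximation is itself shaky at small samples, which rather proves the point):

```python
import math

def conversion_ci(conversions, visitors, z=1.96):
    """95% normal-approximation confidence interval for a conversion rate."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return max(0.0, p - margin), min(1.0, p + margin)

lo, hi = conversion_ci(5, 50)        # 10% measured rate, tiny sample
print(f"{lo:.1%} to {hi:.1%}")       # roughly 1.7% to 18.3%

lo2, hi2 = conversion_ci(500, 5000)  # same 10% rate, 100x the volume
print(f"{lo2:.1%} to {hi2:.1%}")     # roughly 9.2% to 10.8%
```

At 50 visitors, "10% conversion" could plausibly mean anything from 2% to 18%; at 5,000 it means something. Same measured rate, wildly different confidence.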
One test is rarely enough
When you look at your analytics, you can always see browsing and purchasing patterns. That peak just after payday, that trough when it was sunny that weekend - remember these apply to your test as well.
In any scientific field, no result is considered conclusive unless it can be independently recreated and ratified. Now of course, you can't run your test on a different website, but you can run it at a different time, which will bring with it a different set of conditions (see external factors above). If you're to be sure that a content variant will outperform all others, you need to run the exact same test again.
In fact, you should take this as an opportunity to run a follow-up test against the original control content, refining the "winning" content with further variants.
Most magic bullets don't exist
We’ve read fantastic stories about how changing one word massively increased the conversion rate. Look! It's the Three Hundred Million Dollar Button!
But these examples are the anomalies. Don't expect to see the same results; manage your expectations. Huge increases in conversion rate rarely come from a single change, but from a larger, transformational event that encompasses not only the site, but also your marketing and merchandising.
This is why MVT is called optimisation: you are using it to finely tweak an already-working design. Obviously you can test out new design ideas, but remember that they might not work as well within the context of the larger site.
Not every test will work
You're certain that the particular design you've created is going to be the winner. It has to be. It's so obvious…
…but it doesn't happen.
Sometimes, if you're lucky, it will impact conversion negatively, which at least gives you something to analyse further. In the worst cases, it does diddly.
It's quite depressing when it happens, but happen it does. Be prepared for it.
Your users aren't stupid
We tend to treat our website users like children, thinking they will blindly follow every link and piece of microcopy to the letter. However, content and usability folks will be the first to admit that most of the text on a website doesn't get read. When it's skimmed, visitors infer the meaning, so making a small change to a piece of microcopy isn't going to change the world for you.
It all sounds pretty negative, huh? Don't be disheartened! Multivariate Testing is a great tool, but before you start posting amazing results all around the internet, make sure those results stand up to further analysis!