There is a fight brewing in the conversion rate optimisation (CRO) world. There are two main camps and there’s a whole lot of money at stake.

In the blue corner, we have split testing (AKA A/B or multivariate testing). Split testing has been steadily growing in prominence in digital marketing.

It has firmly proven itself to be highly effective at improving conversion rates and increasing average order values, driving often staggering increases in website revenue.

In the red corner, we have something of a newcomer: website personalisation. This is certainly a buzzword right now and, for the most part, deserves the hype it’s attracting. Companies that are getting personalisation right are offering superior web experiences to visitors and boosting conversion rates.

Great news, you might think: two practices that can deliver impressive and long-lasting conversion increases for your website. So, which do you put your money on?

The answer, perhaps predictably, is both. Yes, both of these heavyweights deliver powerful punches and they really need to be fighting on the same team, rather than against each other, to be at their most effective. (I’ll stop the quickly-becoming-tiresome boxing metaphor shortly, I promise.)

So why are they even facing off against each other in the first place? Some believe that split testing and personalisation simply don’t mix at all, while others merely suggest they are not entirely symbiotic.

From my experience, of all the arguments I hear as to why they should be kept apart, three main reasons seem to surface most frequently:

  1. “You don’t need website personalisation if you’re split testing”

  2. “You don’t need split testing if you’re personalising your website”

  3. “Split testing and personalisation can’t work together for technical reasons”

“You don’t need website personalisation if you’re split testing”

Some are of the opinion that if you’ve been split testing and optimising your site rigorously over the years then there should be no need to personalise it further. “But everything is organised perfectly,” they might cry.

“We’ve multivariate-tested every element on our site and found optimal conversion rates,” others may claim.

These people could of course be right, but only if their site sold a single product or service, to the same people, at the same time and in the same way, every time. This is unlikely.

The majority of split testing results in optimising for the average visitor. Some may love your new variation, others may hate it, but as long as the majority prefer it (at the point of testing) over the original, it typically gets implemented.

Of course, this still leaves those ‘haters’ unsatisfied, meaning that your site is not reaching its full potential. Without personalisation, you also risk your website reaching only its “local maximum” rather than its “global maximum”.

That is to say, it appears to have reached its optimal conversion rate, but this is only because it’s been optimised as far as the current design can go.

Running split testing on your website without introducing personalisation is akin to having a perfectly organised and well-labelled high street shop, but having no shop assistants to help you.

No one’s around to make product recommendations or to speed up your search for that item you always buy, but keep forgetting its location. In this shop, you’re on your own.

“You don’t need split testing if you’re personalising your website”

The opposing argument suggests that if you’re personalising your website for individual users, there is no need to split test as well.

After all, you’re offering a bespoke service now and hand-holding visitors as they navigate your site, so why do you need to worry about optimising your page layout (for example) via split testing?

Firstly, the level of personalisation possible on a user’s first interaction with your website is far lower than after multiple interactions, as you have less data to play with. That first “touch” still needs to offer a great experience.

Admittedly you can (and should) try to personalise a user’s first touch experience based on audience segment data you have gathered.

For example, if you know from previous experiments that users from location X generally prefer more prominent pricing, then it makes sense to offer this to those users from their first interaction with your site.

However, to get these kinds of insights in the first place, you still need to have confirmed a hypothesis through split testing.

Secondly, there will always be large amounts of your website that are just not practical to personalise.

Realistically, unless you have near-endless amounts of traffic or data about your visitors, you’re unlikely to be able to personalise the wording of every call to action (CTA) message or form field order.

Even if you did have enough data, the resource and infrastructure needed to create and support all variations would be impractical for most.

This leaves a lot of elements still requiring you to optimise for the ‘average’, which is where split testing comes in.

Going back to the shop simile (I told you I’d move on from boxing), personalising your website for visitors without also optimising it over time with split testing is akin to having a chaotic and run-down high street shop, but having a very friendly assistant to show you around.

You still get a crap experience in this shop, but it just happens to be a slightly more personalised crap experience.

“Split testing and personalisation can’t work together for technical reasons”

Some believe that it’s just not possible to offer website personalisation to your users at the same time as running split test experiments to improve conversion rates.

Both practices rely on comparison against some form of “control group” in order to prove their value. Many fear that if you’re trying to run a controlled split test, but at the same time personalising user experiences, then you’ll end up with skewed data and useless comparisons.

While it’s true that data skewing is a possible risk, if you pay close attention to your split testing and your personalisation, there is no reason why they cannot work together in harmony.

If you’re practicing both competently, and you’re careful about which website elements are being changed, then concurrent split testing and personalisation is no different to running simultaneous split tests: as long as each variation is exposed to personalised and non-personalised visitors in the same proportions, recorded differences in conversion rates are still valid.

If you’re really worried that an overlap may genuinely cause data skewing, you can always make the personalisation and split tests mutually exclusive, so that users are only ever exposed to one or the other, thus preventing cross-contamination.

For example, you might normally be running personalisation for 90% of your visitors and using 10% as a control group.

If you formulate a split test you want to run on the site, you could turn your personalisation down to 40% of site users for the period of the test, keep 10% as your control group, and run your split test on the remaining 50% of visitors.

Once you’ve concluded your split test and hard-coded any winning variations, you can dial your personalisation percentage back up again. There are numerous testing platforms and server-side solutions for splitting your visitors as described.
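To make the idea concrete, here is a minimal sketch of how that kind of mutually exclusive visitor split might be implemented server-side. This is an illustrative example, not any particular platform’s API: the group names, percentages (matching the 50/10/40 example above) and the salt value are all assumptions. Hashing the visitor ID gives each returning visitor a stable bucket, so no one ever sees both a personalised page and a split test variation.

```python
import hashlib

# Hypothetical allocation matching the example above:
# 50% split test, 10% control group, 40% personalisation.
ALLOCATION = [
    ("split_test", 0.50),
    ("control", 0.10),
    ("personalisation", 0.40),
]

def assign_group(visitor_id: str, salt: str = "spring-campaign") -> str:
    """Deterministically bucket a visitor into exactly one group.

    Hashing the visitor ID (plus a per-campaign salt) yields a stable,
    roughly uniform value in [0, 1), so the same visitor always lands
    in the same group for the duration of the test.
    """
    digest = hashlib.sha256(f"{salt}:{visitor_id}".encode()).hexdigest()
    # First 8 hex chars -> uniform fraction of the 32-bit hash space.
    fraction = int(digest[:8], 16) / 0x100000000
    cumulative = 0.0
    for group, share in ALLOCATION:
        cumulative += share
        if fraction < cumulative:
            return group
    return ALLOCATION[-1][0]  # guard against floating-point edge cases
```

Once the test concludes, you would simply change `ALLOCATION` (say, back to 90% personalisation and 10% control) and redeploy; changing the salt per campaign reshuffles visitors so no group is systematically favoured across tests.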

In summary:

  • It’s impossible to reach anywhere near a global maximum for conversions on your site without some form of personalisation, as without it you’ll only ever be offering the average experience.

  • It’s not feasible to personalise every element of a website, meaning there will always be a demand for split testing to improve conversion rates and conversion values.

  • The summit of optimal user experience is always moving. Markets, people, products, technology; they all change, constantly. Personalisation and split testing are both ongoing and interconnected practices that help us get as close to that summit as possible, before it moves again and we have to keep climbing.