Furthermore, for an in-depth look at this topic, download the Econsultancy/RedEye Conversion Rate Optimization Report 2013.

The research looks at the types of conversion and measurement used, as well as the tools, strategies and processes employed for improving conversion rates.

1. Which software do you use most often for A/B testing and why?

Stuart McMillan, deputy head of ecommerce at Schuh

Google Analytics and Optimizely: GA for test planning and hypothesis generation, and Optimizely for running the tests. I probably don’t need to explain why we use GA for analytics. We chose Optimizely as a testing partner for a number of reasons:

  • The cost structure is transparent and compelling.
  • The user interface is well thought out, so it’s easy to operate. It also works by generating jQuery and lets you see what it’s generating, so you can tweak the code as necessary (a minimal sketch of this kind of generated code follows this list).
  • They’re a great bunch, a pleasure to work with, and always happy to give support.
  • We use the Optimizely Platinum package which gives us role-based permissions. This feature gives us the ability to allow pretty much anyone to create tests but only more experienced testers can put the tests live. It’s my stated intention to democratise testing and have as many people as possible creating tests.
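
To make that point about generated jQuery a little more concrete, here is a minimal sketch of the kind of variation code the visual editor produces. The selector and copy are hypothetical rather than Schuh’s actual test code, and it assumes jQuery (which Optimizely bundles) is available on the page.

    // The kind of jQuery an Optimizely variation typically boils down to.
    // The selector and copy here are hypothetical, for illustration only.
    // Assumes jQuery is available on the page (Optimizely bundles its own copy).
    $('.product-page .add-to-bag').text('Add to my bag');
    $('.product-page .delivery-message').show();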

Finally, I should mention that one of the things we do as part of site optimisation is to use WhatUsersDo to test a page or journey and help come up with test hypotheses.

Alex Harris, optimisation expert and founder of AlexDesigns.com

The software usually comes down to the size of the company and its budget. For most clients we use Visual Website Optimizer and Optimizely, but many medium to large companies are using HubSpot and some use Marketo.

We also do a lot of A/B testing using email systems like MailChimp and iContact. For small lead generation and landing page projects we also use LeadPages and Unbounce.

Matt Lacey, head of optimisation at PRWD

We use Optimizely for pretty much all of our A/B testing. Having considered other testing tools, we found that Optimizely’s speed of deployment, flexibility, control and advanced feature set made it a clear choice for our business.

The tool itself is really intuitive to use. Getting simple experiments up and running is a quick process. As well as a solid basic feature set, there is a wide range of advanced configuration settings that allow you to carry out some really clever experiments.

My personal favourite feature is the ease with which we can push test data into Google Analytics and create custom segments for test participants to evaluate their behaviour in greater detail. 
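
For context, one common way of doing this at the time (a sketch under assumptions, not necessarily PRWD’s exact setup) is to write the experiment and variation names into a Google Analytics custom dimension via Universal Analytics, so that test participants can then be pulled out as a custom segment in reports.

    // Record which test variation this visitor saw in a GA custom dimension,
    // so participants can be segmented in Google Analytics reports.
    // Assumes analytics.js (Universal Analytics) is loaded and that
    // 'dimension1' has been configured as a custom dimension in GA admin.
    // The experiment and variation names are hypothetical.
    var experimentName = 'product-page-cta-test';
    var variationName = 'variation-b'; // normally supplied by the testing tool
    ga('set', 'dimension1', experimentName + ':' + variationName);
    ga('send', 'event', 'ab-testing', experimentName, variationName, { nonInteraction: true });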

Dan Barker, independent e-business consultant

I mostly use Optimizely, Google Content Experiments, tools that are built into the ecommerce platform, or whichever other tool the client happens to be using. In general, what you do is more important than the tool you use, but there are pros/cons of each:

  • Optimizely is very straightforward and easy to use, you get a 30-day free trial, it integrates with analytics packages very simply, and it’s suitable for people to self-serve even if they have very little technical know-how.
  • Google Content Experiments is good largely for landing page optimisation. Google used to have a separate set of A/B (& multivariate) testing tools called Google Website Optimizer, but it was killed off and partially folded into Google Analytics, where it was rebadged ‘Content Experiments’. Unfortunately, it was the worse of the two tools that got folded in, meaning that Content Experiments is only really appropriate either for very small sites, or for landing pages on large sites. You need a tiny, tiny bit of technical know-how to set it up, but it presents the results really nicely and lets you segment them by different audience types, different marketing channels, etc.
  • Top-secret Google Content Experiments Cheat. If you have a decent front-end developer, Google’s Content Experiments does have a neat extra feature that makes it much more flexible: the Content Experiments API (a rough sketch of how it is used follows below). There’s some decent info on that here and here.
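
As a rough illustration of that API (the experiment ID, selector and copy below are placeholders, not any particular site’s code), the pattern is to ask the API which variation a visitor should see and then apply the change with your own front-end code.

    // Assumes Google's Content Experiments JavaScript API has been loaded
    // on the page, e.g. via a script tag pointing at:
    //   //www.google-analytics.com/cx/api.js?experiment=YOUR_EXPERIMENT_ID
    // The experiment ID, selector and copy below are placeholders.

    // Ask the API which variation to show; it keeps a returning visitor
    // in the same variation.
    var chosenVariation = cxApi.chooseVariation();

    // Apply the variation with your own front-end code.
    if (chosenVariation === 1) {
      document.querySelector('.hero-cta').textContent = 'Start your free trial';
    }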

2. If you had zero budget for A/B testing an ecommerce site what option would you recommend? 

Stuart McMillan, Schuh

Optimizely’s basic package is inexpensive and it also offers a free trial. I’d say run a trial; hopefully you’ll be able to get a result that demonstrates an ROI against the cost of the basic package.

Google Content Experiments is also a great free product and it’s incredibly powerful. 

Alex Harris, Alex Designs LLC

I tend to recommend Visual Website Optimizer for any budget. I like that it’s easy to use and that you can set up the pages to track revenue and other KPIs. Plus the heatmaps and flexibility of the product make it a winner. 

Matt Lacey, PRWD

One of the great advantages of improving onsite conversion is that it can reduce the cost per conversion of your paid advertising, allowing you to reduce costs or do more for the same budget.

Therefore, I would make the case to temporarily divert some paid search/advertising budget into on-site conversion optimisation, allowing you to demonstrate the value of testing and ultimately send traffic to a better converting site. 

Dan Barker

Here are a few options:

  • Use Google’s Content Experiments.

It’s free and, even if less flexible than other tools ‘out of the box’, you can still do loads with it, learn a lot, and – perhaps – build a case to get some budget for further testing.

  • Do more research.

User testing, surveys, analysis using your analytics tools. A/B testing helps you to understand which changes have had a positive or negative impact on your results, and therefore which to keep and which to ditch. Doing a decent amount of research and properly analysing what’s happening on your site and in your marketing can often reap similar rewards in understanding which changes will have a positive impact.

  • Do some ‘pseudo tests’ instead.

Though people don’t get excited or talk about it as much as A/B or multivariate testing, it’s fairly easy to run tests without any technology. For example, let’s say you’re a retailer with 1,000 products. There are lots of things you could test related to those products: copy, imagery, pricing, etc.

For example, pick out your top 20 “most viewed” products, rewrite the copy for half of the top 20 with a particular hypothesis in mind (example: “Our product copy is really short – writing 500 words per product instead of 30 may increase sales”). Publish the new copy, and then compare any increase in view:buy ratio for those products.

Compare that increase against the other half of your top 20 products (your control group), and also compare the performance before and after the change for each of the groups. 

This is obviously nowhere near perfect, but if carried out carefully this kind of thing can give you the ability to test hypotheses without much cost or technology.
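
To make the arithmetic concrete, here is a small sketch with entirely made-up numbers: work out each group’s before/after change in view:buy ratio, then treat the difference between the two changes as a rough estimate of the effect of the new copy.

    // Rough arithmetic for the 'pseudo test' described above.
    // All of the numbers are made up, purely for illustration.
    function viewToBuyRatio(views, purchases) {
      return purchases / views;
    }

    // Products with rewritten copy (test group), before and after the change.
    var testBefore = viewToBuyRatio(12000, 240);     // 2.0%
    var testAfter = viewToBuyRatio(11500, 276);      // 2.4%

    // Products left untouched (control group), over the same two periods.
    var controlBefore = viewToBuyRatio(13000, 260);  // 2.0%
    var controlAfter = viewToBuyRatio(12800, 262);   // ~2.05%

    // Relative change within each group...
    var testLift = testAfter / testBefore - 1;          // ~+20%
    var controlLift = controlAfter / controlBefore - 1; // ~+2.3%

    // ...and the difference between the two is a (very rough) estimate of the
    // effect of the new copy, net of seasonality and site-wide trends.
    var estimatedEffect = testLift - controlLift;
    console.log((estimatedEffect * 100).toFixed(1) + '% estimated uplift from the rewrite');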

3. What is the most surprising outcome you’ve seen from an A/B test? 

Stuart McMillan, Schuh

We carried out a test on our tablet site product page call-to-action, as we wanted to check the wording of our ‘check & reserve’ CTA. We found that the less ambiguous term ‘are these in my local store’ dramatically increased the number of people using the feature.

However, the really interesting part was that we actually got more people ‘adding to bag’ too. By clarifying one button, we made the other CTA more user-friendly by removing the ambiguity.

Alex Harris, Alex Designs LLC

The most surprising thing I have seen from A/B testing is that clients need to account for seasonality and be able to make changes quickly so that they can run a lot of tests.

It is not helpful to just run one or two tests over a period of a few weeks. You will get the biggest lift from segmenting your traffic and doing many tests at once. The more tests you can run at once the quicker you can make changes and iterate using the feedback.

Once you have a working testing strategy you can then expand to testing AdWords, Facebook marketing and email sends. By creating a clear map of the winning (or losing) elements, you can then apply those same findings to your banner ads, emails and social media.

Matt Lacey, PRWD

In 2013 some of our most surprising results came as we tested the impact of Unique Selling Proposition (USP) bars, usually with three key messages below the primary navigation.

In one example we saw a lift of 4% in sales across the site for a large European retailer, simply by changing the messages in the USP bar. The key to this test was that the messages that won had come directly from the mouths of customers, from user research prior to developing the hypotheses for the test. 

Another of my favourite tests from 2013 was one we ran to find out which features of an educational subscription service were the most compelling to students, with the aim of increasing subscriptions.

We ran an experiment with five versions of a feature table where we simply changed which features were included in the Free or Premium service.

By implementing the winning version we lifted premium subscriptions from this page by 185%. Even more valuable than the lift in conversions, we learned a lot about the value of different features to users.

Dan Barker

Here are three general things that often come up:

  1. The most aesthetically pleasing version of something is often not the version that performs best in tests.
  2. Sometimes big changes have very little impact, particularly where visitors are highly motivated to buy or where the brand is very strong. User experience sometimes matters less than you’d expect.
  3. There are piles and piles written about A/B testing buttons and checkout processes, but quite often it’s the bits before that which make the bigger difference (and that includes the marketing channels that brought traffic).