
A/B testing is an incredibly useful tool for designers, developers, managers and executives. Sadly, despite the benefits, it’s often underused.

The news for those who shun A/B testing is particularly bad: it can facilitate dramatic improvements in numerous KPIs, including conversions and sales, as evidenced in the following five case studies.

WriteWork.com

The right design, information architecture and copy can make a huge difference when it comes to conversions and sales. All too often, however, sites are designed, organized and written based on assumptions made by a few people.

Using A/B testing with "a radical new design", WriteWork.com says that it doubled conversions and boosted sales by 50%.

Official Vancouver 2010 Olympic Store

One of the great things about multivariate testing is that it allows for hypotheses to be tested. In developing the Official Vancouver 2010 Olympic Store website, Elastic Path Software developed a number of hypotheses about the site's all-important homepage. It then used A/B testing to put those hypotheses to the test in the real world. What it found: sometimes the things you think will have a big impact really don't.
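As background for readers new to how such tests are run: visitors are usually bucketed deterministically into a variant so each person sees a consistent experience while the hypotheses play out. The sketch below is a minimal illustration only, not Elastic Path's implementation; the experiment name and 50/50 split are assumptions.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "homepage-hero", split: float = 0.5) -> str:
    """Deterministically bucket a visitor into variant 'A' or 'B'.

    Hashing the visitor ID together with the experiment name keeps the
    assignment stable across visits without storing any state.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000  # roughly uniform value in [0, 1)
    return "A" if bucket < split else "B"

# The same visitor always sees the same variant:
print(assign_variant("visitor-42"))
print(assign_variant("visitor-42"))  # identical on every visit
```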

NutshellMath Homework Help

Google AdWords can be a company's best friend, but to have productive AdWords campaigns, you typically need high-performing landing pages.

In trying to boost the number of free signups to its NutshellMath Homework Help website, Academy 123 turned to an outside vendor, Enquiro, to revise a key landing page. Did it boost performance? Academy 123 flipped the switch on an A/B test using a Google AdWords campaign to find out.

JML Direct

A/B testing isn't just a great tool for websites, it's a great tool for email campaigns. After all, many of the same factors that can impact performance on a landing page are also present within email creative.

One of the interesting things about A/B testing is that it can provide additional insight that's crucial to maximizing ROI. JML Direct discovered this when A/B testing an email campaign: the email that appeared to have the better response in terms of click-throughs delivered half as many sales as the other.
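To make that lesson concrete, here is a minimal sketch with invented figures (not JML Direct's actual numbers) showing how the click-through "winner" can still lose once you measure sales per email sent:

```python
# Hypothetical figures for illustration only -- not JML Direct's data.
emails = {
    "Email A": {"sent": 10_000, "clicks": 800, "sales": 40},
    "Email B": {"sent": 10_000, "clicks": 500, "sales": 80},
}

for name, m in emails.items():
    ctr = m["clicks"] / m["sent"]
    sales_per_send = m["sales"] / m["sent"]
    print(f"{name}: CTR {ctr:.1%}, sales per email sent {sales_per_send:.2%}")

# Email A "wins" on click-throughs (8.0% vs 5.0%) but delivers half the
# sales of Email B -- the metric you optimize for matters.
```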

The Split Testing Guessing Game

Do you know someone who is skeptical about the power of A/B testing? You might want to direct them to this case study, which is actually five case studies in one. The author invites readers to guess which version of each page delivered the best results. As you might suspect, there are some surprises.

Patricio Robles

Published 24 February, 2011 by Patricio Robles

Patricio Robles is a tech reporter at Econsultancy. Follow him on Twitter.


Comments (8)


Gordon Campbell

Hmm. In the Vancouver 2010 example:

"This was a very tough test where even after 2400 transactions we did not have a statistically significant winner. However, a decision had to be made quickly due to the Games fast approaching. Variation A was chosen as it was converting almost 3% better than the control and had lower bounce rates overall."

So, they ran the test, learned nothing, and went with a statistically insignificant choice. And then stiffed the taxpayers of Canada with ElasticPath's bill!

It will happen in London too, don't worry.

over 5 years ago


Scott Hunt, eMarketing Executive at eSterling ltd

I'm just about to start A/B testing with Google Adwords; I believe it's part of the Google Conversion University.

I think this can be an excellent tool to refine landing pages for Adwords visitors. I can't wait to see the results.

The case studies you have provided are really great to look at from a beginner's perspective. Although it is underused at the moment, I am sure that its use will grow when people begin to place emphasis where it should be: on conversions.

over 5 years ago


Patricio Robles, Tech Reporter at Econsultancy

Gordon,

Finding out that two pages deliver the same performance is not learning nothing. Not every A/B test produces a clear winner, and that's NOT what you should expect when using it.

The value in A/B testing is that it enables organizations to move away from assumptions made by small groups of individuals -- assumptions which, no matter how smart the individuals, can prove to be wrong and expensive.

over 5 years ago


Matt Clark, Analytics / CRO Consultant at Userflow

I'm amazed that after 2,400 transactions, with one version beating the other two by 3%, the test was not statistically significant.

GWO declares a winner after less than 50 conversions in some cases (which I don't agree with).

Over 500 for each should be enough, but since the changes weren't huge in the test I'm not surprised there was only a mild improvement.

Absolutely agree with Patricio's point above. Testing is as much about learning what doesn't work as learning what does.

It's important to know you're not stepping backwards as well as knowing that you are stepping forwards!

over 5 years ago
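As a rough illustration of the statistical-significance point raised above, here is a two-proportion z-test sketch using only the Python standard library. The figures are invented to loosely resemble the scenario under discussion (roughly 2,400 transactions in total and a lift of a few percent in relative terms); they are not the actual Vancouver data.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Invented numbers: 20,000 visitors per variant, ~1,200 transactions each,
# with variant A converting about 3% better in relative terms.
z, p = two_proportion_z_test(conv_a=1236, n_a=20_000, conv_b=1200, n_b=20_000)
print(f"z = {z:.2f}, p = {p:.2f}")  # roughly z = 0.75, p = 0.45 -- not significant
```

With those assumed numbers the test is nowhere near the conventional p < 0.05 threshold, which is consistent with the experience Gordon quoted.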


Michelle Carvill

Working alongside a client we've been running A/B testing and some wider multivariate testing using Google's Website Optimiser. Whilst it's a useful tool - we've yet to run a test which shows any level of significance. However, even though the data shows as insignificant, you can still learn which is having the most impact (albeit small).

What we've learned is that it really seems to depend on how 'significant' the change is that you're testing. E.g. we are testing a headline change and some graphic changes on a payment page, so over time, with enough data through the funnel, this will no doubt show a significant result at some point. However, if we were testing a whole new home page layout, we'd have something far more significant to test, and I suspect we'd see clearer evidence.

Is this what others have found?

over 5 years ago


Matt Chandler

Sometimes it's possible to invest a lot of time and effort in split testing and not get a statistically significant result. This seemed to be the case in the Vancouver example. However, even when the outcome may not be clear cut (in terms of statistics), that's still a valid outcome, and it gives you a lot of useful information about what the critical conversion factors on your site are. A non-significant result just tells you that certain factors on your site are very insensitive to change, so it doesn't matter too much whether you put element "A" on the left or the right of the page, for example. (It depends on what you're testing, of course.)

over 5 years ago


alex avery, Inbound Marketing Consultant at Alex Avery Inc

Often it can be a GIGO scenario: "garbage in, garbage out". If you set up an A/B test and the variants you are testing are insignificant, you will get insignificant results. Like Matt says above, it might tell you the factors you are testing are "insensitive to change". Read: "you are testing the wrong thing".

A/B testing shouldn't just be about finessing a site. For that, I would consider longer-term multivariate testing, possibly year on year to appreciate seasonal weighting. With A/B, be bold! You'll get better - more significant - results.

over 5 years ago


Linda Bustos

Hi, Linda Bustos from Elastic Path here. I was not personally involved in our experiment, but I wanted to address Gordon's comment about the taxpayer. Elastic Path and other official licensees of the Olympic Games do not take any taxpayer money. Licensees pay fees for the right to conduct business with the Olympics, and pay taxes on earnings from the endeavor. The net benefit is to the Olympic City, not the other way around.

Just to clarify. And that's how it works in London too, which we are not participating in.

Re: the statistical insignificance - the larger the spread between two performers, the shorter your test. It's a lot faster to reach statistical significance if one version is killing the other. When the spread is slimmer, it takes longer, though I wouldn't call test variables "meaningless" if they're close. 1 or 2% across millions of sales over a two-week period is a lot of money to be saved by making a decision in favor of the version that performed better after 2,400 transactions.

over 5 years ago
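Linda's point about spread versus test duration can be made concrete with a textbook sample-size approximation for a two-proportion test at 95% confidence and 80% power. The 5% baseline conversion rate and the lifts below are assumptions for illustration, not Elastic Path's figures.

```python
from math import ceil, sqrt

def visitors_per_variant(base_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate (two-proportion test, 95% confidence, 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Assumed 5% baseline conversion rate; the relative lifts are illustrative.
for lift in (0.30, 0.10, 0.03):
    print(f"{lift:.0%} relative lift -> ~{visitors_per_variant(0.05, lift):,} visitors per variant")
```

Under those assumptions a 30% relative lift needs only a few thousand visitors per variant, while a 3% lift needs on the order of hundreds of thousands, which is why a narrow spread takes so long to confirm.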
