Website testing is one of the most important ways to improve conversion. 

A recent Econsultancy roundtable on conversion rate optimisation highlighted that the prioritisation of tests is a real challenge for digital marketers. Too many tests generate little lift in website conversion. 

However, for three years in a row, A/B testing has remained the most widely used method for improving conversion rates, with over half of the companies we surveyed saying they use it.

So with this in mind, here is a simple way to improve prioritisation of tests to get better results.

Use a spreadsheet to filter and prioritise your tests

The spreadsheet should list the main facets of each test. Is it a copy, page layout, image or navigation test? 

List where the test sits: does it cover the checkout process, landing pages or call-to-action buttons? Is the test aimed at particular user segments?
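As a minimal sketch of what such a spreadsheet might hold, the log can start life as a plain CSV file. The column names and test entries below are illustrative assumptions, not from the article; adapt them to your own facets.

```python
import csv

# Hypothetical columns for the test log; extend with whatever facets you track.
FIELDS = ["name", "type", "location", "segment", "status", "lift_pct"]

tests = [
    {"name": "Shorter headline", "type": "copy", "location": "landing page",
     "segment": "all", "status": "planned", "lift_pct": ""},
    {"name": "One-page checkout", "type": "layout", "location": "checkout",
     "segment": "all", "status": "planned", "lift_pct": ""},
]

with open("ab_tests.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(tests)
```

A shared CSV (or Google Sheet) keeps the log filterable by type, location and segment, which is all the later prioritisation steps need.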

Once tests are finished, record the results

As test results build up, you can see which types of test generate the best results, and use the spreadsheet to better predict what future tests may yield. 

It will become clear if there are areas you are not testing enough. You may well find that your tests are skewed towards easy-to-test elements such as copy and layout, whereas harder-to-test areas such as the checkout process generate better results. 
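Spotting that skew is a simple aggregation over the log: group completed tests by type and compare average lift. The test types and lift figures below are purely illustrative.

```python
from collections import defaultdict

# Completed tests with their measured conversion lift in % (illustrative numbers).
results = [
    ("copy", 0.5), ("copy", 1.0), ("layout", 0.8),
    ("checkout", 4.0), ("checkout", 3.2),
]

lifts = defaultdict(list)
for test_type, lift in results:
    lifts[test_type].append(lift)

# Average lift per test type reveals where the payoff really is.
avg_lift = {t: sum(v) / len(v) for t, v in lifts.items()}
```

In this made-up data, checkout tests are the hardest to run but deliver the biggest average lift, which is exactly the kind of pattern the spreadsheet is meant to surface.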

Prioritise tests by expected lift and difficulty

This approach makes the testing process more objective, helping to minimise the HiPPO (highest-paid person's opinion) factor. 

It gives you an easy-to-use record of which tests have been done and helps drive a structured approach to testing and conversion rate optimisation. This is something Econsultancy's research has shown to be key to generating better results. 
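One simple way to turn "expected lift and difficulty" into a ranking is a lift-over-difficulty score. The backlog entries, lift estimates and 1-to-5 difficulty scale below are hypothetical, a sketch rather than a prescribed method.

```python
# Hypothetical backlog: expected lift (%) and difficulty (1 = easy, 5 = hard).
backlog = [
    {"name": "Button copy", "expected_lift": 0.5, "difficulty": 1},
    {"name": "Checkout redesign", "expected_lift": 4.0, "difficulty": 5},
    {"name": "Product page layout", "expected_lift": 2.0, "difficulty": 2},
]

# Score each test by expected lift per unit of effort, then rank descending.
for test in backlog:
    test["score"] = test["expected_lift"] / test["difficulty"]

backlog.sort(key=lambda t: t["score"], reverse=True)
```

Because the score is written down in the spreadsheet rather than argued in a meeting, the ranking is transparent, which is what keeps the HiPPO factor in check.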

Regularly review the spreadsheet criteria you use

For example, a retailer may find that optimising the product selection process yields good results so it is worth adding to the spreadsheet. 

In the roundtable we also discussed the challenges of management and structure. Centralised responsibility for testing is in danger of becoming a bottleneck. Testing today is a little like web analytics was ten years ago: then analytics was centralised, whereas today it is much more accessible. 

A similar pattern may emerge with website testing over the next 10 years.

For a deeper look at the types of conversion strategies and tactics organisations are using, in addition to the tools and processes employed for improving conversion rates, download our latest Conversion Rate Optimisation Report.

Published 3 July, 2015 by Mark Patron

Mark Patron is an independent consultant at Patron Direct and a guest contributor to Econsultancy. You can connect with him on LinkedIn.


