
The cost of attracting high-value visitors to a website is increasing as sites compete for the same customers.

With online conversion rates in the UK falling by 55% over the past five years, the best way to increase efficiency is to exploit existing visitor streams through conversion optimisation.

To coincide with the launch of the 2012 Conversion Rate Optimisation Survey, here are seven tips to boost a website’s success...

1. You’ve got to test to be the best

Don’t rely on gut instinct when changing your site. You can obtain meaningful results by randomly displaying alternative content to visitors and measuring how often they reach the desired conversion goal. However, not all testing is the same.  

For example, a common mistake is comparing historical data in before-and-after tests. This leads to different versions being tested over different time frames, with fluctuations caused by factors such as advertising, the weather or the day of the week.
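To make the mechanics concrete, here's a minimal Python sketch of randomised assignment, with both variants measured over the same period so external factors hit them equally. Everything in it (the variant names, the hashing choice, the tallying structure) is invented for illustration rather than a reference implementation.

```python
import hashlib

# Both variants run concurrently, so advertising, weather and
# day-of-week effects are balanced across them.
counts = {"A": {"visitors": 0, "conversions": 0},
          "B": {"visitors": 0, "conversions": 0}}

def assign_variant(visitor_id: str) -> str:
    # Deterministic hash: a returning visitor always sees the same variant.
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

def record_visit(variant: str, converted: bool) -> None:
    counts[variant]["visitors"] += 1
    counts[variant]["conversions"] += int(converted)

def conversion_rate(variant: str) -> float:
    c = counts[variant]
    return c["conversions"] / c["visitors"] if c["visitors"] else 0.0
```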

2. Only clear changes bring clear results

Whether a landing page is light or dark blue is rarely important. Versions with explicit differences must be tested in order to obtain meaningful findings about visitor behaviour.  

This approach shouldn’t be a way to find out whether apples or oranges are best, but rather apples or fire extinguishers! Every element of a website is suitable for testing, and completely different designs can be tested rather than simply swapping individual elements.

It’s true that some small differences (e.g. copy) on their own may greatly affect visitor behaviour, but these strong elements need to be identified first. Imagery, information, lists, copy and buttons all tend to be strong elements.

3. Optimise where the biggest effect will be felt

It’s worth starting where the highest absolute increase can be achieved. A 100% increase might be possible on the last page of the ordering process; however, the increase in sales will be minor, as only a small percentage of overall site visitors ever see this page.

It’s far more effective to start with pages that have lots of visitors and a high bounce rate, such as landing pages.
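As a back-of-the-envelope illustration, the expected absolute gain is roughly visitors × conversion rate × plausible relative lift. The sketch below compares two hypothetical pages; all page names and figures are made up for the example.

```python
# (page, monthly visitors, current conversion rate, plausible relative lift)
pages = [
    ("landing page",        200_000, 0.02, 0.20),  # modest lift, huge audience
    ("final checkout page",   1_000, 0.30, 1.00),  # 100% lift, tiny audience
]

for name, visitors, rate, lift in pages:
    extra = visitors * rate * lift  # expected additional conversions per month
    print(f"{name}: ~{extra:.0f} extra conversions/month")

# landing page: ~800 extra conversions/month
# final checkout page: ~300 extra conversions/month
```

Even with a doubling of its conversion rate, the low-traffic checkout page delivers less absolute gain than a modest improvement to a busy landing page.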

4. The world is more complex than A & B

Simple split testing with a few alternatives is a good starting point. However, you will quickly reach the stage where more complex test scenarios are needed to obtain meaningful results.

Multivariate tests are useful for refining split tests. During split testing, only individual versions are tested against each other, but not the effect of different elements on each other. In multivariate testing, all possible combinations of alternative elements are tested.

As this can result in hundreds of possible versions, high visitor numbers are needed for successful testing. The results are highly worthwhile, as they indicate which combinations are the most promising and how important the individual elements are.
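To see how quickly the combinations multiply, consider this small Python sketch; the elements and alternatives are invented for illustration.

```python
from itertools import product

# Each element's alternatives multiply the number of versions to test.
elements = {
    "headline": ["A", "B", "C"],
    "image":    ["photo", "illustration"],
    "button":   ["Buy now", "Add to basket", "Order"],
    "layout":   ["one-column", "two-column"],
}

combinations = list(product(*elements.values()))
print(len(combinations))  # 3 * 2 * 3 * 2 = 36 versions, each needing traffic
```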

5. Checklists add nothing 

Your website has a unique visitor group. No-one can tell you what your specific visitors want and don’t want. This applies to all aspects of your website.  

Therefore, you have to be careful when applying general recommendations or tips. All changes to a website should be tested; this avoids unnecessary time and expense.

6. Conversion is not the be all and end all

It’s important to clearly define individual conversion goals (shopping basket, order process, purchase, contacts, downloads, etc.) and to keep in mind which of these defined goals you are hoping to influence with each optimisation.  

Each conversion should also be qualitatively assessed in order to increase the success of conversion optimisation. For example, completeness is relevant when looking at registrations or requests for contact: greater importance may be attached to records with postal addresses than to those without.
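One way to picture this kind of qualitative assessment is a simple scoring function that weights a conversion by its completeness. The field names and weights below are purely illustrative assumptions, not figures from the article.

```python
def conversion_quality(record: dict) -> float:
    score = 1.0  # base value of any completed contact request
    if record.get("postal_address"):
        score += 0.5   # assumed premium for a postal address
    if record.get("phone"):
        score += 0.25  # assumed premium for a phone number
    return score

leads = [
    {"email": "a@example.com", "postal_address": "1 High St", "phone": "0123"},
    {"email": "b@example.com"},
]
print(sum(conversion_quality(l) for l in leads))  # 2.75, vs a raw count of 2
```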

For brand building campaigns, assessment on the basis of conversions may show a short-term success but prove a competitive disadvantage in the long term.

7. Cheat chance

The test has hardly begun and the conversion rate has already reached unimaginable heights. So why not quickly turn off the old version and just use the new one right away? Don’t do it.  

Statistics are prone to error, and errors can be expensive: they can negate the entire optimisation, so that the conversion rate not only fails to rise but may even sink drastically.

A small data pool is the most common error. A test should run for at least a week, spanning two weekends, and there should be at least 50 conversions per version over this period. Depending on the target group, the numbers may fluctuate heavily on individual days, making a longer test period necessary.
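As a rough sketch of how such a stopping rule might be checked in practice, the Python below combines the 50-conversions-per-version floor with a standard two-proportion z-test. The function name and figures are invented for the example, and the thresholds are conventions rather than guarantees.

```python
from math import sqrt, erf

def ready_to_call(conv_a, n_a, conv_b, n_b, min_conversions=50):
    """Return (significant, p_value) for a finished A/B comparison."""
    if min(conv_a, conv_b) < min_conversions:
        return False, None  # data pool still too small to call
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_value < 0.05, p_value

# 60/2400 vs 85/2400 conversions: enough data, and p ≈ 0.035 < 0.05
print(ready_to_call(60, 2400, 85, 2400))
```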


Published 18 July, 2012 by Ellie Edwards-Scott

Ellie Edwards-Scott is Managing Director at QUISMA and a contributor to Econsultancy. 


Comments (2)


Deri Jones, CEO at SciVisum.co.uk

Testing new website page changes is of course not wrong - but I sometimes feel that marketers are not great statisticians - i.e. that design decisions are made based on ridiculously small amounts of data!

You mention that problem "... a common mistake is comparing historical data in before-and-after tests ... with fluctuations caused by factors such as advertising, the weather or day of the week".

For real statistical meaningfulness - you'd really have to compare differences over such long periods of time... that no one does that!

The aim of having evidence-based decisions is right of course - but in the end success is probably more down to the skills of the team making the calls than to the test data itself!

(NB pedantic point here - but this statistical value is in contrast to testing of performance as experienced by users, where you can run automated mystery-shopper virtual user journeys 24/7, 365 days a year: changes made to the software or hardware that impact the speed of the user experience are immediately clear and 100% statistically meaningful. The statistics are real if you can say that page 4 in the checkout journey was never faster than 2.4 seconds but, since the latest software release, is never faster than 3.9 seconds. That is an easy action item to bring up with the software team - how do we get page 4 speed back up again - and the performance data means they can't deny the problem!)

about 4 years ago


Ellie Edwards

You're right that in some cases it can be called into question why a design decision has been made without the robust data to back it up, and that is where MVT (multi-variate testing) can come in. As long as the proposed testing time frame has been agreed beforehand and the website has enough users to ensure robust conversion data, you should get tangible results that can be used in a website redesign.
Rather than testing over a long period of time, frequent testing is often more realistic and will ensure that fluctuations such as seasonal effects, competition and pricing can be taken into consideration.

almost 4 years ago
