
In the final post of the five-part series, I explain why you want to swap out your vanity metrics for sanity metrics in conversion optimisation.

Overview

This article focuses on what will become a crucially important consideration as the industry matures and more and more businesses recognise that data-driven optimisation has the potential to significantly grow their business.

One of the primary themes running through this series is quality: from the quality of your research and your test hypotheses, all the way through to the quality of the people responsible for delivering your testing strategy and the quality of your optimisation methodology.

This article adds to that list by explaining the quality metrics you should be focusing on.

The definition of vanity and sanity

Just to make this clear, here is how Dictionary.com describes each of these words: 

  • Vanity: lack of real value; hollowness; worthlessness.
  • Sanity: the state of being sane; soundness of mind. 

Would you rather your optimisation strategy provide you and your business stakeholders with a sound state of mind, or do you want your strategy to feel like it lacks real value and impact?

What are vanity metrics in conversion optimisation?

Here are some of the most often quoted metrics, along with some rationale:

Average number of tests run per month

This may sound good, especially if you have a high number for this, but what has been the impact of all these tests?

How strong were your test hypotheses? What have you been able to learn and take away from each of these tests?

Number of tests run in the last 12 months

Brands that run a high number of tests each month (think AO.com, Shop Direct, Amazon and Booking.com, to name a few) are then able to shout about how many tests they have run in the last 12 months.

For some businesses this will be a very high number, but to bring it back to the key point of this article: what has been the quality and average success rate of all these tests?

Number of tests running at any one time

If you have multiple tests running at any one time, how are you accounting for behavioural changes if visitors see conflicting or non-seamless test variations at different stages of their browsing journey?

Are you measuring both the micro and macro conversion rates for each test, or are you just aiming to push people down the funnel without a clear understanding of their ultimate end point?

Are you just running these tests to hit a test volume KPI?

The number of variations in your MVT

For some businesses, multi-variate testing is one of the major ways of testing their on-site experience.

We have worked with brands who have run a 32 variation product page test. A huge challenge for running MVT tests is the significant increase in traffic and conversions needed to get accurate, significant results compared to a simpler A/B test.

In addition, when bigger single improvements to a core page in your user journey could be achieved by a straight A/B test, running a large number of variations that merely tweak a key page will not get you to the promised land of bigger conversion uplifts through more progressive, radical testing.
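To make the traffic point concrete, here is a minimal sketch of how required traffic scales with the number of variations. It uses the standard two-proportion approximation at 95% confidence and 80% power; the base rate and lift figures are hypothetical.

```python
from math import ceil

def visitors_per_variation(base_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect a relative lift
    in a two-proportion test (95% confidence, 80% power). A rough planning
    figure, not an exact power analysis."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    pooled = (p1 + p2) / 2
    n = ((z_alpha + z_beta) ** 2) * 2 * pooled * (1 - pooled) / (p2 - p1) ** 2
    return ceil(n)

# Hypothetical example: a 3% converting page, aiming to detect a 5% relative lift.
per_variation = visitors_per_variation(0.03, 0.05)

ab_total = per_variation * 2    # control + one variation
mvt_total = per_variation * 32  # the 32-variation test mentioned above

print(per_variation, ab_total, mvt_total)
```

With these numbers each variation needs on the order of a couple of hundred thousand visitors, so a 32-variation MVT needs roughly 16 times the traffic of the equivalent A/B test before it can call a significant winner.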

The click-through rate increases (irrespective of impact on primary CR)

In some cases, businesses can take a short term view of the impact a test has had, focusing on the click-through rate on the page that is being tested.

Not having a clear understanding of what impact the changes have had on your primary conversion rate metric is dangerous territory.

You can easily get huge increases in click-through rates by misinforming visitors, using dark patterns of persuasion, but what is going to happen with these visitors when they get to the final conversion point and realise they have been mis-sold?

What are the key issues with vanity metrics?

Here are some of the key issues which mean that focusing on vanity metrics won’t really help grow (not just optimise) your business through optimisation:

  • The numbers may sound good (and probably far bigger than most businesses), but are they actually delivering genuine, sustainable uplifts in your primary conversion metrics?
  • Running lots of tests simultaneously can lead to far less clarity of what has and hasn’t ultimately influenced user behaviour (unless measures are in place to be able to measure each individual journey and the impact they have on your primary conversion metrics).
  • It can be easy to overlook that the more tests you have, the more hypotheses you need. The more hypotheses you need means more planning, design and tech time required - not to mention the increase in analysis and identification of learnings required (when doing optimisation in a structured, high-quality way).
  • Jeff Bezos, Amazon CEO, said in 2004 (yes, 2004!): “If you double the number of experiments you do per year you're going to double your inventiveness.”

    I say, for every other business that isn’t as big and significant as Amazon: "If you double the intelligence that goes into your test hypotheses, you double the potential impact those hypotheses will have on your primary growth metrics".

  • You are less likely to end up planning and delivering more innovative or radical tests that have the potential of delivering bigger impact results, as you are mainly focused on quick and simple tests probably delivered just through your testing tool.

One thing to make clear: if your business focuses on a combination of some of the vanity metrics along with a range of the sanity metrics (I will explain this shortly), then potentially you are in the best place to grow through research and data driven optimisation. 

What are sanity metrics in conversion optimisation?

Here are some of the far less used, yet ultimately vital metrics in Conversion Optimisation, along with some rationale:

The percentage of tests that deliver an uplift

This metric immediately focuses attention on whether or not the tests you are running are actually making an impact to your primary growth metrics.

So if you are running at over 90%, your business is either one of the best optimisation case studies around, or you have an incredibly poor website where pretty much any sensible change will improve the user experience.

At the other end of the spectrum, if you have a 50% or lower rate of tests which deliver an uplift in your primary metric, there is the potential that you are using valuable resource and time developing tests which, unfortunately, just aren’t delivering real impact to your primary metrics.

The average % uplift per test 

By having a clear focus on this metric, businesses are really embracing the idea that testing is all about the quality of the hypotheses and the potential impact the test will have.

Not only are these businesses focussing on having a high success rate of tests delivered, they are then looking at how they can increase the actual impact their successful tests are having on their business.

The percentage of successful tests that deliver over 5% CR increase

Taking things a step further than the average percentage uplift per test, there are those few and far between businesses who are seriously looking at how they can maximise the impact their testing is having on their business.

They will be keen to understand how many of the tests that they plan and deliver are actually achieving a substantial lift in their primary conversion metric.

We are now moving away from just looking at marginal gains through testing.
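The three sanity metrics above can all be derived from a simple log of test outcomes. A minimal sketch, using an entirely hypothetical list of relative lifts in the primary conversion metric (negative values are losing variations):

```python
# Hypothetical test log: relative lift per concluded test.
results = [0.08, -0.02, 0.035, 0.12, -0.01, 0.06, 0.00, 0.045]

wins = [r for r in results if r > 0]

win_rate = len(wins) / len(results)                         # % of tests with an uplift
avg_uplift = sum(wins) / len(wins)                          # average uplift of winners
big_win_rate = sum(1 for r in wins if r > 0.05) / len(results)  # share delivering >5%

print(f"Win rate: {win_rate:.1%}")
print(f"Average uplift per winning test: {avg_uplift:.1%}")
print(f"Tests delivering over 5% uplift: {big_win_rate:.1%}")
```

Tracking these three numbers over time, rather than the raw count of tests launched, is what shifts the conversation from volume to impact.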

The percentage ROI per test 

Businesses measuring this metric are really focusing on what, if any, commercial impact each test is having for their business.

Providing you have a high percentage of tests that deliver an uplift, and even more so if your average uplift per test is good (3-5% or above), by using this ROI metric you should have a continual commercial business case to push on with your optimisation strategy.

The percentage reduction in cost-per-acquisition (CPA)

At the time of writing this (but hopefully not for too many more years), almost all businesses would rather increase their spend on acquisition to increase sales, rather than investing in on-site optimisation to increase conversion, reduce cost-per-acquisition and ultimately increase sales.

For those businesses who are using research and data-driven optimisation to grow their business, one of the most satisfying metrics to reduce is their cost-per-acquisition, potentially providing them with more budget to invest in further growth optimisation in the process.
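The mechanism is simple arithmetic: CPA is spend divided by conversions, so at constant spend any conversion rate uplift reduces CPA. A minimal sketch with hypothetical figures:

```python
def cpa(ad_spend, conversions):
    """Cost-per-acquisition: spend divided by conversions won."""
    return ad_spend / conversions

spend = 50_000      # monthly acquisition spend (hypothetical)
visitors = 100_000  # monthly visitors driven by that spend
base_cr = 0.02      # 2% baseline conversion rate

before = cpa(spend, visitors * base_cr)         # 25.00 per acquisition
after = cpa(spend, visitors * base_cr * 1.10)   # 10% CR uplift: ~22.73
reduction = (before - after) / before           # ~9.1% lower CPA

print(f"CPA falls from {before:.2f} to {after:.2f} ({reduction:.1%} reduction)")
```

Note the uplift and the CPA reduction are not quite the same number: a 10% relative lift in conversion rate cuts CPA by 1 − 1/1.1, about 9.1%.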

What are the key benefits of sanity metrics?

Here are some of the key benefits which mean that focusing on sanity metrics will help actually optimise and, more importantly, grow your business through optimisation:

  • It focuses everyone involved on ensuring there is a strong, research- and data-driven hypothesis for each test.
  • It ensures that there is accountability as to the quality of the tests that are chosen and the impact they are having on your business performance.
  • It will lead to a higher level of increases in your primary conversion metrics.
  • These bigger impact tests provide more compelling stories to share throughout your business.
  • You are much more likely to build test momentum and get full company buy-in as there are tangible, stronger increases to business performance.
  • There is time to carry out the appropriate analysis and identification of learnings on each test.
  • Rather than a "finish a test and move on" mentality, you are focused on identifying learnings and building upon completed tests.
  • You are far more likely to develop the business case and strong hypotheses for carrying out not just simple tests but radical, innovative tests.
  • As a business and as a team you are focusing on the effectiveness of your testing strategy.
  • Your reduced CPA means you have further budget to grow even quicker through optimisation.
  • Your business moves from “doing testing” to “testing being central to your businesses growth”.

Why are sanity metrics so important in conversion optimisation?

If the key benefits of sanity metrics aren’t enough to create a business case for focusing on these rather than the usual vanity metrics, here are some further key points for consideration:

Time is money

Testing takes time. Doing lots of testing, therefore, takes lots of time.

Doing a lot of testing without strong hypotheses and without a clear methodology is potentially a waste of valuable time.

The vast majority of businesses in our experience simply don't have the time and resource to run lots of tests on a monthly basis.

Resource is money

Would you rather spend people’s precious time getting lots of tests live, or getting fewer, higher-quality tests live?

As with most businesses, people involved in testing already have a range of other business responsibilities, so how they choose or are able to spend their time is hugely important.

The effectiveness of your resource should not be underestimated.

Expertise is money

Whether internal or external, people’s expertise costs money.

Would you rather their expertise was used to deliver five tests with weak hypotheses and execution, or three tests with really strong hypotheses and a strong, persuasive execution?

Everyone has their existing day jobs (well, apart from the lucky few who have dedicated roles around CRO!)

In time people will be given dedicated roles focussed purely on delivering different parts of their company’s optimisation strategy.

Until this time, almost all people will be tasked with juggling their existing responsibilities with new requirements to support the testing strategy.

For everyone’s sake, the time that is made for optimisation should be quality time.

You can be testing for one or two years and not really feel the commercial impact of running tests

Many times we speak to businesses who have been running A/B tests for one to two years, and when we ask how their testing has grown their business, most often we are met with “mmm, it’s hard to say really… we’ve had some good results though”.

Worse still we sometimes hear a business saying that they used to do testing quite frequently, but they have been too busy to test recently.

Testing should never be seen as something your business is just doing; it should be recognised as the most significant enhancement in digital marketing today.

Data driven optimisation has the potential to drive growth 

Other businesses we speak to, when asked about the testing they have been doing over the last one to two years, talk about how “we’ve done loads of testing, about 100 in the last year!” (or some other large number).

Testing isn’t just a race to the top of who can be running the most tests – quality & growth should be your go-to words when assessing the impact of your optimisation strategy.

The five words that should describe your optimisation strategy

I’m almost at the end of my five-part series and it’s now time to condense everything down to just five words.

Before I do that, if you haven’t already seen my previous four articles in this series I recommend that you do first:

  1. Conversion optimisation – assess the maturity of your current approach
  2. Five characteristics of businesses ready to grow through data driven optimisation
  3. The four critical areas for long term growth through optimisation
  4. What the German football team can teach us about conversion optimisation

OK so the five words that should describe your optimisation strategy are:

  1. Intelligent
  2. Quality
  3. Central
  4. Educational
  5. Transformational 

Thanks for reading, and please do share your challenges and successes in the comments.

Happy growing!

Paul Rouke

Published 12 May, 2015 by Paul Rouke

Paul Rouke is Founder & CEO at PRWD, author and a contributor to Econsultancy. You can follow him on Twitter or hook up with him on LinkedIn.


Comments (4)


Harekrishna Patel, eCommerce Marketing Consultant at XtremeUX

Really interesting thoughts!

This article can help to improve efficiency if you are working as an analyst with deadlines. We need to analyze that "What really works" in place of "What really looks good".

almost 2 years ago


Tom Waterfall, Director of Optimisation Solutions at Webtrends

Good post, thanks Paul.

I'd add that while both average uplift % per test and # of tests that have > 5% uplift are worthy of tracking for your CRO program, they depend greatly on where and when you're running your campaigns.

If, for example, you're running a test on a higher converting page in the funnel, let's say a payment page converting at 80%+ (which wouldn't be uncommon for an airline), then a 2% lift is pretty fantastic and translates to a significant revenue potential. Conversely, a campaign on a landing page that converts to your bottom line goal at 2% only needs a statistically significant lift to 2.1% in order to hit the 5% lift benchmark.

I'd suggest perhaps looking at revenue/return per test as a better indication of the success of a program (or # of leads, subscriptions, etc. - whatever your primary metric may be).

almost 2 years ago

Paul Rouke, Founder & CEO at PRWD

@Harekrishna- thanks for your comment. Replacing "What really looks (or sounds) good" with "What really works" is a nice simple way to describe this.

@Tom - thanks for your comments and additional input. It sounds like you are underlining the importance of the % ROI metric I have detailed in the sanity metrics section. Not only does this metric take into account the commercial impact, but it also takes into account (where feasible to estimate) the effort and investment that goes into a particular test.

It certainly sounds like you prefer to focus on sanity metrics which is good to hear!

almost 2 years ago


Lukasz Twardowski, CEO at UseItBetter - analytics for UX and CRO

@Paul, that's a good summary. Quick question regarding the benchmarks you gave. You say 50% or lower is a poor number and that a test should aim for 5% uplift in conversion. Can you give me a ballpark for an average yearly revenue generated by experimentation programme running at such rates?

almost 2 years ago
