Finding out the real impact of display advertising is vital for marketers in their struggle to justify budgets.

However, most are hampered by misleading click- and view-based measurements.

New developments in programmatic, however, have made it possible to credit sales directly to specific campaigns, paving the way for truly incremental advertising.

So, how does an incrementality model work, and how can advertisers make the switch?

My previous posts have talked about the advantages and disadvantages of the different ways we measure campaigns.

So, I wanted to take a step back and talk about the point of all these forms of measurement - what are we actually trying to achieve?

The quote that gets pulled out most often on this is John Wanamaker's (“Half my advertising is wasted...”) and with good reason.

It accurately describes the point of campaign measurement.

We’re essentially trying to work out which parts of any campaign are having the most cost-effective impact on an audience, so that overall campaign efficiency can be increased.

Improving display measurement

When agencies and advertisers look at measurement models and attribution models, they are (or should be) trying to find the fairest and most accurate way of sharing credit for the audience response, to allow them to improve the campaign plan over time. 

I previously highlighted some of the ways we can work to make traditional methods of measurement more accurate reflections of what is actually driving efficiency, but these are all just steps on the way to a better model.

For instance, whilst post-visible conversions do discount those conversions that came from unseen ads, they don’t take into account the 'natural' baseline of people who would have bought the product anyway.

The next question then is what can be done about this?

How do we filter out those conversions that would have happened anyway, to make sure we aren’t optimising towards the cheap and easy conversions over and above the genuine influence the campaigns are having on user behaviour?

This is a hot topic, often labelled 'incrementality'.

There are already some attribution companies offering solutions and most clients we talk to are aiming to resolve it.

So, why does incrementality still seem to be a problem? There are a few reasons:

  • Not many companies offer it as a form of measurement, making it difficult for advertisers to find someone who can calculate this for them.
  • It can be challenging to explain incrementality internally, particularly in large businesses that have traditionally favoured a more click-based approach.
  • The most popular methods for measuring genuine campaign uplift are very inflexible, generally assessed quarterly, and are often not granular enough to make any real difference to your campaign.

How to measure incrementality 

We started doing this a couple of years ago when a client asked us to help them prove the true value of display activity; they knew click-based attribution was undervaluing it, but felt that view-based attribution was overvaluing it.

Initially, we approached the challenge in a traditional way, comparing the performance of a charity ad to that of the branded display ad.

This was when we first came across two problems with this way of measuring uplift.

Firstly, it was very expensive - the client in question spent half their budget for the month on a banner promoting another company.

The second problem came when looking at the data. It was clear that the biggest influence on the advert’s effect was whether it was viewed, and in any campaign there is a certain proportion of non-viewable ads.

Essentially, the unseen branded ads saw the same performance as the charity ads, while the branded ads that were visible had a much stronger response from customers.

That led us to a new way of analysing the results from that test, and it also changed how we measured uplift in future tests.

So the method for measuring incrementality is essentially the same as an A/B test, but easier and less expensive to carry out.

Because unseen ads don't affect user behaviour, this gives us a control group who we've targeted but who haven't seen the message.

We compare the conversion rates of this group with the one that did see the ads. Any conversions over the ‘baseline’ of the control group are incremental, so must have been generated by the campaign.
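To make that calculation concrete, here is a minimal sketch in Python of how the baseline and incremental conversions could be worked out from the two groups. The function, figures and group sizes are illustrative assumptions, not data or code from the campaigns described in this post.

```python
# Minimal sketch of the incrementality calculation described above.
# Assumes user-level data has already been split into:
#   - a test group: users with at least one viewable impression
#   - a control group: users who were targeted but saw no viewable impressions
# All names and numbers are hypothetical.

def incremental_uplift(test_users, test_conversions, control_users, control_conversions):
    """Return the baseline rate, incremental conversions and relative uplift."""
    test_rate = test_conversions / test_users
    baseline_rate = control_conversions / control_users    # people who would have bought anyway

    expected_baseline = baseline_rate * test_users          # conversions expected with no exposure
    incremental = test_conversions - expected_baseline      # conversions credited to the campaign
    uplift = (test_rate - baseline_rate) / baseline_rate

    return baseline_rate, incremental, uplift


# Example with made-up figures: 200,000 exposed users vs 50,000 unexposed users.
baseline, incremental, uplift = incremental_uplift(200_000, 1_200, 50_000, 200)
print(f"baseline rate: {baseline:.2%}")               # 0.40%
print(f"incremental conversions: {incremental:.0f}")  # 400 of the 1,200
print(f"relative uplift: {uplift:.0%}")               # 50%
```

In this sketch, only the conversions above the control group's baseline rate are treated as incremental; everything at or below the baseline is assumed to have happened regardless of the campaign.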

Moving to an incremental model

We've been doing this for a few years, but it's yet to be widely adopted. So, as an advertiser interested in moving to measuring the incremental uplift of your campaign, what can you do?

The first step is to speak to your agency or partner to ask if they are tracking the viewability of your campaigns.

Can they get data at a user level? If they can, they should be able to calculate incrementality for you.

If they can’t get hold of the data required, it might be worth reviewing what you track currently and whether you can make changes to allow them to pick up the necessary data.

Changing the way display is measured and reported can be an upheaval. But, confidently attributing a portion of sales to display spend allows budget conversations to run more smoothly.

Over the last year many of the advertisers we work with have found moving to incremental measurement crucial in getting internal buy-in for the value of advertising.

Published 9 June, 2016 by Rachael Morris

Rachael Morris is Head of Optimisation Strategy at Infectious Media. You can connect with her via LinkedIn.

Comments (4)

Pete Austin, CINO at Fresh Relevance

Re: "By knowing unseen ads don’t affect user behaviour this gives us a control group who we've targeted but haven't seen the message."

I like this approach. Basically if you don't have numbers for something, but do have numbers for something similar, use those numbers instead.

But you have to take care. Although "unseen ads don’t affect user behaviour", this doesn't mean there's no relationship between not seeing adverts and the amount bought. I think there's reason to suspect that your control group buys less than average.

Consider people who spend little time online. These people (1) have less time to see adverts online (so they see fewer adverts) and also (2) have less time to buy products online (so they buy fewer products).

Your control group, who don't see the adverts, is likely to have a bigger proportion of "people who spend little time online" than average - hence it is likely to generate fewer sales than average.

Now follow the same logic for people who spend a lot of time online. These will have more time to buy, and so are likely to be above-average buyers, as well as more time to see adverts, so they are less likely to be in your control group. The control group contains fewer of these high spenders, so once again it's likely to generate fewer sales than average.

In conclusion, the difference between the control group and the test group may be partly due to the makeup of the control group.

Mickael ROBIN, Digital marketing manager at Jumeirah group

Using "unseen ads" to create a control group is a seducing idea to run an easy and chep A/B test.
However, we may have a conflict between hit and user scopes here: in a context where average ad-impression per user is 10, a given user may view 5 impressions and unview 5 others impressions
=> how would you manage this?

Are you aware of any other approach to setting up a control group for a display campaign?

Rachael Morris, Head of Optimisation Strategy at Infectious Media

Hi Pete,

That's a really good question - to make sure we weren't seeing differences just because of a difference in the makeup of the groups, we compared the test and control groups, looking at browsing behaviour, socio-demographic group, and behaviour onsite, all of which were aligned across the two groups. We had also previously carried out a more traditional A/B test, and compared the unseen brand ads to the seen and unseen charity ads, seeing the same behaviour across all groups except for the users exposed to the viewable brand message.
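For anyone wanting to run a similar sanity check on their own data, a rough sketch of that kind of group comparison might look like the following. The synthetic data and column names are assumptions for illustration only, not Infectious Media's actual pipeline.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Synthetic user-level data purely for illustration; the columns are assumed.
rng = np.random.default_rng(0)
n = 10_000
users = pd.DataFrame({
    "group": rng.choice(["test", "control"], size=n, p=[0.8, 0.2]),
    "pages_per_visit": rng.poisson(4, size=n),         # onsite behaviour
    "sessions_last_30d": rng.poisson(12, size=n),      # browsing behaviour
    "demo_segment": rng.choice(list("ABCD"), size=n),  # socio-demographic group
})

# Compare averages of the numeric covariates across the two groups.
print(users.groupby("group")[["pages_per_visit", "sessions_last_30d"]].mean())

# Two-sample t-test: is browsing behaviour similar in both groups?
exposed = users.loc[users["group"] == "test", "sessions_last_30d"]
unexposed = users.loc[users["group"] == "control", "sessions_last_30d"]
print(stats.ttest_ind(exposed, unexposed, equal_var=False))

# Chi-squared test: is the socio-demographic mix similar in both groups?
contingency = pd.crosstab(users["group"], users["demo_segment"])
chi2, p_value, dof, expected = stats.chi2_contingency(contingency)
print(p_value)
```

If checks like these showed a meaningful difference in the makeup of the two groups, the baseline would need adjusting before crediting the remaining uplift to the campaign.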

Rachael Morris, Head of Optimisation Strategy at Infectious Media

Hi Mickael,

In our 'unseen' group, we only include users who haven't seen any impressions at all, which helps to make sure this isn't an issue. If a user has any 'seen' impressions at all, they go into the test group, rather than the control group.
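For what it's worth, that assignment rule is simple enough to sketch in a few lines of Python. The field names are hypothetical, purely to illustrate the logic described above.

```python
# Sketch of the group-assignment rule described above (hypothetical fields).
# A user only joins the control group if *none* of their impressions were
# viewable; a single viewable impression puts them in the test group.

def assign_group(impressions):
    """impressions: list of dicts like {"viewable": True} for a single user."""
    if any(imp["viewable"] for imp in impressions):
        return "test"     # saw the message at least once
    return "control"      # targeted, but never saw the message

print(assign_group([{"viewable": False}, {"viewable": True}]))   # test
print(assign_group([{"viewable": False}, {"viewable": False}]))  # control
```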
