Late last year, the Media Rating Council (MRC) said it would be premature for online advertising to transact on 'viewable impressions'.
The findings of our own research show why we couldn't agree more.
'Viewability' was a hot topic in online display in 2012 and that looks set to continue this year. Ad Age put out an article a couple of weeks ago about viewability with the subtitle “lots of debate, little action”.
But, while viewable impressions as a measurement metric make sense in theory, the practical application is complicated and potentially dangerous to the short-term health of the industry for both sellers and buyers.
Thankfully, the MRC finally released its advice on viewable impressions as a standard for measuring online display advertising on November 14. It stated that it 'believes it is premature to transact on viewability' with its main concerns relating to wide variances in vendor capabilities and measurement barriers, such as cross-domain iFrames.
It subsequently emerged that the study on which the MRC based its recommendation saw wide variances in viewability results. We have been running tests with viewability vendors on our own media for some time, and our results run counter to some of the current assumptions about viewability.
To illustrate my points I’m using a correlation measure, R2 (stay with me), to show how strongly viewability correlates with performance.
So, R2=1 is a perfect match. R2=0 means the relationship is random. The data comes from a basket of campaigns run across our display division Tribal Fusion in February.
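For readers who want to sanity-check the methodology, here is a minimal sketch of how R2 is computed for a simple linear fit. The placement figures below are hypothetical, purely for illustration; they are not the Tribal Fusion campaign data.

```python
def r_squared(x, y):
    """R2 for a simple linear fit: the squared Pearson correlation of x and y."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return (cov * cov) / (var_x * var_y)

# Hypothetical placement-level data: viewability rate vs click-through rate (%)
viewability = [0.20, 0.35, 0.50, 0.65, 0.80]
ctr = [0.12, 0.08, 0.15, 0.07, 0.11]

print(r_squared(viewability, ctr))  # a value near 0 means no linear relationship
```

A perfectly linear relationship returns 1.0; scattered data like the sample above returns a value near zero, which is the pattern described in the findings that follow.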
Assumption 1: If you can’t see an ad then you can’t click on it so there should be a correlation between viewability and click-through rate
There isn’t (R2 = 0.024). Viewability tends to be measured as an average at the domain or placement level, taking no account of site design or usage.
So a placement with 20% viewability may still be drawing users in to engage (good), or may be designed to harvest clicks in less viewable areas (not good), and so have a high click-through rate.
Conversely, a placement with 80% viewability might have a non-clicking audience or be designed to discourage clicking out. Across hundreds of domains the relationship is near random.
Assumption 2: An impression shouldn’t get last-view credit if it wasn’t seen so there should be a correlation between viewability and the last-view conversion rate
There isn’t (R2 = 0.018). Granted, viewability data exposes a cluster of low-performing sites on both metrics, which has been useful from a network management perspective, but otherwise there is no relationship today.
This is more serious where ad placements are deliberately positioned in the footer to ensure they load last, but it will take some time before measurement changes site design.
In the meantime, as with click-through rates, what would viewability do to publishers with a low-viewability placement that is attractive to some valuable users?
Assumption 3: “Premium” sites should have higher viewability rates
They don’t (R2 = 0.004). We ran qualitative scoring across 84 of our largest publisher partners on factors like editorial content, professional design, ad clutter, and units above the fold and then compared the final quality score with viewability stats.
Gaming, moderated forum, product review and ecommerce sites can have very high viewability rates but are 'premium' only to some advertisers and merely satisfactory to most.
Beautifully designed, content-rich sites can have very low viewability. So, targeting on viewability does not equal targeting 'premium'.
Assumption 4: Publishers should get paid more for creating viewable impressions so there should be a correlation between viewability and site eCPM
There isn’t (R2 = 0.001). Money talks, so why should publishers listen until the industry puts a demonstrable value on this metric? I’m sure some campaigns are structured for viewability today but, despite the blog hype, it’s still a small proportion.
At worst, this transition needs to be a net-zero change for publishers, i.e. they need to make at least what they would have made before, not less.
So what have we learned?
Well, it’s clear that market forces have yet to shape the metrics for viewability and that the transition, if managed poorly, could harm more than help the industry.
If anything, at the end of last year the tide of opinion turned back toward those arguing for a gradual transition, and away from the measurement-vendor-driven message of 'adopt as fast as you can'.
But, as with many disruptive trends in ad tech over the last few years, it’s also raised our game and provided another useful tool with which to evaluate media.
The more we can use this to encourage publisher partners to create web experiences with fewer ads and richer creative options that are more likely to be seen, and the more we encourage advertisers to value that, the better the web will be for publishers, advertisers and consumers alike.
We’ll leave the debate about whether online display should hold itself to a higher standard of measurement than other media channels to another time, shall we?