However, in rushing to embrace this opportunity, it is critical that the industry holds itself to the very highest data standards, as all that glitters isn’t gold.

We have never known so much about people, their behaviours and their travel patterns. The level of business intelligence and insight that can now go into planning an OOH campaign is incredible. But in this world of geo-location data and analytics, the truth is that many providers pair questionable data with glorious front-end visualisations that obscure its inadequacies.

Having spent eight years working in sectors that use geo-data (including four years at Telstra, Australia’s largest telecommunications company), I have seen all kinds of bogus claims based on heavily over- and under-modelled behavioural segments, footfall analyses and attribution models.

Let me be clear: this is certainly not true of all providers. But it is vital to look beyond the shiny user interface and moving vector pins to verify the data and ensure it is robust and relevant to the specific needs of the location-based communications being planned.

This relevance point is key. There is data that’s good for some things and bad for others, and the trick is knowing which. Seldom is one data set good for everything.

For example, let’s imagine that we want to measure exposure to a retailer’s outdoor advertising and then see how many of the people exposed went into the relevant store. If your location data is derived from SDKs embedded in apps, it simply won’t give you the positional accuracy needed to achieve that level of attribution or to infer performance.
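To make the accuracy point concrete, here is a minimal Python sketch, not a description of any vendor’s actual pipeline: the store coordinates, ping locations and accuracy figures are all invented. It simply shows that a ping whose reported uncertainty radius dwarfs a store-sized geofence cannot be confidently attributed to a store visit, whereas a precise fix can.

```python
"""Illustration: why positional accuracy limits store-level attribution.
All numbers and data structures are hypothetical."""
from dataclasses import dataclass
from math import cos, radians, sqrt


@dataclass
class Ping:
    lat: float
    lon: float
    accuracy_m: float  # reported horizontal accuracy (uncertainty radius)


def distance_m(lat1, lon1, lat2, lon2):
    """Approximate planar distance in metres (adequate at city scale)."""
    dlat = (lat2 - lat1) * 111_320
    dlon = (lon2 - lon1) * 111_320 * cos(radians((lat1 + lat2) / 2))
    return sqrt(dlat ** 2 + dlon ** 2)


def attributable_to_store(ping, store_lat, store_lon, store_radius_m=25):
    """Only attribute a visit when the ping's whole uncertainty circle
    fits inside the store geofence; otherwise the visit is ambiguous."""
    d = distance_m(ping.lat, ping.lon, store_lat, store_lon)
    return d + ping.accuracy_m <= store_radius_m


store = (-33.8700, 151.2070)  # hypothetical store location

gps_ping = Ping(-33.87005, 151.20705, accuracy_m=8)    # precise device GPS fix
sdk_ping = Ping(-33.87005, 151.20705, accuracy_m=120)  # coarse SDK-derived fix

print(attributable_to_store(gps_ping, *store))  # True: visit can be attributed
print(attributable_to_store(sdk_ping, *store))  # False: too ambiguous to attribute
```

Both pings sit in the same place; only the second one’s uncertainty makes store-level attribution unsafe, which is exactly the problem with much app-derived location data.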

When done right, though, location data really works. Being able to ingest and use the right data sources to reveal the ‘hidden’ geographic behaviour of your most valuable customers, and using this to inform what message is delivered, when and where, can produce some spectacular business results. This is a methodology that Posterscope has applied at scale over recent years and we now have a body of evidence, not just of what works, but of how it works. And we are seeing campaign performance uplifts of 25-30%.

We apply three key rules when looking at location data that you may find useful:

1. Location accuracy

Always question and understand the geo-spatial accuracy of location data. For example, know whether you are viewing data aggregated to a 50x50m grid or a 100x100m grid, as this makes a big difference to how well you can understand the location you are studying (a point illustrated in the sketch after these rules).

2. Sample

Always question the sample base. Too often we see a mobile SDK solution that, on closer examination, is based on data heavily skewed towards a major or capital city rather than being nationally representative, or weighted too heavily to transport hubs rather than the whole city.

3. Hardware or software

Advertisers who want to understand customers in-store should consider the differences between a software and a hardware approach. Software (for example, mobile-based, adtech-derived data) provides scale, while hardware (a small cell or beacon) provides much greater depth.
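As an illustration of rule 1, the Python sketch below aggregates the same set of hypothetical pings to a 50x50m and a 100x100m grid. The points and grid arithmetic are invented purely for demonstration; the intent is only to show how quickly spatial nuance disappears as cells get coarser.

```python
"""Illustration of rule 1: the same pings aggregated to a 50x50m grid
versus a 100x100m grid tell quite different stories about a location.
Points are hypothetical, purely for demonstration."""
from collections import Counter

# Hypothetical pings as (x, y) metre offsets from a local origin.
pings = [(12, 8), (18, 44), (47, 51), (62, 35), (71, 90), (95, 97),
         (55, 5), (88, 12), (30, 77), (49, 49)]


def grid_counts(points, cell_size_m):
    """Count pings per grid cell at the given resolution."""
    return Counter((x // cell_size_m, y // cell_size_m) for x, y in points)


for size in (50, 100):
    counts = grid_counts(pings, size)
    print(f"{size}x{size}m grid: {len(counts)} occupied cells, "
          f"busiest cell holds {max(counts.values())} pings")

# At 50m resolution the pings spread across four cells, so a busy corner can
# be separated from a quiet one; at 100m resolution they collapse into a
# single cell and that nuance is lost.
```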

The other critical thing to consider is that using only one source of data in planning location-based campaigns is never a good idea. Our data strategy is based on the principle of ‘no single point of truth’. We use 33 different location data vendors and ingest, fuse and overlay these in a myriad of combinations to create the best solution for each campaign.
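As a highly simplified sketch of the ‘no single point of truth’ principle (and emphatically not Posterscope’s actual methodology), the Python example below blends footfall estimates for one site from several invented vendors, weighted by how much trust is placed in each source, rather than relying on any single feed.

```python
"""A simplified sketch of 'no single point of truth': footfall estimates
for one site from several hypothetical vendors are fused into a single
figure, weighted by trust in each source. All names, estimates and
weights are invented for illustration."""

# Hypothetical hourly footfall estimates for the same site.
vendor_estimates = {
    "mobile_sdk_panel": {"footfall": 4200, "weight": 0.2},  # big sample, coarse accuracy
    "telco_movement":   {"footfall": 3650, "weight": 0.5},  # representative, modelled
    "onsite_sensor":    {"footfall": 3400, "weight": 0.3},  # precise but single site
}


def fused_footfall(estimates):
    """Weighted average across vendors, so no single source dominates."""
    total_weight = sum(v["weight"] for v in estimates.values())
    return sum(v["footfall"] * v["weight"] for v in estimates.values()) / total_weight


print(round(fused_footfall(vendor_estimates)))  # 3685, a blended estimate
```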

There is no doubt that the much, much greater availability of location data offers the potential to plan significantly more effective campaigns. But without the expertise to understand how data is sourced, how it is cleaned and modelled, how robust it is and how appropriate it is for the task in hand, there is a real danger that money will be wasted and that the resulting campaign will actually be less effective.
