Auditing sites based on unique users may be a step in the right direction but until figures are adjusted appropriately, online will lack the credibility to win serious brand spend.

The UK must adopt the same measurement standards as the US.

After a great deal of browbeating from the online advertising industry, and many, many committee debates, the process of website auditing finally got into gear, lurched forward and turned a corner.

Jicwebs (The Joint Industry Committee for Web Standards), the governing industry council that decides how ABC electronic (ABCe) goes about its business, made some fundamental changes to the metrics being measured in the UK.

The committee decreed that from January 2007 the Unique User metric would replace the Page Impression metric as the mandatory minimum to be certified by ABCe.

It’s great to see some movement at long last but instead of turning into the wide open road where we can really motor, I believe we’ve turned into yet another traffic jam.

ABCe claims that it is now able to provide more attractive, tradable and comparable certificates at a lower price. The price may be lower but should we really be compromising accuracy for the sake of standardisation – thus stifling the efforts of those publishers that dare to go further?

As marketers we all agree that uniform audience measurement is important for the acceptance of the internet as an advertising medium. It’s the definition of the audience, or in this case user or visitor, that I have an issue with. 

The problem goes back to September 2004 when IAB US published an excellent paper regarding audience measurement and advertising campaign reporting standards.

The standards attempted to address the issue of different methods being used to create user figures by giving them different names.

They insisted that unique users should only be used either if users had explicitly identified themselves through login, or alternatively, if a cookie based approach had been used, a suitable adjustment should be made to equate unique cookies to people.

If no such adjustments were made, the term ‘Users’ could not be used; the term ‘Unique Browsers’ was proposed instead. They also said:

“Certain organizations rely on unique IP address and user agent string with a heuristic as a sole measurement technique for visits. This method should not be used solely because of inherent inaccuracies arising from dynamic IP addressing which distorts these measures significantly.”

All of this was put into an appendix so that Europe could opt out, leaving advertisers with no way of knowing what a European unique user figure actually represents.

This difference between measurement approaches really is quite staggering. In November 2003, whilst at RedEye, I examined data from two of the UK’s busiest e-commerce websites to compare IP-based and cookie-based analysis.

The results were pretty conclusive. The IP-based approach overestimated visitors by up to 7.6 times whilst the cookie-based approach overestimated visitors by up to 2.3 times.

More worrying from a standards point of view was that the size of the error on one site was more than double that on the other - i.e. the figures could not be cross-compared.

RedEye’s report concludes that cookie-based tracking, combined with appropriate weightings, is the only way to ensure data that is accurate enough to base strategic decisions upon.
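To illustrate what such a weighting amounts to in practice: using the overestimation factors reported above, and entirely hypothetical raw counts, the adjustment is a simple division. Both methods should then deflate to roughly the same underlying audience.

```python
# Hypothetical raw "unique visitor" counts for the same site over the
# same month (illustrative numbers only, not from the RedEye study).
raw_ip_visitors = 7_600_000      # IP address + user-agent heuristic
raw_cookie_visitors = 2_300_000  # unique cookies

# Overestimation factors from the RedEye comparison cited above:
# IP-based counts inflated by up to 7.6x, cookie-based by up to 2.3x.
IP_OVERCOUNT = 7.6
COOKIE_OVERCOUNT = 2.3

adjusted_from_ip = raw_ip_visitors / IP_OVERCOUNT
adjusted_from_cookies = raw_cookie_visitors / COOKIE_OVERCOUNT

print(round(adjusted_from_ip))       # -> 1000000
print(round(adjusted_from_cookies))  # -> 1000000
```

The catch, as the study showed, is that these factors vary from site to site, which is why a single industry-wide deflator cannot simply be decreed.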

This makes perfect sense to me. What I’m less able to comprehend however is why we aren’t doing this here, or indeed why the UK hasn’t adopted the US definition of a user.

Online marketing agencies in the UK are rolling out multi-lingual campaigns across the globe. They are crying out for a standardised, unified method of measuring websites and understanding audiences.

Print publications know their readership. They know the profile of their readers, they know what they want to read and they duly oblige.

They also know that circulation and readership are two different measurables. A copy of the London Metro is read by an average of three more people after it has been left on the 5.15 from Paddington by the first taker.

Advertisers respond accordingly because print publications adjust their methodologies to provide a credible currency.

They are able to say how they collected the figures, which is very useful information to would-be advertisers. There is scope for improvement and change if both parties know what they are dealing with. 

The online industry in the UK needs to look to this model as a benchmark. To encourage traditional advertisers and agencies to increase spending on online advertising, web publishers and ad reps increasingly need to sell inventory on the basis of reach, frequency and profile.

The current unique user definition is just not good enough – but there is no sign of any imminent change here.

In the US, all ad serving applications used in the buying and selling processes are recommended to be certified as compliant with these guidelines, which are strongly supported by the AAAA (American Association of Advertising Agencies) and other members of the buying community.

We’re trying to get our new behavioural ad server certified at the moment ourselves and the IPA is making similar moves here.

I believe that the industry needs a solution which combines panel data with site-centric measurement.

By counting the number of unique cookies at the server and comparing it to panel data, it is possible to calculate the correct weighting to be applied to the Unique Cookie figures. Advertisers then get the best of both worlds and should sleep easier in the knowledge that both approaches broadly agree.
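A minimal sketch of that hybrid calculation, using hypothetical figures and names rather than any actual ABCe or panel methodology: the panel comparison yields a people-per-cookie weighting, which can then be applied to periods where only the site-centric cookie count is available.

```python
# Calibration period: compare the two sources directly for one site.
calib_cookies = 2_300_000       # unique cookies counted at the server
calib_panel_people = 1_000_000  # unique people estimated from the panel

# People-per-cookie weighting derived from the comparison (~0.435).
weighting = calib_panel_people / calib_cookies

# Later period: only the site-centric cookie count is available,
# so the weighting converts cookies into an estimate of people.
new_month_cookies = 2_760_000
estimated_unique_users = new_month_cookies * weighting

print(round(estimated_unique_users))  # -> 1200000
```

In practice the weighting would be recalibrated regularly, since cookie deletion rates and multi-device usage shift over time.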

We need solid user figures to enable us to give accurate reach and frequency metrics. If media owners want to get more brand money, then advertisers need figures that are comparable to offline - as video advertising will be planned and bought in the same way regardless of how it is delivered.

Without this we will continue to sit in this measurement jam getting frustrated.

Paul Cook is the CEO of