A high number of page views is generally a broad indicator of quality, at least on this blog. Quality could be defined as great entertainment or helpful best practice.

Quality doesn’t necessarily dictate time spent on page; a great post can still be quick to digest. Nor does quality dictate a low bounce rate, especially as we get lots of views from social referrals (of course, if time on page were 30 seconds and bounce rate 96%, there would be mighty cause for concern).

If a post doesn’t get many views, but I receive some good comments from learned readers, I’ll generally be happy, and hope the post will bring in more traffic over time.

What am I trying to get at here? Well, measurement is both a science and an art. There are trends that cannot be denied, as well as intuition that must be followed.

There are many ways to skin a cat, but the worst thing you can do is skin the cat without explaining why the cat was flayed in such a way.

In this post I’m going to look at a possible crisis in some areas of measurement and market research. Please add your comments below.

I’m going to be a little negative, and list some things that dismay the level-headed marketer. I think there is a fair amount of misuse and misinterpretation of statistics in marketing.

I’ve tried to flesh out each sour note as best I can, to bring something constructive to the debate. I’ll start with the easy ones.

Market research

Leading questions are asked of a favourable audience.

See South Bank Centre’s campaign to redevelop the ‘skate park’ beneath them. Here’s the excellent Dan Barker on the survey that went out only to South Bank Centre members (supporters) and included questions listing only the benefits of the proposals.

There’s not much to add here. When there’s such an emotive agenda as redevelopment of a heritage site, it’s easy to spot leading questions and flawed data capture.

But when market research is carried out on more mundane subjects, the same things can occur if a marketing agency (for example) has a vested interest in showing the importance of their field.

This could be an issue for any company surveying its customers. If these inaccuracies are pointed out in a bright enough light, there may be consequences for the brand’s image.

PRs report findings of research ahead of publication, with no access to methodology.

This is particularly relevant for marketing services companies that use research as content marketing, but can also apply to B2C companies releasing survey results.

In most of academia, it’s really bad form to release the results of a study, let alone its conclusions, without at the same time releasing the methodology and detail on the dataset.

However this still goes on, nowhere more often than in market research. When we do the Econsultancy ‘digital marketing statistics of the week’ blog posts, we see this a lot. We are sent conclusions and stats with nothing to back them up. As much as we can, we try to weed these out, but the practice is rife.

What this does is devalue accurate reporting, and encourage people to believe that minimal research effort can easily yield a set of ‘conclusive’ findings to shout about.

Research is done with too small a sample size. Categories have significantly different numbers of data points in them.

The rigour of the statistical analysis is often not detectable. Weighting, p-values and the like should be used if statistics are to be accurate and representative. The visualisation of results should not disguise important facets of the dataset.

Don’t use bar charts when each category’s sample size differs by orders of magnitude.
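
To make that concrete, here’s a minimal sketch in Python (with invented sample sizes, not figures from any real survey). The margin of error on a reported percentage shrinks only with the square root of the sample size, so a category answered by 20 people carries roughly ten times the uncertainty of one answered by 2,000:

    import math

    def margin_of_error(p, n, z=1.96):
        # Approximate 95% margin of error for a proportion p from n responses.
        return z * math.sqrt(p * (1 - p) / n)

    # Two categories reporting the same 50% result from very different samples:
    for n in (20, 200, 2000):
        print(f"n = {n:>4}: 50% +/- {margin_of_error(0.5, n) * 100:.1f} points")

    # n =   20: 50% +/- 21.9 points
    # n =  200: 50% +/- 6.9 points
    # n = 2000: 50% +/- 2.2 points

A bar chart that sits the n=20 bar next to the n=2,000 bar, with no error bars and no sample sizes in the labels, hides exactly this.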

Publishing

PRs and publishers mention research without linking to it (we are occasionally guilty).

This often propagates poor research, which in turn propagates a low level of diligence in statistics and measurement in general.

Simple statistics are misreported.

For example, we once reported that X% of people shop on a mobile. The actual stat was that X% of people with a smartphone shop on a smartphone.

Two very different things, especially globally. We try not to make these mistakes, but if they happen it’s imperative that audiences point this out. Often, our readers do.
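
The difference is simple conditional-probability arithmetic. With hypothetical numbers (the X% above is deliberately left unfilled): if 60% of smartphone owners shop on their phones but only half the population owns a smartphone, then only 30% of all people shop on mobile.

    # Hypothetical figures, for illustration only.
    smartphone_penetration = 0.50  # share of all people who own a smartphone
    shop_given_smartphone = 0.60   # share of smartphone owners who shop on one

    # Share of *all* people who shop on a smartphone:
    shop_overall = smartphone_penetration * shop_given_smartphone
    print(f"{shop_overall:.0%} of people shop on a smartphone")  # 30%, not 60%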

Data is being devalued

Raw qualitative data is often written, literally, in a language anyone can understand – tweets, feedback from surveys and so on. Quantitative data, on the other hand, is often impenetrable in its raw form, and sadly, once the data is analysed, the results can be manipulated at many stages, not least when choosing which findings to report.

Of course, if quantitative data is gathered correctly and diligently, analysed accurately and reported in full, it trumps all. It is incontrovertible and can be acted on with confidence, e.g. in conversion rate optimisation.
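
As a rough sketch of what ‘with confidence’ means in practice (invented conversion numbers, standard library only): before declaring a winner in a conversion rate optimisation test, check that the difference between variants is unlikely to be noise, for instance with a two-proportion z-test.

    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        # Two-sided z-test for the difference between two conversion rates.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Invented data: variant A converts 120 of 2,000 visitors, variant B 155 of 2,000.
    z, p = two_proportion_z_test(120, 2000, 155, 2000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p is below 0.05 here, so the lift looks real

Reporting the p-value (or a confidence interval) alongside the headline number is exactly the kind of methodology detail that should travel with the result.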

Until such a time as measurers in the marketing industry are fully aware of best practice, qualitative data, as well as our intuition and our PR and branding hunches, are perhaps our most faithful instruments.

Publishers have an important part to play in championing proper analysis, and in refusing to publish the spurious stats that devalue the industry.

Sentiment analysis

Positive sentiment isn’t conveyed efficiently to upper management.

The problem with data can also impact on the softer metrics. When targets are handed out, they are necessarily quantitative and well-defined. Does this impact on communication of brand success to the board?

Edited retweets, @replies, comments, favourites: these are all things that can be counted up. In theory, reach can be ascertained too, with many of the social analytics SaaS tools available.

And yet, accurate reporting of how much benefit social brings to the brand is still a thorny issue, particularly for brands that don’t have to provide customer service on social, and for B2B brands. It’s that fairly old issue of social ROI.

(Matt Owen has written much on the Econsultancy blog about this issue – see this post on the difficulty of attribution).

The only solution to this is to view the influential and positive tweet as one would a news cutting from a high-profile rag. In the ‘old days’ one would circulate details of mentions in articles etc. in the monthly reporting, and I feel the same thing is entirely appropriate with social.

Doubtless some social media managers do this. ‘This week @AshleyFriedlein tweeted one of our articles’, or similar.

There may also be a greater cause for concern. Are PR and content becoming more nebulous, and consequently undervalued, through upper management’s lack of understanding of softer metrics? More persuasive reporting is needed to start addressing these issues.

When salespeople sell, they talk about all the soft, qualitative, emotional metrics, as well as the hard stuff. They’ll give an accurate picture of our audience, reach, sentiment etc., because they know this is the important, human side of the business.

We make some money with this rhetoric, but the same rhetoric about love from our audience is often, paradoxically, not conveyed accurately to stakeholders (on paper at least).

B2B sales attribution

Salespeople don’t care enough about the brand and attribution is difficult. Can we kill two birds with a social CRM stone?

There’s no getting around the fact that salespeople have to be incentivised on the contracts they bring in.

However, if salespeople are using ‘contract value’ as their only metric, there’s no doubt that some brands can harm their image by being too salesy. What if every salesperson were encouraged to push their own personal brand, through integration of all public communications channels (social, email, phone)?

Social CRM software (see What is social CRM?) and opening up all social networks to each member of a sales team are good for the salespeople. They know that if they can concentrate on the relationship and the conversation, they’re more likely to sell, and more likely to increase customer lifetime value. It’s also good for the brand if sales aren’t intimidating prospects, but soft selling instead.

So there’s a new softer metric born – ‘customer value as a product of contract value, customer goodwill and consequent amplification of brand’. OK, it can’t be measured yet, but it’s how sales teams should be thinking.

See Minter Dial and Eric Mellet’s free report on The Sales Organization of the Future.

If salespeople are doing more of the relationship building, pointing customers to relevant content, engaging with them and essentially curating a service, then more of that value can safely be attributed to sales. And other teams will be less sceptical about commissioned teams taking their money.

So, what are my conclusions?

  1. We need to question data, even if it nicely fits our agenda.
  2. On the other side of the coin, we need to be transparent when we present data, methodology and conclusions (without hiding behind confidentiality BS).
  3. Although professional development and remuneration will doubtless entail KPIs, hard metrics etc, qualitative findings and positive sentiment should be included more in reporting.
  4. Encouraging your staff to pursue their own ‘personal branding’ will not only lead to more sales (directly or otherwise, depending on the department) but will also lead to greater appreciation of PR and sentiment, business-wide.

This isn’t a paean to other, more rigorous industries. This isn’t a eulogy for the departing hopes of a digital, transparent market. This is an invitation to all of you, to discuss what you are happy with, and what you aren’t.

We should probably start being better at all these numbers AND report some of the love in the room, too.