
Marketers are adrift in a sea of data. Do we need a bigger boat? Some of the most common online data pitfalls are easy to identify, but hard to avoid. In Part One of this two-part series, we look at why 72.8% of surveys aren't valid -- and the phenomenon of testing against the wrong metric.

Data Trap #1: Most surveys are junk, but some do real damage.

Now that every company is also a publisher, the survey is a set piece in content marketing. Unfortunately, most surveys aren’t valid or well designed, but these days marketers know to view surveys skeptically until they understand the source, methodology and motivation behind the research.

That healthy skepticism tends to break down when a survey is internal: polls of customers and prospects used to explore their needs, possible features or shifts in strategy. Often these surveys are fielded specifically to support a favored business case within the company.

At issue is an outsized respect for numbers, combined with an inability to see anecdotes for what they are. Here’s how it often goes down:

A survey goes out. Response isn’t great, but a hundred or so responses are collected. Whoever is responsible starts slicing up the sample by the people who matter, perhaps by role, company size, current vs. prospective customers, and the like. The small groups of respondents in each of these slices aren’t large enough to be statistically valid, but when the figures are there in black and white they become impossible to discount. “Forty percent of our respondents want something” is powerful stuff...even if what we’re really saying is, “five people want this.”
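
To put a number on that uncertainty, here’s a minimal sketch (in Python, with the 5-out-of-12 figures invented for illustration) of the confidence interval around a slice that small, using a standard Wilson score calculation:

```python
# A minimal sketch: how wide is the uncertainty when "forty percent"
# really means 5 people out of a 12-person slice? (Figures are invented.)
# Wilson score interval for a proportion, standard library only.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion (z = 1.96)."""
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

low, high = wilson_interval(5, 12)
print(f"5 of 12 respondents: somewhere between {low:.0%} and {high:.0%}")
# -> roughly 19% to 68%: far too wide to support "40% of customers want this"
```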

Then there’s the power of the anecdote, that story from customer service, a blog comment, or email that resonates. Anecdotes are best used to humanize a statistical reality. “Joyce lost her health care in 2007 when she contracted external lung syndrome…etc, etc.” The danger is when there’s no corroborating evidence to back up the anecdote. It’s a part of our psychology to latch onto stories, especially if they agree with our beliefs. So, one comment becomes the rallying cry, and its impact has more to do with the status of its champion than with any sort of formal inquiry.

I watched this happen years ago at a company that was debating the length of online articles. Two camps emerged (long vs. short) and a survey was fielded to readers. What emerged was a split picture. When asked specifically about length, most people found the articles were too long. However, their answer to a question about overall value suggested a high level of article detail was an important benefit and differentiator. In other words, articles were too long, unless it was a topic of interest to the person. This is a more nuanced finding than "people like shorter articles," so it lost the political battle at the expense of the readers and ultimately, the company itself.

Data Trap #2: We don’t have enough conversions, so…

Sample size is a problem for marketers who test. They’d like to compare results based on a hard metric like conversion, cost of acquisition, or customer lifetime value, but there isn’t enough volume. This is particularly common in B2B, where closed deals are rare and communication between sales and marketing is endangered.

The solution for many is to look at more plentiful measures that they hope correlate with financial metrics. The most common choices tend to be search and email clicks. That’s a problem because very often the aspects of an ad or email that generate a click aren’t those that make a sale down the road. This leads to optimization efforts that actually depress conversion in favor of more page views, clicks, etc.
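
As a hypothetical illustration (the figures below are made up), here’s how the two metrics can point in opposite directions when laid side by side:

```python
# A hypothetical sketch: the variant that "wins" on clicks can still lose
# on the metric that pays the bills. All numbers are invented.
variants = {
    "A: plain subject line":     {"sent": 10_000, "clicks": 300, "sales": 45},
    "B: clickbait subject line": {"sent": 10_000, "clicks": 600, "sales": 30},
}

for name, v in variants.items():
    ctr = v["clicks"] / v["sent"]    # what a click-based optimiser sees
    conv = v["sales"] / v["sent"]    # what the business actually earns
    print(f"{name}: CTR {ctr:.1%}, sales per recipient {conv:.2%}")

# B doubles the click-through rate yet produces a third fewer sales:
# optimising on clicks alone would crown the wrong winner.
```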

The solution for some has been to seek a middle ground. Some companies have found that signs of true engagement are a better indicator of future sales than traffic measures. Examples of this include time on site, repeat visits, white paper downloads, the reading of certain pages, widget interaction, email sign-ups, and podcast/webinar attendance.
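
If you take that route, it’s worth checking the proxy before committing to it. The sketch below uses synthetic data and invented field names, but it shows one simple way to see which engagement signal actually tracks closed deals:

```python
# A rough sketch on synthetic data (field names are invented): before adopting
# an engagement metric as a stand-in for revenue, check that it actually
# tracks the hard outcome across past leads.
from statistics import correlation  # Python 3.10+

# one row per past lead: engagement signals plus whether the deal closed
leads = [
    {"visits": 1, "downloads": 0, "webinars": 0, "closed": 0},
    {"visits": 3, "downloads": 1, "webinars": 0, "closed": 0},
    {"visits": 6, "downloads": 2, "webinars": 1, "closed": 1},
    {"visits": 2, "downloads": 0, "webinars": 1, "closed": 0},
    {"visits": 8, "downloads": 3, "webinars": 2, "closed": 1},
    {"visits": 5, "downloads": 2, "webinars": 1, "closed": 1},
]

closed = [lead["closed"] for lead in leads]
for signal in ("visits", "downloads", "webinars"):
    values = [lead[signal] for lead in leads]
    print(f"{signal}: correlation with closed deals = {correlation(values, closed):.2f}")

# The signal with the strongest, most stable relationship is the safer proxy
# to optimise against while conversion volume stays low.
```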

The next post will cover three other potholes to avoid: our lust for quantity, disrespect for our own time, and belief in the omniscience of the voice of the customer.

Published 16 March, 2010 by Stefan Tornquist @ Econsultancy

Stefan is Vice President of Research (US) for Econsultancy. You can connect with him via LinkedIn.

Comments (3)

Martin Dower, CEO at Connected-uk.com LLP

Spot on Stefan. It's interesting to see how much data can be abused by (mainly) marketing departments in support of their strategy rather than a "true picture" of what is going on. Far too much weight is then applied to numbers that don't actually mean very much and, worse still, the statistical skills of most marketing departments are close to zero, so the numbers are interpreted poorly.

I think we will see a big increase in the hiring of Chief Information Officers: senior exec-level appointments with the brief of properly understanding the important numbers in their business. This will give us some sanity, at last.

over 6 years ago

dineplaces

A good article here.

over 6 years ago

Deri Jones, CEO at SciVisum.co.uk

Stefan

Whilst I agree that, basically, marketing guys don't understand statistical theory!

I am also often surprised that they have no data to hand at all for key online measurements.

For example, when we're speccing up a load test for a client, their tech team may be clear that they want to ensure Module X is tested because they have a hunch it may be a bottleneck.

Then we ask the marketing team which user journeys the test should focus on - and they want to discuss that a little, that's cool.

But when we ask for specific numbers - e.g. what was the maximum number of people completing the checkout in your busiest hour last year... everyone around the table looks at each other, shakes their heads and says 'we'll have to find that out'.

Or the peak number of failed checkouts in that hour.

Or sometimes even just £ per hour is not known!

We often ask about the ratio of add-to-basket journeys versus checkout journeys so we can model it - usually someone will know the average across the year, but what about the ratio after a major email campaign? Or during a sale?

(Occasionally, people will have numbers to hand that are less than useful. E.g. they may have brought in a supplier for a load test of the site, confident that the output will be of value, when actually it is as vague as 'we can handle 10,000 pages/minute for URL XYZ', and '2,000 a minute if we hit URL X followed by URL Y' - with no meaningful user journey data.)

I guess I'm saying that if we mostly don't know enough about the usage peaks of our site, it's hard to have confidence in knowing what to do about them.

And that our lack of ready numbers is not because we don't understand statistics, but more because we haven't asked, or found out...

over 6 years ago
