The value of analytics data is without question, and countless companies use quantitative analysis of this data to improve their user experiences.
The numbers don’t lie: changes informed by quantitative analysis routinely reverse negative trends like shopping cart abandonment and lift conversion rates.
Unfortunately, the efficacy of quantitative analysis leads far too many companies to underestimate the value of qualitative analysis. Usability testing with focus groups, for instance, can be time-consuming and expensive and, if not run properly, can produce misleading results. But that doesn’t mean focus groups aren’t worthwhile.
One of the biggest problems companies face when they rely too heavily on quantitative analysis is that they assume they understand why the changes they make produce the results they do.
While it’s often possible to form reasonable hypotheses, without talking to users the answer to “Why?” may never be definitively known. This is particularly true with A/B testing, where small, seemingly minor changes can produce significant increases or decreases in important metrics.
Some argue that understanding why these changes produce the results they do is unimportant. If conversions are where a company believes they should be, for example, understanding why something works matters less than the fact that it works.
But as more and more companies seek to create good user experiences across channels, and user experience becomes an inseparable part of overall customer experience, companies would be wise to recognize that for all their value, data and quantitative analyses can still be ambiguous, inconclusive, and misleading.
The qualitative information that comes from talking to real users and customers can provide an invaluable layer of context and understanding that companies need to create the best experiences and identify ways to stay ahead of their competitors.