Thanks to the rise of social media, there’s a mad race to measure influence and help brands harness it to their advantage. That has created an ecosystem of companies vying to prove that they can most accurately identify the social media users with the most clout.

One of the most prominent players in the space, Klout, is also one of the most controversial.

Klout’s many critics level various charges at the company. Two of the most common:

  • Klout doesn’t really measure influence.
  • The algorithm Klout uses to assign scores to users is opaque.

Hoping to address both of those criticisms, Klout yesterday announced what it considers to be “some of the most significant product updates in Klout’s history.”

A post on the company blog by Klout CEO Joe Fernandez explains, “We have increased the number of social media signals we analyze from less than 100 to more than 400. We have also increased the number of data points we analyze on a daily basis from 1 billion to 12 billion.”

According to Fernandez, “All of this additional data helps us deliver a more accurate Score for everyone on Klout.” In an effort to convince the world that those scores are more accurate, Klout has also published more detailed information about some of the signals its algorithm takes into consideration.

So will Klout’s updates silence critics? Probably not.

And for good reason: Klout says that its goal is to measure both online and offline influence, but the more you know, the less realistic that goal looks. For instance, one of the changes Klout has implemented suggests that Wikipedia pages are an indicator of real-world influence:

We see a Wikipedia entry as a significant indicator of one’s ability to drive action in the real world. We’ve tested this method over the past several months and the updated Scores of recognized world leaders like Barack Obama and Warren Buffett more accurately reflect their real-world influence.

Skeptics would argue that Wikipedia is hardly an accurate indicator of real-world influence, and that citing the updated scores for Obama and Buffett as evidence of Wikipedia’s value as a scoring signal is little more than Klout engaging in form-fitting.

Wikipedia aside, there is good reason to be skeptical about Klout, even with its updates. While some of the criticisms leveled at Klout by its detractors aren’t entirely fair, the company’s biggest problem is simple: you can’t measure influence if your definition of it is oversimplified to the point of being fatally flawed. As previously noted, real studies on influence have demonstrated that it’s much more complex and nuanced than Klout and many of its competitors would have us believe.

Unfortunately for Klout, a quick look at some of its score signals suggests that’s (still) precisely the case. From questionable statements such as “posts to your wall indicate both influence and engagement” to dubious assumptions like “your reported title on LinkedIn is a signal of your real-world influence,” Klout may be trying its best to make use of the data the social mediasphere gives it access to, but that doesn’t mean the data is meaningful.
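
To see why piling on signals doesn’t automatically produce a meaningful measurement, consider a rough sketch of how a weighted-signal score could work. This is purely illustrative: Klout has not published its actual formula, and every signal name and weight below is hypothetical.

```python
# Purely illustrative sketch of a weighted-signal score.
# Klout has not published its formula; signal names and weights are hypothetical.

HYPOTHETICAL_WEIGHTS = {
    "wall_posts_received": 0.4,   # "posts to your wall indicate influence and engagement"
    "linkedin_title_rank": 0.3,   # "your reported title on LinkedIn"
    "has_wikipedia_page": 0.2,    # treated as a real-world influence indicator
    "retweets_received": 0.1,
}

def naive_influence_score(signals: dict) -> float:
    """Weighted sum of normalized signals, scaled to a 1-100 range."""
    raw = sum(HYPOTHETICAL_WEIGHTS.get(name, 0.0) * value
              for name, value in signals.items())
    return max(1.0, min(100.0, raw * 100))

# Adding more signals to a scheme like this just adds more weighted terms;
# it doesn't address whether any individual signal actually reflects influence.
print(naive_influence_score({
    "wall_posts_received": 0.8,
    "linkedin_title_rank": 0.5,
    "has_wikipedia_page": 1.0,
    "retweets_received": 0.3,
}))
```

The point of the sketch is that growing such a model from 100 terms to 400 changes its size, not its validity: if the inputs are weak proxies for influence, the output is too.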

With this in mind, it’s clear that Klout’s move to increase the amount of data it analyzes is far more likely to make Klout look more sophisticated than it is to actually help the company achieve its stated goal of measuring influence. Which means that, whether it uses 100 signals or 100,000 signals, Klout still doesn’t really count.