Online marketing attribution attracts a lot of discussion, as it should. But the industry would be well served by having a more complete discussion about attribution so analysts and marketers fully understand the benefits and pitfalls of various approaches to attribution and set themselves up for success in the future.
Marketing attribution, at the core, is about answering this question: what is the optimal mix of messaging that moves people from discovery through to end goal completion most efficiently?
What’s the problem attribution is trying to solve?
Two concepts worth considering when thinking through attribution measurement models are accuracy and precision:
- High accuracy, but low precision
- High precision, but low accuracy
Most management people I have worked with prefer the consistent, repeatable outcome delivered by leaning towards precision: every time I put X into marketing, I get Y return / profit.
In contrast, one might think they have accurate knowledge of a process, but each time X is put in, the result can be T, Q, or 6 - with no ability to forecast which outcome will occur. If you are looking to build credibility and consistency in your analytical practice, I'd lean towards precision over accuracy; most management people really do not care about the details of how you got there anyway.
What they are financing is the outcome (return) and prefer consistent results to having the details of an interaction model that is unreliable.
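To make the precision-versus-accuracy trade-off concrete, here is a minimal sketch comparing two hypothetical measurement models of the same (invented) true marketing return. All the numbers are fabricated for illustration; the point is only that a tight, repeatable reading can still be biased, while an unbiased reading can be all over the map.

```python
# Illustrative sketch of the accuracy-vs-precision trade-off using two
# hypothetical measurement models of the same true marketing return.
# All numbers here are invented for demonstration.
import random

random.seed(42)
TRUE_RETURN = 100.0  # the (unknowable) true return on marketing spend

# Model A: precise but inaccurate -- consistently reports ~85, tight spread.
model_a = [85.0 + random.gauss(0, 1) for _ in range(1000)]

# Model B: accurate but imprecise -- centered on 100, wide spread.
model_b = [TRUE_RETURN + random.gauss(0, 30) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Model A misses the true value but is highly repeatable;
# Model B averages out correctly, but any single reading is unreliable.
print(f"Model A: mean={mean(model_a):.1f}, spread={stdev(model_a):.1f}")
print(f"Model B: mean={mean(model_b):.1f}, spread={stdev(model_b):.1f}")
```

Model A is the "every time I put X in, I get Y out" situation management prefers, even though it never reports the true value; Model B is "accurate" on average but useless for forecasting any single outcome.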
Enter the media mix model
Media mix models are the proper and mathematical way to assign the value of marketing touches and deliver answers to the question “how much should I spend on each media to optimize marketing value creation?”
Yes, they are expensive to build and require significant corporate stamina to execute. But mix models are the gold standard for answering the attribution question. Here’s a few interesting things about media mix models worth knowing and thinking about:
- Mix models generally do not consider touches "first" or "last" or assume any media sequence at all. The model realizes people will move in and out and across different media types and exposure situations that will affect people differently, depending on who they are.
- Mix models assume that the same piece of media and any impression or touch can perform different jobs, creating Awareness, Intent, Desire, or Action (the AIDA model) even though specific media are often better at some of these tasks than others. What’s important in a mix model is the way all the marketing touches react with and reinforce each other. This approach is very representative of what actually happens to humans in the real world of media and marketing.
The output of a mix model usually specifies the “weights” of each media in the mix that will achieve maximum impact, based on a series of controlled tests where specific media types are added or removed. These models are quite precise in terms of predicting outcome. But they never attempt to be accurate if you are looking for “how” the outcome occurs; the model is not about what each piece of media does by itself or how the media interact, it’s about X input reliably delivers Y output.
Most companies are not ready to go the media mix model route, so cheaper and faster alternatives are desired.
Here’s a way to get a feel for mix issues: when launching a new product or line, initiate one fully measured marketing channel at a time and “let it run” before introducing another channel, then another, etc. Measure the performance of each channel unto itself and the performance of the different mixes against each other.
When a new channel is added how does the performance of the new mix change? This approach does not create a mathematical model or a scientific measurement, but if you measure carefully it can give you a very solid "feeling" for how different media interact and produce results for a specific product or line. Of course you'll need enough corporate stamina to do this.
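The staggered-launch readout above can be kept as simple as a running log. This sketch uses invented periods, channel mixes, and conversion counts; the only idea it demonstrates is comparing each new mix against the previous one.

```python
# Rough sketch of the "one channel at a time" launch readout. The
# periods, channel mixes, and conversion counts are all invented.
launch_log = [
    # (active channels, visitors, conversions) per measurement period
    (("email",), 10_000, 150),
    (("email", "search"), 10_000, 310),
    (("email", "search", "display"), 10_000, 520),
]

prev_rate = None
for channels, visitors, conversions in launch_log:
    rate = conversions / visitors
    if prev_rate is None:
        print(f"{' + '.join(channels)}: {rate:.2%}")
    else:
        # The incremental change is a rough read on how the newly
        # added channel interacts with the existing mix.
        print(f"{' + '.join(channels)}: {rate:.2%} "
              f"(change vs previous mix: {rate - prev_rate:+.2%})")
    prev_rate = rate
```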
We all know last click attribution is just a partial story, which is why vendors have put so much effort into supporting movement up the source chain. This frequently results in what I would call “sequential attribution”: a chain of marketing touches or other events of different kinds that happen first, second, third and so on before end value is created.
When people see these sequences of events, their reaction is to try to assign values to each step in the chain. In fact, analytics vendors offer different models for allocating value creation to the different steps, and the ability to compare the different outcomes.
This is all very interesting and certainly worth looking at; sequential attribution provides great fodder for speculation on what might be going on in the value creation process. But a list of events is not a measurement, and it’s difficult to take authoritative action on these touch chains for several reasons:
- Touch sequence often does not include touches through multiple browsers, computers and devices, so the “online chain” itself is not representative of reality. Add in offline media and it’s a real mess in terms of actual touches.
- There is no ability to measure the actual contribution of each touch, which just begs people to start jumping to conclusions about step contribution and engaging in often-political weighting schemes. This is a really bad idea because it builds faux confidence, opens the door for damaging business decisions, and could eventually lead to lack of faith in the analytical team.
In fact, I have seen the vendors caution people against jumping to these sorts of value-contribution conclusions. This Econsultancy whitepaper Marketing Attribution: Valuing the Customer Journey has more info on the current practices in this area.
Here’s an example of these human (as opposed to mathematical) weighting schemes going badly. I’ve never seen one of these weighting schemes assign a zero or negative value to a touch. How can that be? Controlled testing of campaigns often shows it’s possible for a campaign to contribute zero or negative net value to marketing outcome. But when there are people with budgets in the cross hairs sitting around a table? All media always generates net positive value, right?
In other words, sequential attribution models can’t be accurate, even though the visual representation of them surely implies they are. And they can’t be precise either, because humans are randomly assigning values to the steps based on speculation, gut feel, and political considerations rather than the mathematics of marketing lift measurement.
Using the attribution sequence to justify or tweak current media budgets quarter after quarter with virtually the same outcome each time is not “proof” of anything. This is not a model and the result of this exercise does not optimize media spend for value creation.
In fact, this approach to attribution sounds a lot like the fractional allocation discussion between online and offline back in 2007.
Here’s what can be done with sequential attribution: people can learn quite a bit about pieces of the story and create tests that might prove or disprove any theories. But I'd suggest this activity use a different thought process than the "budget games" above, one with more concrete (precise) goals and outcomes.
Instead of trying to figure out what each step contributes to final outcome, focus on the ability of a particular media type to move people to the next step in a sequence. Why? Because while you cannot predict or control a sequence, you can influence the interactions of two media types. If certain patterns in the sequences are detected, and clues are found on how different media tend to interact with each other for your company or product line, attempts can be made to boost the performance of these interactions.
For example, let’s say when looking at sequential attribution chains, among the highest volume value creation interactions, you find a common pattern in many different touch sequences that looks like this:
Display > Social
Now, we don’t know why or how this happens or necessarily the value of it, but if this sequence keeps occurring in multiple situations generating high value, it’s pretty safe to assume it is important.
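Finding these recurring patterns can be as simple as counting adjacent touch pairs across all the chains you have. A minimal sketch, using invented chains:

```python
# Sketch of mining touch chains for recurring adjacent-pair patterns
# like "Display > Social". The chains below are invented examples.
from collections import Counter

chains = [
    ["display", "social", "search"],
    ["display", "social", "email", "search"],
    ["email", "display", "social"],
    ["search", "email"],
]

pairs = Counter()
for chain in chains:
    # Count each adjacent (touch, next touch) pair across all chains.
    pairs.update(zip(chain, chain[1:]))

# The most common pairs are candidate interactions worth investigating.
for (a, b), n in pairs.most_common(3):
    print(f"{a} > {b}: {n}")
```

In this made-up data, `display > social` surfaces as the dominant pair, which is exactly the kind of clue the rest of this section is about.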
So, instead of spending your time arguing about what these two touches might contribute to final value – a concept you can’t determine without proper controlled testing / mix models – start by asking yourself why this interaction is happening, looking for hard evidence of this interaction that might expose more details, and using this information to amplify the natural effect that is already taking place.
Ask yourself, what is it about our display campaign(s) that might create social interaction? If this sequence is desirable because it leads to positive outcomes, how can we amplify it? If the ads do not currently promote a social component, yet people are following this touch sequence somehow, perhaps you should test an intentional social-driving component in the display ads? Maybe create a new “social display” campaign, and see if it drives more value creation than the previous campaigns did?
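If you run that test, read the result with some statistical discipline rather than eyeballing it. Here is a hedged sketch using a two-proportion z-test, one reasonable way to judge whether a hypothetical "social display" creative moves more people into the social touch than the current creative; all counts are invented.

```python
# Sketch of testing whether a hypothetical "social display" creative
# moves more people into the Display > Social sequence than the current
# creative. Counts are invented; a two-proportion z-test is one
# reasonable way to read the result.
from math import sqrt

# (visitors exposed, visitors who went on to a social touch)
control = (20_000, 400)   # current display creative
variant = (20_000, 520)   # display creative with a social call-to-action

def two_proportion_z(a, b):
    (n1, x1), (n2, x2) = a, b
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = two_proportion_z(control, variant)
# |z| > 1.96 is the usual 95% threshold for calling the lift real.
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```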
Each media / channel should drive, and be measured by, movement to the next step in the sequence, not to the end goal. The level of precision you can achieve with this approach is much better than guessing at step values in a sequence, which lacks both precision and accuracy.
Not ready for this level of complexity? Try this next idea.
First click - with a goal / value twist
Here’s something many people do not know: first click segments are reliably predictive of value creation over time. Note the phrase “over time”, meaning heading this way requires a change in the marketing value definition for the attribution efforts from “single conversion” to “value at 3 months” or “value at 6 months”. Said another way, the visitor or customer coming back and repeating the value creation event again and again has to be meaningful to the business. In most business models, this repeat action is desirable.
And yes, first impressions count – a lot. If some mental justification is needed for why this scenario might be true (other than simply testing it), think about it this way: initial expectations created by the first interaction continue to persist throughout the customer journey. The closer the match between expectations and remaining customer experience, the more likely the customer will proceed and continue the relationship in the future. Different types of marketing effort attract different audiences, create different expectations, and result in different values over time.
The outcome of this attribution model is very consistent and creates a durable tie between campaign spend and business value generated. However, this approach requires ignoring potential value contributed by the other media “in the middle”, between first touch and outcome. In other words, this approach is quite precise (which appeals to management) though it lacks accuracy because the “complete story”, the contribution of all marketing touches, is never told.
Not a perfect world, but it’s much better than sitting around a table and guessing – at least the result is a consistent value that can be acted on. The fact there is some value not being measured is known, and as with the sequential attribution example above, one can test different levels of other types of media to see how first click volume or value could be improved, optimizing movement through to end goal.
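In data terms, the model is simple: record each customer's first-touch channel, then credit that channel with all the value the customer creates inside the chosen window. A minimal sketch with invented customers, dates, and order values, using a roughly six-month window:

```python
# Sketch of first-click attribution with a value-over-time twist:
# credit each customer's first-touch channel with all the value they
# create inside a ~6-month window. Customers, dates, and values are
# invented for illustration.
from datetime import date, timedelta
from collections import defaultdict

WINDOW = timedelta(days=182)  # roughly 6 months

# customer -> (first-touch channel, first-touch date)
first_touch = {
    "c1": ("search", date(2024, 1, 5)),
    "c2": ("display", date(2024, 1, 9)),
}

# (customer, order date, order value)
orders = [
    ("c1", date(2024, 1, 6), 50.0),
    ("c1", date(2024, 4, 2), 80.0),   # repeat purchase, inside window
    ("c1", date(2024, 9, 1), 60.0),   # outside the 6-month window
    ("c2", date(2024, 1, 10), 40.0),
]

value_by_channel = defaultdict(float)
for customer, when, value in orders:
    channel, start = first_touch[customer]
    if start <= when <= start + WINDOW:
        value_by_channel[channel] += value

print(dict(value_by_channel))  # search gets 130.0, display gets 40.0
```

Note how the repeat purchase inside the window accrues to the first-touch channel, which is exactly the "value at 6 months" definition described above.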
The downside of this approach is the value over time component. Depending on the tools available, it may be difficult to tie the first click to customer value over time, though this is getting much easier as the tools get better and new innovations emerge.
For example, the email vendor Listrak is building customer value profiles for commerce clients using tagged checkout pages, which eliminates the need for determining customer value using back end operations data. This same data also enables the customizing of communications based on where the customer is in their lifecycle.
Can’t go this route? Many have found first click is also predictive of initial goal achievement, and that result certainly lines up with the experience of first click as predictive of value creation over time. So if you can’t get to value over time as in the example above, you could stick with the immediate goal conversion as value and still be better off than using last click. However, this method is not as precise as using value over time as the goal; there will be larger fluctuations in the predicted outcome.
Failing this approach, lots of people still get along fine with…
Last click – trusted and reliable
It’s well known last click attribution is a partial story. The upside of this approach is reliable, easy-to-implement measurement of cause and effect. It’s no wonder this approach is still so popular; it’s both accurate and precise within the definition of the measurement.
Meaning, there’s an accurate link between last click and action, and the outcome is very precise over time. Sure, lots of touches happened prior to the last touch, but by definition, those touches are not included in this model, so they’re not a concern.
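The mechanics are as simple as attribution gets, which is a big part of the appeal. A minimal sketch with invented chains and order values:

```python
# Minimal sketch of last-click attribution: all order value goes to the
# final touch before conversion. Chains and values are invented.
from collections import defaultdict

conversions = [
    (["display", "social", "search"], 120.0),
    (["email", "search"], 80.0),
    (["social"], 40.0),
]

credit = defaultdict(float)
for chain, value in conversions:
    credit[chain[-1]] += value  # by definition, only the last touch counts

print(dict(credit))  # search: 200.0, social: 40.0
```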
Since we’re at the bottom of the attribution stack, there are not a lot of alternatives here other than what I suspect people are already looking at. You can’t really improve the practice of attribution at this level because it’s already pretty darn good. But you can borrow from the ideas outlined above. It’s perfectly acceptable to admit you can’t measure the contribution of other media to last click, but you can test more ways to drive people into a last click scenario.
For example, let’s say when you look at your sequential attribution mapping you see some common patterns covering the interaction previous to a specific last click touch. As in the sequential attribution example above, you can try modifying your approach in these media and testing whether you can bring people into the last click scenario more efficiently or effectively. Then let the last click model take the Awareness, Intent, and Desire you have created in other media and turn it into action – with accurate and precise attribution to source of this last action.
There has been a lot of excellent work done by the analytics vendors in the area of online marketing attribution, and the data they now provide opens up tons of exciting new territory to be explored. If you’re not evaluating these new insights, you are surely missing out on developing promising ideas on how to optimize media spend for value creation.
At the same time, we need to be prudent about how this data is interpreted and used. I’m pretty sure when the analytics vendors are ready, they will start providing the capability to actually measure the contribution to value creation of the different pieces in a media mix. Until they are, let’s just work with exactly what the vendors have provided and avoid assigning values to events / touches that so far cannot be properly measured for contribution to end goal.
Editor’s note: If you’d like to know more about this topic, James Novo will be speaking at the upcoming eMetrics Summit in Boston October 2nd, and would be glad to answer questions as part of the session or at any point you run into each other during the show.