
The growing issue of proper attribution

[Image: a chocolate cake with a slice cut out]

Imagine you wanted to make the perfect chocolate cake. Every time you baked it, you could record every possible variable: the type of chocolate, the oven temperature, the time in the oven. You could rate the cake on a scale of 1-10, then try different recipes and score the result each time. With enough combinations, you would be able to build a model that told you the ‘right’ recipe for the perfect cake!
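As a toy sketch of that experiment in Python (every ingredient and score here is invented for illustration), it amounts to a small regression:

```python
# A toy version of the cake experiment: log the variables for each bake,
# score the result, and fit a simple model. All numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row is one bake: [cocoa %, oven temp (C), bake time (min)]
bakes = np.array([
    [60, 170, 35],
    [70, 180, 30],
    [85, 160, 40],
    [70, 175, 32],
])
scores = np.array([6.5, 8.0, 5.5, 8.5])  # taste scores out of 10

model = LinearRegression().fit(bakes, scores)
print(dict(zip(["cocoa", "temp", "time"], model.coef_.round(3))))
# The coefficients hint at how each variable moves the score -- but only
# within the narrow range of recipes actually tried.
```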

But the fact that there are so many chocolate cake options at supermarkets suggests we haven’t settled on the perfect recipe. And, beyond the simple subjectivity of it, you can immediately see why it’s a complex task. Every ingredient affects every other ingredient. Even in a perfectly controlled environment, where you can experiment and measure every detail, understanding the impact of your inputs on the output is challenging.

So accurately attributing business performance to a specific activity, where you can’t control or measure the variables, is an impossible task. Even if you could measure every variable, then deploy the best ‘real-time, AI-driven’ model, it wouldn’t perfectly predict performance tomorrow – what was true yesterday does not determine what happens today (think cultural shifts, new competition, or a pandemic, any of which changes the equation).

If an attribution model were noisy but unbiased (noise being scatter around the bullseye, bias being missing the bullseye altogether), then on average it would still be a helpful decision-making tool. But too many attribution models are both biased and noisy, for many reasons:

  1. The ‘brand building’ challenge. Attribution models can devalue longer-term marketing channels. ‘Brand building’ typically drives small changes in behaviour across a mass audience, and in a market where you are competing for share of voice, your ‘brand building’ activity may be protecting sales rather than driving additional sales. These nuances are hard to pick up in a model, particularly in low-frequency industries, yet we often lean heavily on econometric models without giving sufficient thought to what we are trying to measure and why.
  2. The ‘interaction term’ challenge. The combined impact of two factors, say discounting plus PPC spend, is almost impossible to measure. Firstly, relationships like these are almost always non-linear, which brings its own challenges. Secondly, technology like smart bidding means many factors change at the same time. For example, a 20% discount with heavy PPC spend may not have been effective, but a 30% discount with heavy PPC spend might have pushed people over a threshold. The point is, relationships between channels, customer targeting, creative and content, and products and pricing are more complicated than ever, so they are harder to measure (see the sketch after this list).
  3. The ‘offline-online’ challenge. If you operate both online and offline, two opposing forces distort attribution. Firstly, some people purchase in-store who, if no shop existed, would have purchased online; this means you’re likely to undervalue digital activity, as people complete their purchase in-store. Conversely, the shop acts as an out-of-home placement, creating a halo effect; this means you’re likely to falsely attribute performance to digital activity rather than the shop. It is almost impossible to disentangle these two effects in an attribution model.
  4. The ‘temporal’ challenge. If we hadn’t prompted a purchase today, would the customer simply have purchased tomorrow? This is particularly true when trying to attribute discounting, or indeed any call-to-action. Bringing forward a purchase at a discounted rate is a net loss to the business, but looks like a net gain when reporting on that campaign alone: a customer who would have paid full price tomorrow instead buys at 20% off today, so the campaign claims a sale the business has actually lost margin on.
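To make the ‘interaction term’ challenge concrete, here is a minimal simulation (every number invented): sales only respond when a deep discount coincides with heavy PPC spend, and a model without an interaction term misses much of the picture.

```python
# Invented data where sales respond to discount and PPC spend only in
# combination, and only past a discount threshold.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
discount = rng.uniform(0.0, 0.3, n)   # 0-30% discount
ppc = rng.uniform(0.0, 100.0, n)      # PPC spend, arbitrary units
sales = (100
         + 400 * (discount * ppc / 30) * (discount > 0.2)  # threshold effect
         + rng.normal(0, 10, n))                           # noise

X_main = np.column_stack([discount, ppc])                  # main effects only
X_both = np.column_stack([discount, ppc, discount * ppc])  # + interaction

r2_main = LinearRegression().fit(X_main, sales).score(X_main, sales)
r2_both = LinearRegression().fit(X_both, sales).score(X_both, sales)
print(f"R^2, main effects only:     {r2_main:.2f}")
print(f"R^2, with interaction term: {r2_both:.2f}")
# Even the interaction model misfits the 20% threshold -- and real channel
# relationships are messier still.
```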


So, given these challenges, should we ditch attribution models?

It’s not all doom and gloom! For me, it’s a case of trying to ‘cheat’ the attribution model wherever possible with four quick tricks:

  1. Try to find short-term proxies for long-term value. For example, can you prove that reach and short-term unprompted awareness have a measurable effect on ROI by looking at patterns across the industry?
  2. Can you run smarter experiments? Tracking specific cohorts exposed to specific marketing elements against a holdout can give you a clearer read on the impact of a given combination of factors (sketched after this list).
  3. Can you see what cake worked well for other bakers? We have historic insight from bodies such as the Institute of Practitioners in Advertising (IPA) showing aggregate ad-spend and results. We don’t need to learn from our own mistakes when we can learn from others.
  4. Most importantly, start by getting the data in one place! Once you start plotting variables against each other, you can notice and test correlations without needing a complex econometric model (see the sketch below).
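As a sketch of points 2 and 4 together (the file and column names here are hypothetical stand-ins): once cohorts, spend, discounts and sales sit in one table, a holdout comparison and a correlation scan take only a few lines.

```python
# Simple checks once the data is in one place. The CSV and its column
# names are hypothetical stand-ins for your own unified dataset.
import pandas as pd

df = pd.read_csv("weekly_performance.csv")  # one row per cohort-week

# Point 2: compare a treated cohort against a holdout.
treated = df.loc[df["cohort"] == "treated", "sales"].mean()
holdout = df.loc[df["cohort"] == "holdout", "sales"].mean()
print(f"Estimated lift: {treated - holdout:.1f} ({treated / holdout - 1:.1%})")

# Point 4: eyeball correlations before reaching for econometrics.
print(df[["ppc_spend", "discount", "brand_spend", "sales"]].corr())
```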

It can be tempting to simply deploy an off-the-shelf attribution model that promises to fix all our problems. But we must go back to the basics: understanding the data, and the variables we are trying to optimise.

Featured in PerformanceIN.