The remarkable impact of Smart Bidding with a Google Analytics attribution model

When you think of attraction tickets (in the UK, at least), you probably think of either theme parks such as Alton Towers, or tourist attractions like the Tower of London or Madame Tussauds. Because the average ticket value of these is relatively low, the benefit of shopping around for the best price can be relatively small.
When I started to manage the PPC for a major UK attraction tickets company around 10 years ago, it became apparent quite quickly that whilst this may be true for many ‘local’ attractions, when it came to international theme parks (particularly those in Florida), the buying process was very different. With average order values around £1,500 – £2,000, it was relatively unusual for somebody to simply visit the site and make a purchase.
Instead, they would visit the site repeatedly (and the higher the average order value, the more times they’d visit on average before purchasing). Further analysis showed that what people were searching for changed significantly as well. Whilst it varied from customer to customer, it was certainly commonplace to see people start by searching for something like ‘orlando theme park tickets’ before refining their search to something more specific like ‘disneyworld orlando tickets’ (and possibly a few other park searches), before finally searching for the website’s name and making a purchase.
Under a ‘last click’ model, the sale would simply be attributed to the Brand term, making it appear that the Non-Brand terms weren’t really driving many sales. This was especially true for the more generic terms: somebody searching for ‘orlando theme park tickets’ doesn’t yet know which parks they intend to visit – they certainly aren’t ready to make a purchase worth thousands of pounds.
At the time, the simple solution to this was to build an attribution model within Google Analytics, which assigned part of each sale to each visit. This isn’t all that ground-breaking, and the results were largely as expected – the attribution model reported lower revenue on Brand terms, higher revenue on ‘specific attraction’ terms, and even higher revenue on non-specific terms.
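The idea of spreading a sale across every visit in the journey can be shown with a minimal sketch. This assumes a simple linear (equal-share) split – the actual weighting used in the GA model isn’t described here, and the channel labels are made up for illustration:

```python
from collections import Counter

def attribute_sale(path, revenue):
    """Split one sale's revenue evenly across every visit in its path.

    A linear model is assumed here purely for illustration; a real
    attribution model might weight positions in the path differently.
    """
    share = revenue / len(path)
    credit = Counter()
    for channel in path:
        credit[channel] += share
    return dict(credit)

# A hypothetical £1,800 Orlando sale with two Non-Brand visits and one Brand visit:
journey = ["non-brand ppc", "non-brand ppc", "brand ppc"]
print(attribute_sale(journey, 1800.0))
# → {'non-brand ppc': 1200.0, 'brand ppc': 600.0}
```

Under last click, the Brand visit would have received the full £1,800; the model instead credits most of the sale to the Non-Brand visits that started the journey.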
We used this (in combination with their margin data) to set bids manually in Google Ads, and the impact on performance was very positive – over the next five years, PPC profit increased by a factor of 5:

For years, we continued to refine the model and the bids (along with other optimisations, obviously), but then Google introduced Smart Bidding. On most accounts where we tested Smart Bidding, we found that the performance of the account was significantly stronger than with manual bidding (even when these were being managed very effectively). But the problem was that Smart Bidding could only use sales data it could see within Google Ads. And whilst you can easily pull GA data into Google Ads, you can only pull in last (non-direct) click data, rather than the sales from an attribution model.
We looked at various tech solutions for this, but found that none of them achieved what we needed to, and so we had to look at alternatives.
Within Google Ads, there is also the option to use models other than a last click model – the most commonly used being DDA (Data-Driven Attribution). This works a little differently to the model we’d been using in Google Analytics, but the output is largely similar, in that it assigns part of the sale to each touchpoint the customer makes with the site.
In some ways, it’s more advanced, as it estimates this impact dynamically, by comparing paths including the touchpoint to similar paths without it, to determine how important that visit was in the journey. However, as it uses Google Ads data, it can only see Google Ads visits to determine this, whereas the GA model looked at all touchpoints.
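The ‘paths with the touchpoint versus similar paths without it’ comparison can be pictured as a removal-effect calculation. This is a simplified sketch of the general idea behind data-driven models, not Google’s actual algorithm, and the journey data is invented:

```python
def removal_effect(paths, touchpoint):
    """Estimate a touchpoint's importance by comparing the overall
    conversion rate with and without the paths that include it.

    paths: list of (touchpoint_list, converted_flag) tuples.
    """
    base_rate = sum(conv for _, conv in paths) / len(paths)
    without = [(p, conv) for p, conv in paths if touchpoint not in p]
    rate_without = sum(conv for _, conv in without) / len(without) if without else 0.0
    # The bigger the drop when the touchpoint is excluded,
    # the more conversion credit it would earn.
    return base_rate - rate_without

journeys = [
    (["generic ppc", "brand ppc"], 1),
    (["generic ppc", "brand ppc"], 1),
    (["brand ppc"], 1),
    (["brand ppc"], 0),
]
print(removal_effect(journeys, "generic ppc"))  # 0.75 - 0.5 = 0.25
```

Note that, as the article says, a real DDA model in Google Ads only sees Google Ads touchpoints in these paths, whereas the GA model could include organic, email and other visits.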
Given the different approaches used by the two models, and the inability of Google Ads to see touchpoints from other sources, it wasn’t certain how strongly correlated these would be. And the client was (understandably) adamant that Google Ads not claim total responsibility for a sale that was partially (or largely) driven by another medium.
So the question was whether optimising to the ‘true’ objective (in the client’s eyes) of the attribution model in GA using manual bidding would deliver better or worse performance than using Smart Bidding with a different revenue metric.
Testing this would be difficult – there was no way to split-test the campaigns and optimise them to different metrics. Even if we could, if one model rewarded lower-funnel visits and the other was more biased towards higher-funnel visits, then given the length of the purchasing journey (often many months), the test would need to run for a very long time. Around 10% of all Orlando ticket sales involved a recorded path length of 10+ visits!
Similarly, the nature of the market meant that we couldn’t just switch it over and see what happened – because price competitiveness changed over time, not to mention visitor behaviour, the fluctuations in performance over time due to other factors would be too great.
The obvious solution was to compare the metrics – and see how closely related the DDA revenue per click was to the attribution model figure.
To be clear, it didn’t matter whether the figures themselves were similar. We already knew that the DDA model would give a very different figure in general as it assigned the whole sale to Google Ads touchpoints. We could deal with this by adjusting the Target ROAS accordingly.
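The Target ROAS adjustment mentioned above reduces to one line of arithmetic. The helper name and the figures below are hypothetical, but the logic follows directly from the article: if DDA reports revenue at a roughly constant ratio to the GA model, scaling the target by that ratio makes hitting it in Google Ads terms equivalent to hitting the ‘true’ target in GA-model terms:

```python
def adjusted_target_roas(true_target_roas, ga_model_revenue, dda_revenue):
    # If the GA attribution model credits PPC with more revenue than DDA
    # can see, the target entered in Google Ads must be proportionally
    # lower to represent the same real-world objective (and vice versa).
    scale = dda_revenue / ga_model_revenue
    return true_target_roas * scale

# Hypothetical figures: the GA model credits PPC with £120,000, but DDA
# only sees £100,000, and the client's true target ROAS is 6.0:
print(adjusted_target_roas(6.0, 120_000, 100_000))  # → 5.0
```

A constant scale factor only works because the two metrics were found to be strongly correlated – if the ratio varied wildly between campaigns, no single Target ROAS adjustment would be safe.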
What was perhaps less obvious was that DDA would actually report less revenue for Google Ads than the GA model. On the face of it, since it was assigning 100% of the sale to Google Ads, you’d expect the opposite to be true. But the sales and revenue that Google Ads could see came from the last (non-direct) click sales in GA – and this was less than the revenue from the attribution model (even allowing for the model not assigning all of the revenue to Google Ads).
For example, on a last click basis, revenue may have been £100,000 for Google Ads. Feeding this into Google Ads meant that the DDA assigned all of it to Google Ads touchpoints. But the attribution model could (and did) give more than £100,000 to Google Ads touchpoints, as there were many sales that started with a Non-Brand PPC click before converting via an Organic Brand search (for example).

Even though the actual revenue figures are quite different, using different methodologies and based on different touchpoints, the correlation is remarkable. You don’t need to see the R² value to see that they fit very closely to the line (even if you exclude the Brand campaign, which is the top-right point).
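Checking how strongly two revenue-per-click series correlate is straightforward to reproduce. The campaign figures below are made up (the actual data isn’t public); the function is a standard Pearson correlation, and R² is simply its square:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd = (sum((x - mx) ** 2 for x in xs) ** 0.5
          * sum((y - my) ** 2 for y in ys) ** 0.5)
    return cov / sd

# Hypothetical revenue-per-click by campaign under each model:
ga_model_rpc = [1.20, 2.10, 3.40, 4.80, 9.50]
dda_rpc      = [0.95, 1.70, 2.60, 3.90, 7.80]

r = pearson_r(ga_model_rpc, dda_rpc)
print(round(r, 3), round(r ** 2, 3))  # r and R² close to 1 for well-fitted points
```

It’s also worth repeating the analysis with the Brand campaign excluded, as a single extreme point can inflate the correlation on its own.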
Unfortunately, at this point, we ran into a bit of a problem. Google introduced Smart Bidding in 2018, but by the time we’d tested it, found that it was (generally) reliable and offered a significant benefit, and then performed this analysis, we found ourselves looking to switch to Smart Bidding with the DDA model in early 2020. Then Covid happened.
Needless to say, there was a ‘significant’ drop in people buying tickets for Florida theme parks, and the results dropped off. When the USA opened its borders again in late 2021, there was an initial surge as people who had cancelled previous holidays planned new ones, and we held off putting the new model live until early 2022.
So – after all of that, did it work?
Honestly, we’ll never know for certain. A number of competitors in the attraction tickets market disappeared during Covid, which inevitably had a positive impact on profitability once the market picked up again. On the other hand, the cost-of-living crisis tightened purse strings, and the government accidentally crashing the exchange rate didn’t help people looking to buy tickets for USA attractions either.
So we really don’t know what would have happened had we remained with manual bidding. What we do know is that the six most profitable months ever (for PPC at least) and nine of the top ten came in 2022.

To put it another way, 2022 profit was more than double anything they’d seen before. Not all of this can be laid at the feet of Smart Bidding, but there is certainly no evidence that performance was harmed in any way.
Unfortunately, it’s not quite all good news. Smart Bidding doesn’t react very well to sudden changes in performance. Perhaps this is because it is aware of the long paths to conversion, and it needs to wait longer than it would on other accounts before taking action. Perhaps the way that the path length can vary at different times of the year causes issues.
On this latter point, it’s certainly the case that in January a lot of people are planning their holidays, and booking flights and hotels, but not necessarily tickets. At this time of year, the conversion rate is low, but search volumes are very high, with many people at the start of their research phase. In contrast, over the Black Friday period, people aren’t researching with a view to buying later – they are looking to buy whilst there’s a sale on!
Smart Bidding doesn’t react well to these periods, and using seasonal adjustments doesn’t tend to work very well either (you need to know in advance what the impact will be, and for how long). We’ve had to switch back to manual bidding on more than one occasion in order to ensure that the account is as profitable as possible.
This highlights the importance of remembering that Smart Bidding is a tool, not a solution. You still need to monitor the performance, and step in from time to time as required. Smart Bidding works well 90% of the time, but it isn’t a replacement for a skilled analyst.
Whether this approach would work for other businesses with a long path to conversion is hard to say – it’s something that you would need to test before implementing it, certainly.
Of course, this is just one example of where you can use Smart Bidding where you can’t track your actual goal in Google Ads. There are probably many different circumstances where this is the case – the important thing to remember is that you just need the conversions in Google Ads to be strongly correlated to the true objective, and you can potentially see improved results from using it.
Like many things in the modern PPC environment, there are few absolutes – testing and innovating to find ways to get the most out of Google Ads and its many tools is the only way forward. Google is trying to simplify things as much as possible, and it’s becoming more and more challenging to beat the competition through better account management, but as the results for this client show, the benefits of proactive and innovative PPC account management can be remarkable!