How We Successfully Proved Advertising Effectiveness

Finally. You’ve pulled the trigger, the CFO is convinced (or blindfolded), and the media planning has started. The first above the line campaign for your scaleup is almost underway. But now for the hard work: making sure you’ll be able to prove advertising effectiveness.

Whether you want to achieve clear and non-negotiable targets or whether you just want to find enough confidence to continue investing in this type of advertising, analysing effectiveness is not easy. At least not if you want to get it right the first time.

For our first above the line campaign we did manage to prove its effectiveness, and in this article, I’ll share exactly how we did it.

What can you expect?

Any successful analysis starts with clear expectations. But don’t get overly scientific about it. Just keep it simple and pragmatic.

We had bought 3.5 weeks of radio advertising across three channels operating nationwide, plus a thousand bus shelters for one of those weeks, and we simultaneously started online advertising across social media and display.

So I simply asked myself, what kinds of effects could I possibly see from this combined investment, and what metrics could we look into to find out more?

ONLINE SEARCH BEHAVIOUR

The first and perhaps most obvious thing that came to mind was an increased interest in our brand, which would translate to an increase in branded searches and possibly direct traffic.

Additionally and most importantly, I wanted to see an increased interest in our product category. Increasing category demand was the primary objective for this campaign and the need to grow demand was the direct reason we had even begun to consider above the line advertising. So if we’d be successful, we should expect to see an increase in searches for keywords specific to our category, like ‘campervan hire’.

WEBSITE BEHAVIOUR

In addition to search effects, I expected that the use of radio and out of home advertising would make our online advertising channels more effective. Specifically, I anticipated higher click-through rates for social and search advertising, resulting in additional traffic from these existing channels, as well as lower costs per acquisition for search advertising.

I also expected an increase in the number of marketing qualified visitors (MQVs), a metric we have defined based on fairly in-depth data analysis. Basically, there are three signals in on-site behaviour that indicate an above-average chance to convert. Whenever a user displays such behaviour, they are an MQV. So in essence, increasing the number of MQVs today means increasing the number of conversions in the days or weeks to come.
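To make the idea concrete, here is a minimal sketch of how such a flag could be computed. The signal names are made up for illustration; the article doesn't disclose the actual three criteria.

```python
# Hypothetical MQV check: a visitor qualifies when they display at least one
# of three high-intent behaviour signals. The signal names below are invented
# for illustration; the real criteria came from the data analysis mentioned above.
SIGNALS = ("viewed_pricing", "used_date_picker", "returned_within_week")

def is_mqv(visitor: dict) -> bool:
    """True when the visitor shows at least one high-intent signal."""
    return any(visitor.get(signal, False) for signal in SIGNALS)

def mqv_share(visitors: list[dict]) -> float:
    """Share of visitors that qualify as marketing qualified visitors."""
    return sum(is_mqv(v) for v in visitors) / len(visitors)
```

Tracking `mqv_share` over time then gives you a leading indicator that moves days or weeks before conversions do.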

AND LAST BUT NOT LEAST: CONVERSIONS

Finally, I couldn’t help myself from also writing down additional conversions as an expected result of the campaign. Of course, radio and out of home are first and foremost an investment in visibility. But if I was convinced I would see more quality traffic, how could that not result in extra conversions?

So as you can see, my expectations touched on the entire funnel. From search behaviour to website behaviour, and even down to revenue. By looking at all these metrics across the entire funnel, I would be able to truly understand the extent of the campaign’s effectiveness.

RESULTS ANALYSIS

So with these expectations noted down, now came the moment of truth: analysing the results. Fortunately, it started off simply, with online search behaviour. We could pull branded search data directly from Google Search Console, and the same goes for category keywords. Since we hold the top organic positions for the commercial keywords, we have essentially a 100% impression share. That means observed search volumes are effectively equivalent to market volumes.

I compared search behaviour during the campaign period with data from the same number of days right before the campaign. Given how important seasonality is for our business (we sell a travel product), I also made the same comparison for the previous year.

The uplift in search behaviour for category keywords was quite significant, which was great news. There was also an uplift for branded search, but that one wasn’t all too impressive to be honest.

The increase in impressions for both types of keywords was statistically significantly bigger this year when compared to last year. That is most definitely a good sign. Though, technically, it doesn’t prove advertising effectiveness just yet. But more on that later.

Testing for statistical significance wasn’t entirely straightforward, because technically we weren’t analysing the difference between two data points, but the difference between two observed differences. In cases like these, it helps when you happen to have a statistical genius on your team (<3 Levon).
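For what it’s worth, one simple way to sketch such a test: if you treat impression counts as roughly Poisson-distributed, the variance of each count is approximately the count itself, and the variances of the four periods add up. This is an assumption-laden approximation, not our actual method, and the numbers below are illustrative.

```python
import math

def diff_of_diffs_z(before_ly, during_ly, before_ty, during_ty):
    """Approximate z-score for whether this year's before-to-during uplift
    in impressions exceeds last year's. Assumes Poisson-ish counts, so
    Var(count) ~ count, and independent variances add."""
    uplift_ly = during_ly - before_ly
    uplift_ty = during_ty - before_ty
    variance = before_ly + during_ly + before_ty + during_ty
    return (uplift_ty - uplift_ly) / math.sqrt(variance)

# Illustrative counts, not our real data:
z = diff_of_diffs_z(before_ly=9_000, during_ly=10_000,
                    before_ty=12_000, during_ty=16_000)
# |z| > 1.96 would indicate significance at the usual 5% level
```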

As for the increase in brand interest, besides brand searches we also looked at direct traffic. While we did see strong growth numbers compared to the previous year, they actually followed the exact same pattern as last year. So there was no hint of an additional effect of the campaign. After all, we are a scaleup, so growth is the norm. Any growth that follows from this additional investment should be extra compared to what’s already forecasted. And that clearly wasn’t the case.

MORE EFFECTIVE ONLINE ADVERTISING

To see if the CTR of search and social advertising increased due to the use of radio and outdoor advertising, we looked into two things.

For search ads, the question was whether the CTR during the campaign period was higher than before the campaign, and how long the effect might have lasted. Unfortunately, we weren’t able to draw any conclusions about this because we had made some other changes to our advertising setup during the campaign period which also affected the CTR.

However, we did see a clear effect in the cost per conversion during the campaign period. Of course in the best case scenario, you’d want to see the cost per conversion go down as a result of your extra investment in visibility. And that’s exactly what happened.

The cost per conversion dipped by about 30% and then shot back up again after the campaign.

We then compared the campaign performance with last year and the year before. We found that our cost per conversion in the campaign period is usually 40% to as much as 150% higher than the overall annual average, but this year it was 12% lower than the annual average. Needless to say, this made me and a bunch of others very happy.

For social advertising, we had set up a collection of ads specifically for this campaign. That meant we couldn’t make a before vs during performance comparison.

The social campaign did give us the opportunity to test the effectiveness of our out of home advertising. So we made two duplicates of the campaign and ran one set of ads in the areas where the outdoor ads were up, and then another set in the rest of the country.

This way, we of course weren’t measuring a general campaign effect but specifically the effect of the out of home ads. It turned out that the CTR on our first social advertising touch points was 29% higher in areas where we had the outdoor ads running. This difference was highly significant, and we saw a similar effect for search advertising as well. That was great news.
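A geo-split like this boils down to a standard two-proportion z-test on CTR between the two regions. A minimal sketch, with made-up click and impression counts chosen to mirror the 29% relative difference:

```python
import math

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Z-score for the difference in CTR between two independent ad groups,
    using the pooled proportion for the standard error."""
    ctr_a = clicks_a / imps_a
    ctr_b = clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    return (ctr_a - ctr_b) / se

# Made-up numbers: regions with outdoor ads vs the rest of the country
z = two_proportion_z(clicks_a=6_450, imps_a=500_000,   # CTR 1.29%
                     clicks_b=5_000, imps_b=500_000)   # CTR 1.00%
```

With impression volumes in this range, even a sub-percentage-point CTR difference comes out as highly significant.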

This nifty geographical split for social advertising is a good example of how you need to make sure your online advertising setup will actually allow you to draw solid conclusions afterwards. And make sure not to do anything during the campaign that could mess with the results, like we did for search advertising.

TRAFFIC QUALITY

To find out whether we had indeed seen more high quality traffic on our site during the campaign, we analysed the share of marketing qualified visitors (MQV). Specifically, we checked which percentage of visitors with explicit product interest met at least one of the MQV criteria before, during, and after the campaign.

The good news is that this percentage increased by a lot during the campaign period. We had made sure that advertising on radio, outdoor, and online went live at different times. And of course the radio and outdoor campaigns also had a clear end date. This made it very easy to look into the effect of these channels individually. We were happy to see that all three clearly had their own impact, at least when looking at the share of MQV metric.

When we looked at how the share of MQVs developed after the campaign, we found something interesting. After the share of MQVs increased by more than 10 percentage points during the campaign period, it remained stable at this higher level when the radio and outdoor campaigns were over. That’s a great thing obviously, but an effect like this would be highly unexpected.

So after taking a closer look, I saw that there was already an upward trend happening before the campaign, albeit a very modest one. So, my take is that the campaign significantly amplified an already existing seasonal effect. In other words, even without the campaign, the share of high quality website visitors would likely have increased anyway this time of year, but very probably less steeply.

CONVERSION ANALYSIS

So far we’ve seen a whole lot of positive effects already, so imagine how excited I was to look into bottom line effects. Now again, I was never going to judge this investment in visibility solely based on bottom line results and it was by no means the primary metric to look at. But, to really understand the campaign’s value, you want to go over the entire funnel. And with the additional quality traffic, it seemed likely that we might just find a sales effect too.

For this analysis, we did a side by side comparison of the revenue trend lines from 2019 to 2023. It immediately became clear that something funny was going on. For the past three years, revenue started to decline in exactly the same week, just before summer. That makes sense, as by far most summer bookings have already come in by this point.

Now, this specific week happens to be the one immediately following our radio and outdoor advertising. And this year, instead of a sharp decline, we saw a significant increase in revenue for this week. The decline did come eventually, but no less than three weeks later than usual. This means we saw three weeks worth of additional and unexpected revenue. This is something we haven’t seen in any of these previous years.

Now it would be too easy to claim that this bottom line effect is entirely and solely attributable to the campaign. There are many factors that could potentially have influenced booking behaviour, quite a few of which we don’t even have control over. But at the same time, I’m certain the campaign did in fact contribute to this, even if we’ll never know exactly how much.


WHAT STORY DOES THE ANALYSIS TELL?

And perhaps this is the most important part of your analysis: don’t interpret the outcomes too optimistically. If anything, you want to be a bit conservative. Because if you want a compelling case for the CFO or your investors, you don’t want to come across as though you were just looking for confirmation of your own suspicions. Make sure to tell an honest story, and always include the learnings, the disappointments, and the questions that have been left unanswered.

Also, you need to understand that pointing to just any growth effect isn’t enough. As a scaleup, year over year growth is the default. So when you analyse your above the line campaign you’re looking for growth that wouldn't have happened without the campaign.

It’s vital you keep this in mind when setting up the campaign and when buying your media. So for instance, make sure that the different advertising channels you end up using don't go live or end simultaneously. Otherwise you won’t be able to attribute the effect of each individual channel. Perhaps it works best if you see the campaign as one big A/B test. You need to be able to prove the incremental value of the campaign.

Afterwards, it’s almost like you’re a detective searching for clues. For example, if you see a CTR that increased from 1.1% to 1.21%, that might not seem all too impressive at first glance. But you need to realise that this new CTR means you’re getting 10% more visitors to your website. Now if you also see an increase in interest at the top of the funnel, all these little effects start to seriously add up. Or rather, they multiply, but you get the point.
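The arithmetic behind that “they multiply” remark, using the CTR numbers from the text; the other two uplifts are illustrative assumptions, not measured values:

```python
# CTR from the example in the text
base_ctr, new_ctr = 0.011, 0.0121
visitor_uplift = new_ctr / base_ctr - 1      # 10% more visitors per impression

# Funnel effects compound multiplicatively. The other two factors here
# are illustrative, not measured:
impressions = 1.15   # e.g. 15% more top-of-funnel searches
ctr = 1 + visitor_uplift
conversion = 1.05    # e.g. 5% better on-site conversion
combined = impressions * ctr * conversion - 1
print(f"combined uplift: {combined:.1%}")    # roughly +33%, not 15+10+5 = 30%
```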

CONCLUSION: WAS THE CAMPAIGN SUCCESSFUL?

So, let’s recap. What do we make of our first above the line campaign?

We noted the following effects:

  • An increase in brand interest that wasn’t all too impressive.

  • An increase in category interest that was actually most important for us. This is the most meaningful result and gives us the confidence to continue investing in these types of campaigns.

  • More effective online advertising, very probably thanks to the use of radio and outdoor.

  • An increase in marketing qualified visitors that can be seen as a predictor for additional future sales.

  • A revenue trend line that we haven't seen before, bringing in additional revenue that I think should at least in part be attributed to the campaign.

There’s no way for me to know for any of these effects exactly how big of a role the campaign played versus any other external factors. However, the fact that so many different metrics across the entire funnel show such positive signs leads me to conclude that the campaign must have had a pretty serious positive effect on our business.

In the end, the main goal of this analysis was to answer one question: did we find enough confidence to continue investing in this type of advertising? The answer is yes. And we will.

The short version for sleepyheads

  • Planning an above the line campaign? Make sure to write down your expectations beforehand.

  • Make sure to set up your media and ad campaigns in a way that really allows you to draw conclusions afterwards.

  • You need to be able to prove the incremental effect of the campaign.

  • It’s not about just any growth, but growth that wouldn’t have happened without the campaign.

  • Take the time to conduct proper analysis and involve your brightest team mates, and at least one who can test for statistical significance or who can show you how to do it yourself.

  • Don’t forget your CFO if they’re still blindfolded in a closet somewhere.

A 3-PART SERIES ON ABOVE THE LINE CAMPAIGNS

This is the last article in a three-part series about above the line advertising. Keep reading:

Part 1: Why would you consider above the line advertising?

Part 2: How to set up your first above the line campaign
