Controlled experiments and incrementality tests are used in many fields to assess effectiveness and enhance data-driven decision-making. The benefits and challenges of these approaches are explored in this chapter.

  • Controlled experiments and incrementality tests
    • Using controlled experiments and incrementality tests
    • Benefits and challenges of controlled experiments and incrementality tests

Controlled experiments and incrementality tests

Controlled experiments measure the impact of marketing by creating test and control groups, exposing them to different marketing messages, channels, formats or creative, and then tracking their subsequent behaviour, as illustrated in Figure 1.

Figure 1: Test and control groups

A diagram illustrating the use of test and control groups for marketing measurement

Source: Econsultancy

Incrementality tests in marketing isolate the effect of a single variable using a test-and-control methodology. Controlled experiments can be conducted while a campaign is running to track how a particular variable, such as website visits or ad views, affects conversions. They are often used alongside econometrics and attribution modelling to isolate the impact of a marketing campaign and drill down into areas of detail that the models cannot reach; in other words, to measure the true impact, or ‘incrementality’, of a particular marketing activity.

A number of interviewees highlighted the value of adding experiments and incrementality tests to the blend of approaches used to understand how a company’s marketing is performing, and then being able to adjust budgets accordingly.

Go-MMT is one of India’s largest online travel providers and is the parent company of multiple hotel and travel brands. Due to its reliance on last-click attribution, the company was struggling to measure the true impact of its advertising and determine its highest-performing advertising platforms. In order to make data-led budget decisions, the company specifically wanted to compare the performance of its mobile-centric app install campaigns on Meta with the performance of campaigns on another publisher.

Go-MMT adopted the incremental measurement approach outlined in Measure to Grow, a report from Meta and the Boston Consulting Group.[75] The report revealed that companies in India that were adopting incrementality-based measurement were unlocking additional business growth.

Working with a marketing analytics company and Meta, Go-MMT set up an incrementality-based geo-lift experiment to measure its app downloads. The study involved twenty-three cities: twelve made up the control group, leaving the remaining eleven in the test group. The test group was further divided into three cells: the first was shown Meta ads, the second was shown ads from another digital channel, and the third was shown ads from both Meta and the other channel. The cities in the control group were not shown any ads.
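A geo-split of this kind can be sketched in a few lines of Python. The city names and the sizes of the three test cells below are illustrative assumptions — the case study states only the overall 12/11 control/test split, not how the eleven test cities were divided.

```python
import random

# Hypothetical market names standing in for the 23 cities in the study
cities = [f"city_{i}" for i in range(1, 24)]

random.seed(42)          # fixed seed so the assignment is reproducible
random.shuffle(cities)   # randomise before splitting into groups

control = cities[:12]    # control group: no ads shown
test = cities[12:]       # remaining 11 cities form the test group

# Illustrative division of the test group into three exposure cells
cells = {
    "meta_only": test[:4],
    "other_channel_only": test[4:8],
    "both_channels": test[8:],
}
```

Randomising before splitting helps ensure the groups are comparable, so that differences in downloads can be attributed to ad exposure rather than to pre-existing differences between cities.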

As a result of the geo-lift experiment, Go-MMT was able to see the true incremental impact of its app install campaigns and compare performance across platforms. A key finding was that Meta outperformed the other channel, delivering an 8% higher incremental lift for app install campaigns on Meta technologies.

The analysis has allowed Go-MMT to introduce geo-lift as a common method of measurement across platforms, and to continue to measure the incremental performance of its campaigns.

Go-MMT / Facebook[1]

Using controlled experiments and incrementality tests

Consideration needs to be given to how to create robust test and control groups, particularly in an environment where third-party cookies and device IDs cannot be depended upon to create these discrete groups. For test and control cells to be valid, each group must contain enough people to account for anomalies in the data, making it more likely that any uplift is a direct result of the marketing message the test group was exposed to rather than a product of chance. Statistical significance can easily be checked through online calculators,[2] which offer an easy way to confirm that any one isolated group has a large enough sample size to be representative of the wider population.

Types of experiments a company can use include brand lift studies, conversion lift studies, A/B tests and geographical experiments. A/B testing could involve groups of existing customers where first-party data is available to target ads or emails to different groups to assess the impact. A geographical experiment takes two regions with similar makeups and conducts a test where marketing is switched on in one and off in the other.
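A geographical experiment like the one described above is often read with a difference-in-differences calculation, which strips out growth the control region shows anyway (seasonality, market trend). The figures below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical weekly sales for two similar regions, before and
# during a campaign that runs only in the test region.
test_pre, test_during = 1000, 1300   # test region: marketing switched on
ctrl_pre, ctrl_during = 1000, 1100   # control region: marketing switched off

# Difference-in-differences: the test region's growth minus the
# growth the control region experienced without any marketing.
incremental_lift = (test_during / test_pre) - (ctrl_during / ctrl_pre)
# roughly 0.2, i.e. a ~20% lift attributable to the campaign
```

The control region's 10% background growth is subtracted out, so the campaign is credited only with the uplift beyond what would have happened anyway.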

There are also opportunities to do controlled experiments in partnership with publishers and platforms that have access to first-party data, such as Facebook and Google. Roxane Panopoulos, Group Manager of Regional Measurement & Insights – Netherlands and Nordics at Snap, talks about the platform’s focus on geographical lift measurement as an option that works well for them. “One of the challenges for a platform like Snapchat, where typically share of voice is low – i.e. our share of spend as part of total digital is relatively low – is that smaller platforms like ourselves are harder to measure using traditional approaches like MMM when looking at overall impact. This means that, in some reporting tools, the amount attributed to our platform is often underrepresented in driving a conversion. Conducting a geographical lift study can help a company to determine incremental sales of using Snapchat.”

Benefits and challenges of controlled experiments and incrementality tests

Experiments are a very valuable tool as they allow businesses to determine the incremental impact of specific marketing activities.

Delivers robust tactical insight. Controlled experiments can deliver robust, tactical insights that help a marketer while a campaign is running. Overall, controlled experiments are considered one of the best-in-class options for those seeking to measure marketing effectiveness.

Jewellery company Pandora wanted to conduct a series of tests focused on incrementality to determine the additional business value generated as a direct result of a marketing campaign or media exposure. A key priority was to assess the effectiveness of a full-funnel strategy and identify best practices.

The results revealed that after three weeks a full-funnel approach surpassed a performance-only approach. Based on this, Pandora identified a cross-regional strategy for Facebook’s platforms, running awareness campaigns targeted at broad audiences and optimised for either reach or brand awareness.

The work highlighted the benefits of incrementality-based studies and showed that adding awareness to performance activities was highly effective for the brand. As a result, Pandora made lasting changes to its budgeting, testing, campaign targeting and media mix strategies to help drive reach and efficiency.

Results revealed that executing a full-funnel strategy achieved a 73% lower cost per incremental conversion and a 148% increase in reach. Bidding for broad audiences rather than interest-based audiences led to a four-times increase in the size of the retargeting audience and a 60% lower cost per incremental conversion. As part of Pandora’s paid social mix, Facebook’s efficiency increased three times over.

Pandora / Facebook[3]

Requires resource and time. Creating valid test and control groups can be challenging from both a practical and a resource viewpoint: it can be costly, time-consuming or both where companies do not have easy access to data sources. For example, an online retailer will have access to first-party data as a result of purchases made by customers. An FMCG or pharma company selling through a third party will not, except where it has started to build up such data through its own marketing initiatives or by expanding into direct channels.

Lacks the long-term picture. Controlled experiments give a great snapshot in time as to the effectiveness of the campaign in question but do not address some of the longer-term brand impacts.

Requires valid test and control cells. To truly understand the impact of the marketing being undertaken, each iteration of the marketing message needs to be tested independently. It is possible to create several test cells to assess more than one iteration at a time, but only within the limits of keeping a large enough sample size to ensure the test is statistically significant.

Does not scale easily and quickly, especially across multiple channels. However, it works well when combined with other solutions such as marketing mix modelling.

  • When executed well, controlled experiments and incrementality tests are valuable measurement tools for many businesses, delivering tactical insights and allowing marketers to demonstrate the incremental impact of specific activities.
  • Time, resource requirements and difficulty of scaling are among the key challenges associated with incrementality tests.
  • With less access to third-party cookies and the decline of device IDs, it is important to ensure test and control groups are robust in terms of how groups will be identified.
  • Leverage opportunities to access publishers’ and platforms’ first-party data – which often sits behind a walled garden – through controlled experiments.
  • Consider where it makes sense to add experiments and incrementality tests to other measurement approaches, and use the enhanced insights to adjust budgets accordingly.
  • In addition to controlled experiments with publishers, brand lift studies, conversion lift studies, A/B tests and geographical experiments are other types of experiments marketers can conduct.

This guide is based on primary research which involved exploring findings from two reports:

  • Econsultancy’s 2023 Future of Marketing report, which was based on a survey of 835 client, vendor and agency-side marketers. The survey was fielded to Econsultancy and Marketing Week’s audiences between 9 June and 3 July 2023.
  • The Language of Effectiveness 2023 report has been produced using responses to an online survey of 1,369 qualifying marketers conducted by Econsultancy’s sister brand Marketing Week between 27 March and 28 April 2023.

In-depth interviews were carried out with industry experts. Econsultancy would like to thank the following interviewees for their invaluable contribution of time and expertise to this guide:

  • Kumar Amrendra, Head of Digital Marketing, Sky UK Ltd
  • Amy Blasco, Partner, Enterprise Data, Experience and Marketing Lead, IBM
  • Laura Chaibi, Director, International Ad Marketing and Insights, Roku Inc
  • Sebastian Cruz, Regional Digital Marketing and Media Director, Shiseido, Asia Pacific
  • Gary Danks, General Manager, AIM, Kochava
  • Mauricio Ferreira, Marketing Effectiveness Lead, Confused.com
  • James Hurman, Founding Partner, Previously Unavailable
  • Gabriel Hughes, CEO and Founder, Metageni
  • Dr Grace Kite, Economist and Founder, Magic Numbers
  • Chloe Nicholls, Head of Ad Tech, IAB UK
  • Roxane Panopoulos, Group Manager, Regional Measurement & Insights – Netherlands and Nordics, Snap Inc
  • Marina Peluffo, Head of Business Intelligence, Prima (speaking as industry expert)
  • James Sharman, Northern Europe Digital Acceleration Lead, Haleon
  • Steven Silvers, EVP, Global Creative and Media Solutions, Kantar

Lynette Saunders is a Senior Analyst at Econsultancy, where she works on delivering industry-leading research, briefings and reports for the digital marketing industry and speaks at a number of external conferences.

Lynette’s previous experience includes delivering web analytics, measurements and insights, as well as leading usability and customer experience programmes focusing on improving the overall online customer experience for Cancer Research UK and the Royal Mail Group.