A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook


Speaker


Abstract

Advertisers are keenly interested in knowing the effectiveness of their online advertising. However, the industry seldom uses randomized experiments to estimate effectiveness, relying instead on observational methods such as matching and regression. This is partly because, until recently, randomized experiments have been difficult or expensive to implement in online advertising contexts, and partly because observational methods are widely considered within the industry to be "good enough." We analyze whether observational methods for causal inference can reliably substitute for randomized experiments in online advertising measurement. This is of particular interest because there have been enormous recent improvements in observational methods for causal inference (Imbens & Rubin, 2015). Using data from 12 US advertising lift studies at Facebook comprising 435 million user-study observations and 1.4 billion total impressions, we contrast the experimental results with those obtained from a variety of observational methods. We show that observational methods often fail to reproduce the results of the randomized experiments, even after conditioning on information from thousands of behavioral variables and using non-linear models. Our findings suggest that the approaches commonly used in industry to measure advertising effectiveness fail to accurately measure the true effect of ads.
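The gap between observational and experimental estimates described above arises when a confounder (e.g., a user's baseline activity) drives both ad exposure and conversion. The simulation below is a minimal illustrative sketch of that mechanism, not a reproduction of the study's data or methods: all parameter values (the true lift, the confounder's strength, the exposure rule) are assumptions chosen for clarity.

```python
import random

random.seed(0)

def simulate(n=100_000, tau=1.0):
    """Compare a randomized experiment with a naive observational
    comparison when a latent 'activity' confounder drives both ad
    exposure and baseline conversion. All parameters are illustrative."""
    rct_treated, rct_control = [], []
    obs_exposed, obs_unexposed = [], []
    for _ in range(n):
        activity = random.random()        # latent confounder in [0, 1)
        baseline = 5.0 * activity         # more active users convert more
        # Randomized experiment: exposure is independent of activity.
        treated = random.random() < 0.5
        y_rct = baseline + (tau if treated else 0.0) + random.gauss(0, 1)
        (rct_treated if treated else rct_control).append(y_rct)
        # Observational setting: more active users see more ads.
        exposed = random.random() < activity
        y_obs = baseline + (tau if exposed else 0.0) + random.gauss(0, 1)
        (obs_exposed if exposed else obs_unexposed).append(y_obs)
    mean = lambda xs: sum(xs) / len(xs)
    rct_lift = mean(rct_treated) - mean(rct_control)      # ~ tau (unbiased)
    naive_lift = mean(obs_exposed) - mean(obs_unexposed)  # tau + confounding
    return rct_lift, naive_lift

rct_lift, naive_lift = simulate()
print(f"true effect: 1.0 | RCT estimate: {rct_lift:.2f} | "
      f"naive observational estimate: {naive_lift:.2f}")
```

Because exposed users are more active on average, the naive exposed-vs-unexposed comparison substantially overstates the true lift, while the randomized comparison recovers it. Conditioning on observed proxies for activity (as matching and regression do) narrows this gap only to the extent that the proxies capture the confounder, which is the empirical question the talk addresses.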