Real-time recommendation engines enable effective personalization in e-commerce. Yet developing such engines is not trivial: it remains challenging to optimize across many options, especially while utilizing contextual information in real time. To meet these challenges, we aim to provide an easy-to-implement personalization method that supports online retailers and marketers in making fast, adaptive decisions. We formalize the personalization problem under the multi-armed bandit framework and propose a new contextual bandit algorithm based on the particle-filtering technique. Our method allows firms to flexibly introduce new personalized options, calibrate their impact using prior knowledge from historical data, and rapidly update these prior beliefs as new observations arrive. In an application to news-article recommendation, we show that the proposed method achieves a click-through rate (CTR) of 5.96%, compared with state-of-the-art methods such as UCB and LinUCB, which achieve CTRs of 5.44% and 5.97%, respectively.
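The particle-filtering algorithm itself is not detailed in the abstract, so as background only, here is a minimal sketch of the non-contextual UCB baseline mentioned above (UCB1: play each arm once, then repeatedly pick the arm maximizing its empirical mean reward plus an exploration bonus). The two-arm setup, click probabilities, and horizon are hypothetical and exaggerated relative to realistic CTRs purely for illustration.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Minimal UCB1 sketch: initialize by trying every arm once, then
    choose the arm with the highest mean reward + sqrt(2 ln t / n_a)."""
    counts = [0] * n_arms   # number of pulls per arm
    sums = [0.0] * n_arms   # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # initialization round: try each arm once
        else:
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return counts, total

# Two simulated Bernoulli "articles" with hypothetical click probabilities.
random.seed(42)
probs = [0.2, 0.5]
counts, reward = ucb1(
    lambda a: 1.0 if random.random() < probs[a] else 0.0,
    n_arms=2, horizon=3000,
)
```

Over the horizon, UCB1 concentrates its pulls on the better arm while still occasionally sampling the worse one; contextual variants such as LinUCB extend this idea by conditioning the reward estimate on a feature vector for each user-article pair.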