CDR: Conservative Doubly Robust Learning for Debiased Recommendation

By Zijie Song et al.
Published on Oct. 21, 2023

Table of Contents

1. Introduction
2. Analyses Over Doubly Robust Learning
3. Methodology
4. Theoretical Analyses
5. Experiments

Summary

CDR: Conservative Doubly Robust Learning for Debiased Recommendation addresses the problem of poisonous imputations in doubly robust learning for recommendation systems. The work proposes a Conservative Doubly Robust strategy (CDR) that filters imputations by examining their mean and variance, discarding unreliable ones. Theoretical analyses demonstrate that CDR offers reduced variance and improved tail bounds over standard doubly robust learning. Empirical experiments on three real-world datasets, Coat, Yahoo!R3, and KuaiRand-Pure, show that CDR improves debiasing performance, reduces the ratio of poisonous imputations, and outperforms baseline methods. Overall, CDR provides a promising approach for enhancing recommendation system performance.
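To make the idea concrete, the sketch below shows a standard doubly robust (DR) estimator of prediction error alongside a hypothetical CDR-style variant that filters imputations by their variance. The variance threshold, the use of multiple imputation samples, and the choice to zero out filtered imputations are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def dr_loss(errors, imputed, observed, propensity):
    """Standard doubly robust estimator of the prediction error.

    errors:      true prediction errors on observed entries (e_ui)
    imputed:     imputed errors for all entries (e_hat_ui)
    observed:    0/1 indicator of which entries are observed (o_ui)
    propensity:  estimated observation probabilities (p_hat_ui)
    """
    correction = observed * (errors - imputed) / propensity
    return np.mean(imputed + correction)

def conservative_dr_loss(errors, imputed_samples, observed, propensity,
                         var_threshold=0.05):
    """Hypothetical sketch of CDR-style filtering: imputations are drawn
    several times (rows of imputed_samples); those whose sample variance
    exceeds a threshold are treated as poisonous and dropped, so only
    stable imputations contribute to the imputation term."""
    mean_imp = imputed_samples.mean(axis=0)
    var_imp = imputed_samples.var(axis=0)
    keep = var_imp <= var_threshold           # conservative filter
    imputed = np.where(keep, mean_imp, 0.0)   # drop unreliable imputations
    correction = observed * (errors - imputed) / propensity
    return np.mean(imputed + correction)
```

When the imputation model is exact and every entry is observed with propensity 1, both estimators reduce to the plain mean error, which matches the DR property that either a correct imputation model or a correct propensity model suffices for unbiasedness.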