Journal of Machine Learning Research

By Huafeng Liu et al.
Published on June 10, 2021

Table of Contents

1. Introduction
2. Related Work
3. Notations and Problem Formulation
4. User Preference Modeling
5. Interpretable Recommendation
6. Proposed Method: InDGRM
7. Experimental Results
8. Conclusion
References

Summary

Interpretable Deep Generative Recommendation Models

Huafeng Liu and colleagues propose an Interpretable Deep Generative Recommendation Model (InDGRM) that characterizes user behavior by modeling both inter-user preference similarity and intra-user preference diversity, achieving disentanglement at both the latent level and the observed level for interpretable recommendation. The model promotes disentangled latent representations by introducing structure- and sparsity-inducing penalties into the generative procedure. Experiments on real-world datasets demonstrate that InDGRM outperforms competing methods on popular evaluation metrics and yields interpretable disentanglement in both the learned latent representations and the observed behavior.
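The summary describes structure- and sparsity-inducing penalties added to a generative objective. As a rough illustration only (not the paper's actual formulation), the sketch below combines a reconstruction loss with an L1 sparsity penalty on user latents and a group (L2,1) penalty on decoder weights; all shapes, names, and penalty choices here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 8 users, 4 latent factors, 16 items.
z = rng.normal(size=(8, 4))          # user latent representations
x = rng.normal(size=(8, 16))         # observed interaction vectors
W = rng.normal(size=(4, 16)) * 0.1   # linear "decoder" weights (assumed form)

def reconstruction_loss(x, z, W):
    # Squared error between observations and a linear reconstruction.
    return np.mean((x - z @ W) ** 2)

def sparsity_penalty(z, lam=0.1):
    # L1 penalty encouraging each user to rely on few latent factors.
    return lam * np.mean(np.abs(z))

def structure_penalty(W, lam=0.1):
    # Group (L2,1) penalty tying each latent factor to a small set of items,
    # a stand-in for the structure-inducing term the summary mentions.
    return lam * np.sum(np.linalg.norm(W, axis=1))

loss = reconstruction_loss(x, z, W) + sparsity_penalty(z) + structure_penalty(W)
print(float(loss))
```

In practice such penalties would be minimized jointly with the generative model's parameters (e.g. by gradient descent); the point here is only how sparsity and group-structure terms enter the objective.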