Zero-shot Item-based Recommendation via Multi-task Product Knowledge Graph Pre-Training
By Ziwei Fan et al.
Table of Contents
1. INTRODUCTION
2. RELATED WORK
2.1 Product Knowledge Graph
2.2 Graph Pre-Training
3. PRELIMINARIES
4. PROPOSED METHOD
4.1 Product Knowledge Graph Construction
4.2 Multi-Relation Graph Encoder
4.3 Task-oriented Adaptation Layer
4.4 Multi-task Pre-training
4.4.1 Knowledge Reconstruction (KR)
Summary
Existing recommender systems struggle with zero-shot items, i.e., items that have no historical user interactions during the training stage. This paper presents a novel paradigm for the Zero-Shot Item-based Recommendation (ZSIR) task: it pre-trains a model on a product knowledge graph (PKG) to refine item features obtained from pre-trained language models (PLMs). The proposed multi-task PKG pre-training model strengthens the graph encoder's inductive ability on zero-shot items. The pre-training tasks comprise Knowledge Reconstruction (KR), High-order Neighbor Reconstruction (HNR), Universal Feature Reconstruction (FR), and Meta Relation Adaptation (MRA), which together address challenges such as multi-type relations in the PKG, semantic divergence, and domain discrepancy. The framework is then fine-tuned on the recommendation task, demonstrating its effectiveness on ZSIR.
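A common way to combine such pre-training tasks is to optimize a weighted sum of the per-task losses. The sketch below is an illustration of that general pattern, not the paper's actual implementation; the task names follow the summary (KR, HNR, FR, MRA), while the weights and loss values are hypothetical.

```python
# Illustrative sketch of a multi-task pre-training objective.
# The per-task losses and weights here are made-up placeholders;
# the paper's actual loss formulation may differ.

def multi_task_loss(task_losses, task_weights=None):
    """Combine per-task losses into a single training objective.

    task_losses: dict mapping task name -> scalar loss value
    task_weights: optional dict mapping task name -> weight
                  (defaults to equal weighting of all tasks)
    """
    if task_weights is None:
        task_weights = {name: 1.0 for name in task_losses}
    return sum(task_weights[name] * loss for name, loss in task_losses.items())

# Example: equal weighting of the four pre-training tasks.
losses = {"KR": 0.8, "HNR": 1.2, "FR": 0.5, "MRA": 0.9}
total = multi_task_loss(losses)  # 3.4
```

In practice each entry of `task_losses` would come from its own head on the shared graph encoder, so gradients from all tasks flow into the same item representations.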