14:40 〜 15:00
[3K4-IS-2a-04] MORE: Modality-Embracing Contrastive Learning for Multimodal recommendation
Keywords: Recommendation System, Contrastive Learning, Multi-Modal
Multimodal recommendation helps users find items of interest by exploiting items' multimodal features, such as visual and textual modalities, in addition to interaction information, thereby alleviating the information overload problem. Although significant progress has been made on this task, existing research remains limited in how fully it embraces modality information. Specifically, current methods focus on collaborative filtering signals, while the information contained in the content (modality information) is not effectively represented in the learned model. In this context, MONET proposes a well-designed Graph Convolutional Network (GCN) and achieves state-of-the-art performance on multimodal recommendation. However, employing a specific GCN architecture alone is insufficient to improve how much modality information is retained. To address this limitation, we propose a simple yet effective model named Modality-Embracing COntRastive LEarning (MORE), which leverages contrastive learning, a self-supervised method, to synchronize modality information, thereby enhancing the final embeddings and the quality of recommendations. Comprehensive experiments on two public datasets validate the improved performance of the MORE model.
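The abstract does not specify the exact contrastive objective used to synchronize modalities. A common instantiation of such cross-modal alignment is a symmetric InfoNCE loss that treats the visual and textual embeddings of the same item as a positive pair and in-batch embeddings of other items as negatives. The sketch below (PyTorch) illustrates this idea under that assumption; the function name, the temperature value, and the symmetric formulation are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def modality_contrastive_loss(visual_emb: torch.Tensor,
                              textual_emb: torch.Tensor,
                              temperature: float = 0.2) -> torch.Tensor:
    """InfoNCE-style loss that pulls together the visual and textual
    embeddings of the same item and pushes apart those of different
    items in the batch.

    visual_emb, textual_emb: (batch, dim) item embeddings, one per modality.
    """
    v = F.normalize(visual_emb, dim=-1)
    t = F.normalize(textual_emb, dim=-1)
    # (batch, batch) cosine-similarity matrix scaled by temperature;
    # the diagonal holds the positive (same-item) pairs.
    logits = v @ t.T / temperature
    labels = torch.arange(v.size(0), device=v.device)
    # Symmetric loss: visual->textual and textual->visual directions.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))
```

In a model like MORE, such a term would presumably be added, with a weighting coefficient, to the main recommendation objective (e.g., a BPR ranking loss) so that modality alignment and preference learning are optimized jointly; this combination is an assumption based on standard practice, not a detail stated in the abstract.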