2024-11-11 | Recent Advances in Recommender Systems




Paper Sharing | Research Progress in Recommender Systems

  1. Dual Contrastive Transformer for Hierarchical Preference Modeling in Sequential Recommendation
  2. Real-Time Personalization for LLM-based Recommendation with Customized In-Context Learning
  3. Beyond Utility: Evaluating LLM as Recommender
  4. Efficient and Effective Adaptation of Multimodal Foundation Models in Sequential Recommendation
  5. Facet-Aware Multi-Head Mixture-of-Experts Model for Sequential Recommendation

1. Dual Contrastive Transformer for Hierarchical Preference Modeling in Sequential Recommendation

Authors: Chengkai Huang, Shoujin Wang, Xianzhi Wang, Lina Yao

https://arxiv.org/abs/2410.22790

Abstract

Sequential recommender systems (SRSs) aim to predict the subsequent items that may interest users by comprehensively modeling users' complex preferences embedded in the sequence of user-item interactions. However, most existing SRSs often model users' single low-level preferences based on item ID information while ignoring the high-level preferences revealed by item attribute information, such as item category. Furthermore, they often utilize limited sequence context information to predict the next item while overlooking richer inter-item semantic relations. To this end, in this paper, we propose a novel hierarchical preference modeling framework to substantially capture the complex low- and high-level preference dynamics for accurate sequential recommendation. Specifically, in this framework, a novel dual-transformer module and a dual contrastive learning scheme have been designed to discriminatively learn users’ low- and high-level preferences and to effectively enhance both types of preference learning. In addition, a novel semantics-enhanced context embedding module has been devised to generate more informative context embeddings for further improving recommendation performance. Extensive experiments on six real-world datasets demonstrate both the superiority of our proposed method over state-of-the-art approaches and the soundness of our design.

Brief Review

This paper proposes a hierarchical preference modeling framework for sequential recommendation that captures users' low-level (item-ID) preferences while also accounting for their high-level (attribute-based) preferences. The framework pairs a dual-transformer module with a dual contrastive learning scheme to strengthen both levels of preference learning, and experiments on multiple real-world datasets show it outperforms existing methods, filling a notable gap in current sequential recommenders. The key innovations are the dual-transformer module and the dual contrastive learning scheme, which together form a comprehensive model of user preferences; extensive experiments confirm their effectiveness and show that the resulting system better serves users' personalized needs. Overall, this is a solid piece of work and a valuable reference for improving the performance of sequential recommender systems.
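The core idea lends itself to a compact illustration. Below is a minimal PyTorch sketch, not the authors' code, of a dual-encoder setup: one Transformer encodes the item-ID sequence (low-level preferences), another encodes the item-category sequence (high-level preferences), and an InfoNCE-style contrastive loss aligns the two sequence representations. All class names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of the dual-encoder idea behind hierarchical preference modeling.
# Not the paper's implementation; names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPreferenceEncoder(nn.Module):
    def __init__(self, n_items, n_cates, d=64, n_heads=2, n_layers=2, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d, padding_idx=0)   # low-level (ID) view
        self.cate_emb = nn.Embedding(n_cates, d, padding_idx=0)   # high-level (attribute) view
        self.pos_emb = nn.Embedding(max_len, d)
        self.item_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, n_heads, 4 * d, batch_first=True), n_layers)
        self.cate_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, n_heads, 4 * d, batch_first=True), n_layers)

    def forward(self, item_seq, cate_seq):
        pos = torch.arange(item_seq.size(1), device=item_seq.device)
        low = self.item_encoder(self.item_emb(item_seq) + self.pos_emb(pos))
        high = self.cate_encoder(self.cate_emb(cate_seq) + self.pos_emb(pos))
        return low[:, -1], high[:, -1]   # last position as the sequence representation

def info_nce(z1, z2, tau=0.2):
    """Contrastive loss treating (z1[i], z2[i]) as the positive pair within the batch."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Usage with random data
model = DualPreferenceEncoder(n_items=1000, n_cates=50)
item_seq = torch.randint(1, 1000, (8, 50))
cate_seq = torch.randint(1, 50, (8, 50))
low, high = model(item_seq, cate_seq)
loss = info_nce(low, high)   # would be combined with the next-item prediction loss
```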

2. Real-Time Personalization for LLM-based Recommendation with Customized In-Context Learning

Authors: Keqin Bao, Ming Yan, Yang Zhang, Jizhi Zhang, Wenjie Wang, Fuli Feng, Xiangnan He

https://arxiv.org/abs/2410.23136

Abstract

Large language model (LLM)-based recommendation has shown great potential in enhancing various recommendation scenarios. However, the dynamic nature of user interests poses a significant challenge, as frequent updates to LLMs are impractical due to their massive size and high training costs. This paper introduces RecICL, a novel approach that leverages in-context learning to capture users' latest preferences without model updates. RecICL optimizes the instruction tuning phase to align LLMs with recommendation tasks while preserving and enhancing their in-context learning capabilities. During training and inference, RecICL utilizes few-shot examples from users' recent interactions to provide the model with up-to-date interest information. Experimental results demonstrate that RecICL significantly outperforms existing LLM-based baselines and traditional recommendation methods across multiple datasets. Moreover, RecICL maintains robust performance over extended periods, even when user interests change significantly.

Brief Review

This paper proposes RecICL, a method that lets LLM-based recommenders adapt to dynamic, real-time user interests. It relies on in-context learning (ICL): a handful of demonstrations built from the user's most recent interactions convey up-to-date preference information to the model. RecICL's main contribution is addressing the challenge that shifting user interests pose to LLM recommenders, enabling real-time interest adaptation without any model updates. Experiments on real-world datasets show consistent improvements over existing baselines. Overall, the paper is an interesting exploration of deploying LLM recommenders under dynamic user interests; the proposed solution is shown to be both effective and practical, and the findings are relevant to future research and practice.
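The prompt-construction step is the heart of this kind of approach. Below is a minimal, hypothetical sketch of how recent user sessions could be packed into few-shot demonstrations for an LLM recommender; the template, the build_prompt helper, and the example item names are assumptions for illustration, not the paper's actual prompt format.

```python
# Hypothetical few-shot prompt builder for in-context LLM recommendation.
from typing import List, Tuple

def build_prompt(recent_sessions: List[Tuple[List[str], str]],
                 current_history: List[str],
                 candidates: List[str]) -> str:
    """Each recent session is (history, item_actually_clicked_next) and serves as
    one in-context demonstration; the current history is the query to complete."""
    parts = []
    for history, next_item in recent_sessions:
        parts.append(
            f"User history: {', '.join(history)}\n"
            f"Next item: {next_item}\n"
        )
    parts.append(
        f"User history: {', '.join(current_history)}\n"
        f"Candidates: {', '.join(candidates)}\n"
        f"Next item:"
    )
    return "\n".join(parts)

# Usage: demonstrations come from the user's most recent interactions,
# so the prompt reflects their latest interests without retraining the LLM.
demos = [(["Inception", "Interstellar"], "Tenet"),
         (["Tenet", "Dunkirk"], "Oppenheimer")]
prompt = build_prompt(demos, ["Oppenheimer", "Memento"],
                      ["Insomnia", "The Prestige", "Following"])
print(prompt)
```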

3. Beyond Utility: Evaluating LLM as Recommender

Authors: Chumeng Jiang, Jiayin Wang, Weizhi Ma, Charles L. A. Clarke, Shuai Wang, Chuhan Wu, Min Zhang

https://arxiv.org/abs/2411.00331

Abstract

With the rapid development of Large Language Models (LLMs), recent studies have employed LLMs as recommenders to provide personalized information services for distinct users. Despite efforts to improve the accuracy of LLM-based recommendation models, relatively little attention has been paid to beyond-utility dimensions. Moreover, there are unique evaluation aspects of LLM-based recommendation models that have been largely ignored. To bridge this gap, we explore four new evaluation dimensions and propose a multidimensional evaluation framework. The new evaluation dimensions include: 1) history length sensitivity, 2) candidate position bias, 3) generation-involved performance, and 4) hallucinations. All four dimensions potentially impact performance but are largely overlooked in traditional systems. Using this multidimensional evaluation framework, alongside traditional aspects, we evaluate the performance of seven LLM-based recommenders with three prompting strategies, comparing them with six traditional models on both ranking and re-ranking tasks across four datasets. We find that LLMs excel at handling tasks with prior knowledge and shorter input histories in the ranking setting, and perform better in the re-ranking setting, outperforming traditional models across multiple dimensions. However, LLMs exhibit substantial candidate position bias issues, and some models hallucinate non-existent items much more often than others. We intend for our evaluation framework and observations to benefit future research on the use of LLMs as recommenders. The code and data are available at https://github.com/JiangDeccc/EvaLLMasRecommender.

Brief Review

This paper proposes a multidimensional evaluation framework for assessing LLMs as recommenders, introducing four new dimensions (history length sensitivity, candidate position bias, generation-involved performance, and hallucination) to cover aspects that existing evaluations miss. The authors comprehensively evaluate seven LLM-based recommenders and compare them with six traditional models across multiple datasets. The study highlights key issues such as candidate position bias and hallucination, which remain central challenges for LLM-based recommenders. Overall, the paper offers valuable insights into evaluating LLMs as recommenders and advances research in this direction.
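Two of the new dimensions, hallucination and candidate position bias, are straightforward to operationalize. The sketch below shows one plausible way to compute them; the function names and exact metric definitions are assumptions for illustration and may differ from the paper's released code.

```python
# Illustrative metrics for two beyond-utility dimensions: hallucination rate
# (recommended items absent from the candidate set) and candidate position bias
# (hit rate bucketed by where the ground truth sat in the candidate list).
from collections import defaultdict
from typing import Dict, List

def hallucination_rate(recommendations: List[List[str]],
                       candidate_sets: List[List[str]]) -> float:
    """Fraction of recommended items that do not appear in the corresponding candidate set."""
    total = hallucinated = 0
    for recs, cands in zip(recommendations, candidate_sets):
        cand_set = set(cands)
        total += len(recs)
        hallucinated += sum(1 for item in recs if item not in cand_set)
    return hallucinated / max(total, 1)

def hit_rate_by_position(top1_preds: List[str],
                         targets: List[str],
                         target_positions: List[int]) -> Dict[int, float]:
    """Top-1 hit rate grouped by the ground-truth item's position in the prompt;
    a flat curve across positions would indicate no position bias."""
    hits, counts = defaultdict(int), defaultdict(int)
    for pred, target, pos in zip(top1_preds, targets, target_positions):
        counts[pos] += 1
        hits[pos] += int(pred == target)
    return {pos: hits[pos] / counts[pos] for pos in sorted(counts)}
```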

4. Efficient and Effective Adaptation of Multimodal Foundation Models in Sequential Recommendation

Authors: Junchen Fu, Xuri Ge, Xin Xin, Alexandros Karatzoglou, Ioannis Arapakis, Kaiwen Zheng, Yongxin Ni, Joemon M. Jose

https://arxiv.org/abs/2411.02992

Abstract

Multimodal foundation models (MFMs) have revolutionized sequential recommender systems through advanced representation learning. While Parameter-efficient Fine-tuning (PEFT) is commonly used to adapt these models, studies often prioritize parameter efficiency, neglecting GPU memory and training speed. To address this, we introduce the IISAN framework, significantly enhancing efficiency. However, IISAN was limited to symmetrical MFMs and identical text and image encoders, preventing the use of state-of-the-art Large Language Models. To overcome this, we develop IISAN-Versa, a versatile plug-and-play architecture compatible with both symmetrical and asymmetrical MFMs. IISAN-Versa employs a Decoupled PEFT structure and utilizes both intra- and inter-modal adaptation. It effectively handles asymmetry through a simple yet effective combination of group layer dropping and dimension transformation alignment. Our research demonstrates that IISAN-Versa effectively adapts large text encoders, and we further identify a scaling effect where larger encoders generally perform better. IISAN-Versa also demonstrates strong versatility in our defined multimodal scenarios, which include raw titles and captions generated from images and videos. Additionally, IISAN-Versa achieved state-of-the-art performance on the Microlens public benchmark. We will release our code and datasets to support future research.

Brief Review

This paper focuses on adapting multimodal foundation models to sequential recommendation both efficiently and effectively. The authors propose IISAN-Versa, a framework that removes the limitations of existing parameter-efficient fine-tuning approaches and handles both symmetrical and asymmetrical multimodal architectures. Techniques such as group layer dropping and dimension transformation alignment allow the framework to keep strong performance even when the text and image towers differ in size. Experiments show that IISAN-Versa performs well on public benchmarks and outperforms prior methods. Overall, the article lays out the challenges facing multimodal recommenders and demonstrates the practical effect of the new framework, giving researchers a useful perspective on optimizing models for increasingly complex data.
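For readers unfamiliar with the two alignment tricks mentioned above, here is a minimal sketch of the general idea, not the IISAN-Versa implementation: group layer dropping keeps only a subset of the larger text encoder's layers so the two towers expose matching depths, and a linear projection aligns their hidden dimensions. Layer counts, dimensions, and the AsymmetricAligner name are illustrative assumptions.

```python
# Illustrative handling of asymmetric text/image towers:
# (1) group layer dropping: keep roughly one text layer per image layer;
# (2) dimension transformation alignment: project both towers to a shared width.
import torch
import torch.nn as nn
from typing import List, Tuple

class AsymmetricAligner(nn.Module):
    def __init__(self, text_dim=4096, image_dim=768, shared_dim=768,
                 text_layers=32, image_layers=12):
        super().__init__()
        stride = max(text_layers // image_layers, 1)
        # indices of the text-encoder layers whose hidden states we keep
        self.kept_text_layers = list(range(stride - 1, text_layers, stride))[:image_layers]
        self.text_proj = nn.Linear(text_dim, shared_dim)    # dimension alignment
        self.image_proj = nn.Linear(image_dim, shared_dim)

    def forward(self, text_hidden: List[torch.Tensor],
                image_hidden: List[torch.Tensor]) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:
        """text_hidden / image_hidden: per-layer hidden states, each of shape (B, L, dim)."""
        kept = [text_hidden[i] for i in self.kept_text_layers]       # group layer dropping
        text_aligned = [self.text_proj(h) for h in kept]
        image_aligned = [self.image_proj(h) for h in image_hidden[:len(kept)]]
        return text_aligned, image_aligned
```

The aligned per-layer states could then feed a decoupled side-adapter for intra- and inter-modal adaptation while the frozen backbones stay untouched, which is where the memory and training-speed savings come from.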

5. Facet-Aware Multi-Head Mixture-of-Experts Model for Sequential Recommendation

Authors: Mingrui Liu, Sixiao Zhang, Cheng Long

https://arxiv.org/abs/2411.01457

Abstract

Sequential recommendation (SR) systems excel at capturing users' dynamic preferences by leveraging their interaction histories. Most existing SR systems assign a single embedding vector to each item to represent its features, and various types of models are adopted to combine these item embeddings into a sequence representation vector to capture the user intent. However, we argue that this representation alone is insufficient to capture an item's multi-faceted nature (e.g., movie genres, starring actors). Besides, users often exhibit complex and varied preferences within these facets (e.g., liking both action and musical films in the facet of genre), which are challenging to fully represent. To address the issues above, we propose a novel structure called Facet-Aware Multi-Head Mixture-of-Experts Model for Sequential Recommendation (FAME). We leverage sub-embeddings from each head in the last multi-head attention layer to predict the next item separately. This approach captures the potential multi-faceted nature of items without increasing model complexity. A gating mechanism integrates recommendations from each head and dynamically determines their importance. Furthermore, we introduce a Mixture-of-Experts (MoE) network in each attention head to disentangle various user preferences within each facet. Each expert within the MoE focuses on a specific preference. A learnable router network is adopted to compute the importance weight for each expert and aggregate them. We conduct extensive experiments on four public sequential recommendation datasets and the results demonstrate the effectiveness of our method over existing baseline models.

Brief Review

In summary, this paper introduces the Facet-Aware Multi-Head Mixture-of-Experts model (FAME), which improves sequential recommendation by capturing the multi-faceted nature of items and user preferences. It combines a mixture-of-experts framework with multi-head attention to strengthen user-intent modeling, and experiments on four public datasets validate its effectiveness. The results show that FAME outperforms existing baselines, demonstrating its ability to model complex user intent. The work therefore offers an effective solution to this problem in sequential recommendation, with both theoretical and practical value.
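The interplay of per-head predictions, per-head experts, and head-level gating can be hard to picture from prose alone. The sketch below illustrates the general mechanism under simplified assumptions; class names, layer shapes, and the omission of the attention internals are illustrative choices, not the FAME implementation.

```python
# Illustrative facet-aware scoring: each attention head's sub-embedding is refined
# by its own small mixture of experts and scores all items; a learned gate then
# weights the heads' score vectors when combining them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FacetAwareScorer(nn.Module):
    def __init__(self, n_items, d_model=64, n_heads=4, n_experts=3):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.item_emb = nn.Embedding(n_items, d_model)
        self.experts = nn.ModuleList([
            nn.ModuleList([nn.Linear(self.d_head, self.d_head) for _ in range(n_experts)])
            for _ in range(n_heads)])                                  # per-head experts
        self.routers = nn.ModuleList([nn.Linear(self.d_head, n_experts) for _ in range(n_heads)])
        self.head_proj = nn.ModuleList([nn.Linear(self.d_head, d_model) for _ in range(n_heads)])
        self.gate = nn.Linear(d_model, n_heads)                        # weights the heads

    def forward(self, seq_repr):
        """seq_repr: (B, d_model) output of the last multi-head attention layer."""
        B = seq_repr.size(0)
        sub = seq_repr.view(B, self.n_heads, self.d_head)              # per-head sub-embeddings
        head_scores = []
        for h in range(self.n_heads):
            z = sub[:, h]                                              # (B, d_head)
            w = F.softmax(self.routers[h](z), dim=-1)                  # (B, n_experts)
            expert_out = torch.stack([e(z) for e in self.experts[h]], dim=1)
            z = (w.unsqueeze(-1) * expert_out).sum(dim=1)              # router-weighted experts
            head_scores.append(self.head_proj[h](z) @ self.item_emb.weight.t())
        gate_w = F.softmax(self.gate(seq_repr), dim=-1)                # (B, n_heads)
        scores = torch.stack(head_scores, dim=1)                       # (B, n_heads, n_items)
        return (gate_w.unsqueeze(-1) * scores).sum(dim=1)              # (B, n_items)
```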


We welcome your valuable suggestions in the comments, including but not limited to:

  • Point out shortcomings in this post's paper reviews!
  • Share recent papers you find even more worth recommending, and tell us why!


END

Recommended Reading

Paper Sharing | DLCRec: Diversity-Oriented Recommendation Based on Large Language Models
2024-11-8 Paper Sharing | Recent Advances in Large Language Models
2024-11-7 Paper Sharing | Recent Advances in Multimodal Large Models
2024-11-6 Paper Sharing | Recent Advances in Agents


智荐阁
Covering frontier progress in generative large models and recommender systems, including but not limited to: large language models, recommender systems, agent learning, reinforcement learning, generative recommendation, guided recommendation, recommendation agents, and agent-based recommendation.