SIME Lecture | Incorporating LLMs for Effective and Efficient Recommendation

Education   2024-12-04 20:11   Shanghai

TIME

December 10, 2024 (Tuesday), 14:00 – 15:00

VENUE

Conference Room 308, School of Information Management and Engineering

SPEAKER

Du Yingpeng (杜鹰鹏) received his Ph.D. in software engineering from Peking University, Beijing, China, in 2023. He is currently a research fellow with the College of Computing and Data Science at Nanyang Technological University. His research interests include recommender systems and ensemble learning. He has published over 20 papers in top-tier journals and conferences such as JMLR, Pattern Recognition, AAAI, SIGKDD, ICDM, IJCAI, and WWW.


TITLE

Incorporating LLMs for Effective and Efficient Recommendation


ABSTRACT

Recently, there has been growing interest in harnessing the extensive knowledge and powerful reasoning abilities of large language models (LLMs) for recommender systems (RSs) to support more effective decision-making. However, integrating LLMs into RSs is not a one-size-fits-all solution: without domain-specific knowledge and effective guidance, LLMs are prone to hallucinations that undermine the reliability of their suggestions. To bridge this gap, we propose integrating domain-specific knowledge graphs (KGs) into LLMs to enrich their knowledge and provide effective guidance, thereby producing more effective recommendations. With their structured representation of facts and relationships, KGs significantly strengthen the factual grounding of LLMs, helping them discern relevant knowledge, overcome hallucinations, and deliver reliable suggestions across domains. In addition, serving LLMs typically demands substantial computation time and memory, leading to high latency and resource requirements that limit real-world deployment. To this end, we propose an active LLM-based knowledge distillation (KD) method for sustainable AI: the student learns from only a small proportion of instances, and those instances are selected to maximize the minimal distillation gain, which theoretically guarantees effective LLM-based KD.
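To make the two ideas in the abstract concrete, here are two toy sketches in Python. First, KG-grounded prompting: knowledge-graph triples relevant to the user's history and the candidate items are serialized into the prompt, so the model ranks candidates against stated facts rather than hallucinated ones. Everything below (the tiny KG, the entities, the prompt template) is a hypothetical illustration, not the speaker's actual method.

```python
# A minimal, hypothetical sketch of KG-grounded prompting for recommendation.
# The tiny movie KG, the entities, and the prompt template are illustrative
# assumptions; they are not the speaker's data or implementation.

# Hypothetical domain KG as (head, relation, tail) triples.
KG = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "sci-fi"),
    ("Interstellar", "genre", "sci-fi"),
    ("Titanic", "genre", "romance"),
]

def triples_about(entities):
    """Retrieve KG facts that mention any of the given entities."""
    ents = set(entities)
    return [t for t in KG if t[0] in ents or t[2] in ents]

def build_prompt(user_history, candidates):
    """Serialize retrieved KG facts into the prompt so the LLM ranks
    candidates against explicit facts instead of hallucinated ones."""
    facts = triples_about(user_history + candidates)
    fact_lines = "\n".join(f"- {h} {r.replace('_', ' ')} {t}" for h, r, t in facts)
    return (
        f"Known facts:\n{fact_lines}\n\n"
        f"The user liked: {', '.join(user_history)}.\n"
        f"Rank these candidates for this user: {', '.join(candidates)}.\n"
        "Justify each ranking using only the facts above."
    )

print(build_prompt(["Inception"], ["Interstellar", "Titanic"]))
```

Second, the active distillation side. The abstract does not specify the gain measure or the selection procedure, so this sketch assumes teacher-student KL divergence as a per-instance gain proxy; picking the top-k instances by gain yields the size-k subset whose minimal gain is largest, matching the "maximize the minimal gains" idea at a toy level.

```python
# A minimal sketch of the active selection idea for LLM-based knowledge
# distillation: distill from only a small budget of instances, chosen so the
# smallest per-instance gain in the selected set is as large as possible.
# The KL-divergence gain proxy below is an assumption, not the actual method.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def select_for_distillation(teacher_probs, student_probs, budget):
    """Return indices of the `budget` instances with the largest
    teacher-student disagreement; the top-k instances by gain form
    exactly the size-k subset whose minimal gain is maximal."""
    gains = np.array([kl_divergence(t, s)
                      for t, s in zip(teacher_probs, student_probs)])
    return np.argsort(gains)[::-1][:budget]

# Toy soft labels: 100 instances, 5 classes.
rng = np.random.default_rng(0)
teacher = rng.dirichlet(np.ones(5), size=100)
student = rng.dirichlet(np.ones(5), size=100)
print(select_for_distillation(teacher, student, budget=10))
```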


Editors: 唐志皓, 江波
