Abstract
Most existing knowledge-graph-based recommendation methods rely on a single user or item representation and therefore suffer from user-interest interference, incomplete use of information, and data sparsity. This paper proposes a Multi-View Knowledge-Aware recommendation model (MVKA). First, the model applies an attention mechanism on the user-item graph to capture the user's interest representation; an item-entity graph is then introduced, and a graph attention network is designed to extract features and obtain the item's embedding representation. Next, a graph-view contrastive learning objective is constructed between the two views. Finally, summation and concatenation operations produce the final user and item representations, and the user-item matching score is predicted by their inner product. To verify the accuracy and computational efficiency of the model, extensive experiments were carried out on the public MovieLens-1M, Book-crossing and Last FM datasets. Compared with traditional methods and graph neural network models, MVKA achieves clear gains in AUC and F1, indicating that it can effectively exploit diverse relational data to improve knowledge-aware recommendation.
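To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it illustrates attention-based aggregation on a user-item view, graph-attention-style aggregation on an item-entity view, an InfoNCE-style contrastive term between the two views, and the summation/concatenation plus inner-product scoring mentioned above. All names (MVKASketch, AttentiveAggregator, info_nce), the embedding dimension, the temperature, the neighbor-sampling scheme and the exact contrastive pairing are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveAggregator(nn.Module):
    """Aggregate neighbor embeddings with attention weights conditioned on the center node."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, center: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # center: (B, d), neighbors: (B, N, d)
        c = center.unsqueeze(1).expand_as(neighbors)                       # (B, N, d)
        att = torch.softmax(self.score(torch.cat([c, neighbors], dim=-1)), dim=1)
        return (att * neighbors).sum(dim=1)                                # (B, d)


def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Assumed InfoNCE-style contrastive loss: matched rows of z1/z2 are positives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                                             # (B, B)
    return F.cross_entropy(logits, torch.arange(z1.size(0)))


class MVKASketch(nn.Module):
    """Illustrative two-view model: a user-item view and an item-entity (KG) view."""

    def __init__(self, n_users: int, n_items: int, n_entities: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.entity_emb = nn.Embedding(n_entities, dim)
        self.user_view = AttentiveAggregator(dim)   # attention over interacted items
        self.item_view = AttentiveAggregator(dim)   # graph-attention-style aggregation over KG entities

    def forward(self, users, items, user_item_nbrs, item_entity_nbrs):
        u0, i0 = self.user_emb(users), self.item_emb(items)                # (B, d)
        # View 1: user interest representation from the user-item graph.
        u_view = self.user_view(u0, self.item_emb(user_item_nbrs))
        # View 2: item representation from the item-entity graph.
        i_view = self.item_view(i0, self.entity_emb(item_entity_nbrs))
        # Cross-view contrastive term (exact anchor/positive choice is an assumption).
        cl_loss = info_nce(u_view, i_view)
        # Final representations via summation + concatenation, then inner-product scoring.
        u_final = torch.cat([u0 + u_view, u_view], dim=-1)
        i_final = torch.cat([i0 + i_view, i_view], dim=-1)
        score = torch.sigmoid((u_final * i_final).sum(dim=-1))
        return score, cl_loss


# Toy usage with random indices, only to show tensor shapes.
model = MVKASketch(n_users=100, n_items=200, n_entities=300)
users = torch.randint(0, 100, (8,))
items = torch.randint(0, 200, (8,))
ui_nbrs = torch.randint(0, 200, (8, 5))   # 5 sampled interacted items per user
ie_nbrs = torch.randint(0, 300, (8, 5))   # 5 sampled KG entities per item
scores, cl = model(users, items, ui_nbrs, ie_nbrs)
```

In training, the contrastive term would typically be added to the recommendation loss (e.g., binary cross-entropy on the predicted scores) with a weighting coefficient; the abstract does not specify the exact objective, so this pairing and weighting are assumptions.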
Authors
WANG Xiao-xia (王晓霞), MENG Jia-na (孟佳娜), JIANG Feng (江烽), DING Zi-qing (丁梓晴) (School of Computer Science and Engineering, Dalian Minzu University, Dalian 116600, China)
Source
Computer and Modernization (《计算机与现代化》), 2024, No. 2, pp. 100-107 (8 pages)
Funding
National Natural Science Foundation of China (61876031)
Natural Science Foundation of Liaoning Province, General Program (2022-BS-104)
Keywords
knowledge-aware recommendation system
attention mechanism
graph attention network
contrastive learning