Abstract
Collaborative filtering is one of the most successful recommendation techniques to date, but it uses only rating data and ignores the large volume of user reviews that could be exploited. To address this problem, a deep neural network recommendation model based on a probabilistic graph, namely the shared representation model (SRM), is proposed; building on SRM, an improved model based on multi-task learning, the joint learning model with latent factor (LF-JLM), is further proposed. LF-JLM combines a latent factor recommendation algorithm based on matrix factorization with the doc2vec language model. By sharing the vector representations of users, items, and review documents in the mapping layers of doc2vec and the latent factor model, LF-JLM can learn underlying features that are invariant across tasks. The two proposed models were compared on the Amazon dataset against two baselines, the latent factor model and the HFT (Hidden Factors as Topics) model. The experimental results show that LF-JLM effectively extracts the latent semantic information contained in reviews; compared with the latent factor model and HFT, the mean squared error of its rating prediction decreases by 7.85% and 1.19%, respectively.
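The multi-task idea described in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the toy data, the loss weight, the use of the item vector as the review-document vector, and the numerical-gradient training loop are all illustrative assumptions. The point it demonstrates is that the item embedding is shared, so gradients from both the rating task and the review-language task update the same parameters.

```python
import numpy as np

# Minimal sketch (an assumption-laden illustration, not LF-JLM itself):
# a latent-factor rating predictor and a doc2vec-style review model share
# the item embedding V, so both tasks shape the same representation.

rng = np.random.default_rng(0)
n_users, n_items, n_words, k = 4, 3, 10, 5

U = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
V = rng.normal(scale=0.1, size=(n_items, k))  # item factors, shared by both tasks
W = rng.normal(scale=0.1, size=(n_words, k))  # word embeddings

# Toy observations: (user, item, rating) triples and
# (item, word-id-appearing-in-its-review) pairs.
ratings = [(0, 1, 5.0), (1, 2, 3.0), (2, 0, 4.0)]
reviews = [(1, 3), (1, 7), (2, 4), (0, 9)]

def joint_loss(lam=0.5):
    # Task 1: squared error of the latent-factor rating prediction u . v.
    mf = sum((r - U[u] @ V[i]) ** 2 for u, i, r in ratings)
    # Task 2: negative log-likelihood of each review word under a softmax
    # conditioned on the shared item/document vector (doc2vec-style).
    lm = 0.0
    for i, w in reviews:
        scores = W @ V[i]
        lm -= scores[w] - np.log(np.exp(scores).sum())
    return mf + lam * lm

init_loss = joint_loss()

# Numerical-gradient descent keeps the sketch short; a real model would
# backpropagate analytically or use an autodiff framework.
lr, eps = 0.05, 1e-5
for _ in range(100):
    for P in (U, V, W):
        g = np.zeros_like(P)
        for idx in np.ndindex(*P.shape):
            P[idx] += eps
            hi = joint_loss()
            P[idx] -= 2 * eps
            lo = joint_loss()
            P[idx] += eps
            g[idx] = (hi - lo) / (2 * eps)
        P -= lr * g

final_loss = joint_loss()
print(final_loss < init_loss)  # joint training reduces the combined loss
```

Because V appears in both loss terms, minimizing the joint objective forces the item vectors to explain ratings and review text simultaneously, which is the cross-task sharing the abstract attributes to LF-JLM's mapping layer.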
Authors
ZHANG Jiahui; ZHANG Yu (School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China)
Source
2019, No. 1, pp. 79-91 (13 pages)
Journal of Zhejiang Sci-Tech University(Natural Sciences)
Funding
Young Scientific and Technological Talents Cultivation Project of the Zhejiang Provincial Association for Science and Technology (17038001-Q)
Keywords
recommendation system
collaborative filtering
deep neural network
multitask learning
word embedding