Funding: This work was partially supported by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant No. 18KJB510010, the Social Science Foundation of Jiangsu Province of China under Grant No. 19TQD002, and the National Natural Science Foundation of China under Grant No. 61976114.
Abstract: Graph neural networks (GNNs) have shown great power in learning on graphs. However, it is still a challenge for GNNs to model information far away from the source node. The ability to preserve global information can enhance graph representation and hence improve classification precision. In this paper, we propose a new learning framework named G-GNN (Global information for GNN) to address this challenge. First, the global structure and global attribute features of each node are obtained via unsupervised pre-training; these global features preserve the global information associated with the node. Then, using the pre-trained global features and the raw attributes of the graph, a set of parallel kernel GNNs is used to learn different aspects of these heterogeneous features. Any general GNN can be used as a kernel and easily gains the ability to preserve global information, without having to alter its own algorithm. Extensive experiments have shown that state-of-the-art models, e.g., GCN, GAT, GraphSAGE, and APPNP, achieve improvements with G-GNN on three standard evaluation datasets. In particular, we establish new benchmark precision records on Cora (84.31%) and Pubmed (80.95%) when learning on attributed graphs.
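The parallel-kernel idea above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes a single GCN-style propagation layer as the kernel, three feature matrices (raw attributes plus hypothetical pre-trained global structure and global attribute features named `X_struct` and `X_attr`), and a simple weighted sum for combining the parallel kernels' outputs; the paper's actual kernels, depths, and combination weights may differ.

```python
import numpy as np

def normalize_adj(A):
    # symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_kernel(A_norm, X, W):
    # one GCN-style propagation layer with ReLU activation
    return np.maximum(A_norm @ X @ W, 0.0)

def g_gnn_forward(A, X_raw, X_struct, X_attr, Ws, alpha=(1/3, 1/3, 1/3)):
    # Run parallel kernels over the heterogeneous feature sets
    # (raw attributes, pre-trained global structure features,
    # pre-trained global attribute features), then combine the
    # per-kernel class scores by a weighted sum.
    A_norm = normalize_adj(A)
    feats = [X_raw, X_struct, X_attr]
    outs = [gcn_kernel(A_norm, X, W) for X, W in zip(feats, Ws)]
    return sum(a * o for a, o in zip(alpha, outs))
```

Because each kernel only sees one feature set, any off-the-shelf GNN layer could replace `gcn_kernel` without touching the rest of the framework, which is the plug-in property the abstract emphasizes.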
Abstract: Keyword extraction is a research hotspot in natural language processing. Among current keyword extraction algorithms, deep-learning methods rarely account for the characteristics of Chinese and make insufficient use of character-level information, so there is still considerable room to improve keyword extraction from short Chinese texts. To improve keyword extraction from short texts, targeting automatic keyword extraction from paper abstracts, we propose BAST (Bidirectional Long Short-Term Memory and Attention Mechanism Based on Sequence Tagging), a sequence-tagging keyword extraction model that combines a bidirectional long short-term memory network (BiLSTM) with an attention mechanism. First, word-granularity word vectors and character-granularity character vectors are used to represent the input text. Then the BAST model is trained: the BiLSTM and the attention mechanism extract text features, and the label of each token is predicted by classification. Finally, the results of the character-vector model are used to correct the keyword extraction results of the word-vector model. Experimental results on 8,159 paper abstracts show that the BAST model achieves an F1 score of 66.93%, 2.08 percentage points higher than the BiLSTM-CRF (Bidirectional Long Short-Term Memory and Conditional Random Field) algorithm, and it also improves on other traditional keyword extraction algorithms. The novelty of the model lies in combining the extraction results of the character-vector and word-vector models, making full use of the features of Chinese text; it can effectively extract keywords from short texts and further improves extraction quality.
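Two steps of the pipeline above lend themselves to a short sketch: decoding per-token sequence-tagging labels into keyword spans, and merging the word-level and character-level results. This is an illustrative sketch only; it assumes a standard B/I/O tagging scheme, and the correction rule in `correct_with_char_model` is hypothetical, since the abstract states that char-level results correct word-level results but does not spell out the rule.

```python
def decode_bio(tokens, tags):
    # Collect keyword spans from B/I/O sequence labels:
    # "B" starts a keyword, "I" continues it, "O" ends it.
    keywords, cur = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if cur:
                keywords.append("".join(cur))
            cur = [tok]
        elif tag == "I" and cur:
            cur.append(tok)
        else:
            if cur:
                keywords.append("".join(cur))
            cur = []
    if cur:
        keywords.append("".join(cur))
    return keywords

def correct_with_char_model(word_kws, char_kws):
    # Hypothetical correction rule: keep word-level keywords that the
    # char-level model confirms, then append char-level keywords the
    # word-level model missed. The paper's actual rule may differ.
    char_set, word_set = set(char_kws), set(word_kws)
    confirmed = [k for k in word_kws if k in char_set]
    extra = [k for k in char_kws if k not in word_set]
    return confirmed + extra
```

Character-level tagging sidesteps Chinese word-segmentation errors at keyword boundaries, which is why the char-level model is a plausible source of corrections for the word-level model.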