结合残差与自注意力机制的图卷积小样本图像分类网络 (Cited by: 1)

Graph Neural Network Few Shot Image Classification Network Based on Residual and Self-attention Mechanism
Abstract: Few-shot learning was proposed to address the situation in deep learning where the data set available for training a model is small or data annotation is prohibitively expensive. Image classification, an important topic in deep learning research, also suffers from insufficient training data. Researchers have proposed many ways to deal with the lack of training data for image classification models; few-shot image classification with graph neural networks is one of them. To make better use of graph neural networks in few-shot learning, and because the convolution operation in a graph neural network is easily disturbed by incidental factors that leave the model unstable, the graph neural network is improved with a residual network: a residual graph convolutional network (Res-GNN) is designed to improve the stability of the graph neural network. Building on the residual graph convolutional network, a residual graph self-attention network (ResAT-GNN) is designed by incorporating a self-attention mechanism; it mines the relationships between nodes more thoroughly and improves the efficiency of information propagation, thereby raising the classification accuracy of the model. In experiments, the improved residual graph convolutional network trains more efficiently, and its classification accuracy is 1.1% higher than the GNN model on the 5-way 1-shot task and 1.42% higher on the 5-way 5-shot task. On the 5-way 1-shot task, the classification accuracy of the residual graph self-attention network is 1.62% higher than that of the GNN model.
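The following is a minimal, hypothetical PyTorch sketch (not the paper's released code) of the two ideas the abstract describes: a graph-convolution layer wrapped in a residual connection, and a self-attention step over the nodes of an episode. Layer sizes, the episode layout, and all class and variable names (ResidualGraphConv, NodeSelfAttention, etc.) are illustrative assumptions.

# Hypothetical sketch of the two mechanisms described in the abstract:
# a graph convolution with a residual (skip) connection, plus self-attention
# over the node dimension. Not the authors' Res-GNN/ResAT-GNN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualGraphConv(nn.Module):
    """Graph convolution X' = A X W with a residual connection back to X."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Project the input so the skip connection matches the output width.
        self.proj = nn.Identity() if in_dim == out_dim else nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (batch, num_nodes, in_dim)       node features
        # adj: (batch, num_nodes, num_nodes)    normalized adjacency / affinity matrix
        out = torch.bmm(adj, self.linear(x))     # neighborhood aggregation
        return F.leaky_relu(out + self.proj(x))  # residual connection for stability

class NodeSelfAttention(nn.Module):
    """Single-head self-attention over nodes (assumed design for illustration)."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5), dim=-1)
        return x + attn @ v                      # residual around the attention step

if __name__ == "__main__":
    # Illustrative 5-way 1-shot episode: 5 support images + 1 query = 6 graph nodes.
    x = torch.randn(4, 6, 64)                      # (batch, nodes, feature_dim)
    adj = torch.softmax(torch.randn(4, 6, 6), -1)  # row-normalized affinity matrix
    h = ResidualGraphConv(64, 64)(x, adj)
    h = NodeSelfAttention(64)(h)
    print(h.shape)                                 # torch.Size([4, 6, 64])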
Authors: LI Fan, JIA Dongli, YAO Yumin, TU Jun (School of Information and Electrical Engineering, Hebei University of Engineering, Handan, Hebei 056000, China; Hunan Technology Innovation Center of Blockchain, Changsha 410000, China)
Source: Computer Science (《计算机科学》), CSCD-indexed, Peking University core journal, 2023, Supplement S01, pp. 366-370 (5 pages)
Funding: High-tech Industry Science and Technology Innovation Leading Plan of the Hunan Provincial Department of Science and Technology (2020GK2005).
Keywords: Few-shot learning; Image classification; Graph neural network; Residual network; Self-attention mechanism