Image Classification Method Based on Cross-modal Privileged Information Enhancement

Abstract: The performance of image classification algorithms is limited by the diversity of visual information and the influence of background noise. Existing works usually apply cross-modal constraints or heterogeneous feature alignment algorithms to learn highly discriminative visual representations. However, the differences in feature distribution caused by modal heterogeneity limit the effective learning of visual representations. To address this problem, this study proposes an image classification framework based on cross-modal semantic information inference and fusion (CMIF), which introduces the semantic descriptions of images and statistical prior knowledge as privileged information. The privileged information learning paradigm is used during training to guide the mapping of image features from the visual space to the semantic space, and a class-aware information selection (CIS) algorithm is proposed to learn cross-modal enhanced representations of images. To handle the heterogeneous feature differences in representation learning, a partial heterogeneous alignment (PHA) algorithm is used to achieve cross-modal alignment between visual features and the semantic features extracted from the privileged information. To further suppress the interference caused by visual noise in the semantic space, a graph-fusion-based CIS algorithm is proposed to select the key information in the reconstructed semantic representation, thereby forming an effective supplement to the visual prediction. Experiments on the cross-modal classification datasets VireoFood-172 and NUS-WIDE show that CMIF learns robust semantic features of images and, as a general framework, achieves stable performance improvements on both the convolution-based ResNet-50 and the Transformer-based ViT image classification models.
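The abstract names two mechanisms: a partial heterogeneous alignment (PHA) of visual and privileged semantic features, and a class-aware, graph-fusion-based information selection (CIS) that supplements the visual prediction. Below is a minimal PyTorch sketch of how such a training-time scheme could be wired up. The module names, dimensions, the top-k form of the "partial" alignment, and the sigmoid-gated fusion are all illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch (PyTorch) of the two mechanisms named in the abstract:
# (1) PHA: project visual features into the semantic space and penalize the
#     mismatch only on the best-matching dimensions ("partial" alignment);
# (2) CIS: propagate class embeddings over a class graph and gate the resulting
#     semantic scores before fusing them with the visual prediction.
# Dimensions, the top-k alignment form, and the sigmoid gate are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialHeterogeneousAlignment(nn.Module):
    def __init__(self, visual_dim: int, semantic_dim: int, k: int = 64):
        super().__init__()
        self.proj = nn.Linear(visual_dim, semantic_dim)  # visual -> semantic space
        self.k = k                                       # number of aligned dimensions

    def forward(self, visual_feat, semantic_feat):
        mapped = self.proj(visual_feat)                  # (B, D_sem)
        diff = (mapped - semantic_feat).abs()
        # align only the k dimensions with the smallest discrepancy per sample
        partial = diff.topk(self.k, dim=1, largest=False).values
        return mapped, partial.mean()


class GraphFusionSelection(nn.Module):
    def __init__(self, semantic_dim: int, num_classes: int, adjacency: torch.Tensor):
        super().__init__()
        self.register_buffer("adj", adjacency)           # (C, C) row-normalized class graph
        self.gcn = nn.Linear(semantic_dim, semantic_dim) # one graph-convolution layer
        self.gate = nn.Linear(semantic_dim, 1)           # sample-wise selection gate

    def forward(self, class_emb, semantic_feat, visual_logits):
        refined = F.relu(self.gcn(self.adj @ class_emb)) # (C, D_sem) refined class nodes
        scores = semantic_feat @ refined.t()             # (B, C) semantic class scores
        gate = torch.sigmoid(self.gate(semantic_feat))   # (B, 1) how much to trust them
        return visual_logits + gate * scores             # fused prediction


# Toy forward pass with random tensors standing in for backbone / text-encoder outputs.
B, Dv, Ds, C = 4, 2048, 300, 10
pha = PartialHeterogeneousAlignment(Dv, Ds)
cis = GraphFusionSelection(Ds, C, torch.softmax(torch.randn(C, C), dim=1))

visual = torch.randn(B, Dv)        # e.g. ResNet-50 or ViT features
semantic = torch.randn(B, Ds)      # features from privileged text descriptions (training only)
class_emb = torch.randn(C, Ds)     # per-class semantic embeddings
visual_logits = torch.randn(B, C)  # classifier output of the visual branch

mapped, align_loss = pha(visual, semantic)
logits = cis(class_emb, mapped, visual_logits)
loss = F.cross_entropy(logits, torch.randint(0, C, (B,))) + align_loss
```

Consistent with the privileged-information setting described in the abstract, the semantic descriptions would only be available during training; at test time only the visual branch and its mapping into the semantic space would be used.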
Authors: LI Xiang-Xian; ZHENG Yu-Ze; MA Hao-Kai; QI Zhuang; YAN Xiao-Shuo; MENG Xiang-Xu; MENG Lei (School of Software Engineering, Shandong University, Jinan 250101, China)
Source: Journal of Software (软件学报), indexed in EI, CSCD, and the Peking University Core Journal list, 2024, No. 12, pp. 5636-5652 (17 pages)
Funding: Shandong Provincial Fund for Excellent Young Scientists (Overseas) (2022HWYQ-048); Jinan Science and Technology Bureau "20 New Policies for Universities" Innovation Team Introduction Program (2021GXRC073); National Key Research and Development Program of China (2021YFC3300203).
Keywords: image classification; cross-modal learning; privileged information; feature alignment; graph convolution network (GCN)