
Infrared and Visible Image Fusion Based on Multiclassification Adversarial Mechanism in Feature Space (Cited by: 1)
Abstract: To break the performance bottleneck caused by traditional fusion rules, an infrared and visible image fusion network based on a multiclassification adversarial mechanism in the feature space is proposed. Compared with existing methods, the proposed method has a more reasonable fusion rule and better performance. First, an autoencoder with an attention mechanism is trained to perform feature extraction and image reconstruction. Then, a generative adversarial network (GAN) is adopted to learn the fusion rule in the feature space produced by the trained encoder. Specifically, a feature fusion network is designed as the generator to fuse the features extracted from the source images, and a multi-classifier serves as the discriminator. This multiclassification adversarial learning drives the fused features to approximate the probability distributions of both the infrared and visible modalities simultaneously, thereby preserving the most salient characteristics of the source images. Finally, the fused image is reconstructed from the output of the feature fusion network by the trained decoder. Experimental results show that, compared with state-of-the-art infrared and visible image fusion methods including GTF, MDLatLRR, DenseFuse, FusionGAN, and U2Fusion, the proposed method achieves better subjective quality, obtains twice as many best objective metric scores as U2Fusion, and fuses images more than 5 times faster than the other comparative methods.
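To make the adversarial setup described in the abstract more concrete, the sketch below gives one plausible PyTorch formulation of the multi-class game in feature space: a fusion generator that merges encoder features of the two source images, and a three-way discriminator that classifies a feature map as infrared, visible, or fused. This is a minimal sketch under assumed design choices; the module names (FusionGenerator, MultiClassDiscriminator), layer sizes, class conventions, and loss weighting are illustrative and do not reproduce the authors' published implementation.

```python
# Minimal sketch of the multiclassification adversarial mechanism in feature
# space. All architectural details here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionGenerator(nn.Module):
    """Fuses encoder features of the infrared and visible images (assumed 64 channels each)."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, feat_ir, feat_vis):
        return self.net(torch.cat([feat_ir, feat_vis], dim=1))

class MultiClassDiscriminator(nn.Module):
    """Classifies a feature map as infrared (class 0), visible (class 1), or fused (class 2)."""
    def __init__(self, ch=64, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, n_classes),
        )

    def forward(self, feat):
        return self.net(feat)  # raw logits over the three classes

def discriminator_loss(D, feat_ir, feat_vis, feat_fused):
    # The discriminator learns to tell the three feature sources apart.
    logits = torch.cat([D(feat_ir), D(feat_vis), D(feat_fused.detach())])
    labels = torch.cat([
        torch.zeros(feat_ir.size(0), dtype=torch.long),           # infrared
        torch.ones(feat_vis.size(0), dtype=torch.long),           # visible
        torch.full((feat_fused.size(0),), 2, dtype=torch.long),   # fused
    ]).to(logits.device)
    return F.cross_entropy(logits, labels)

def generator_loss(D, feat_fused):
    # The generator pushes the fused features toward BOTH source distributions:
    # it is penalized unless the discriminator sees them as infrared AND as
    # visible, which is the multiclassification adversarial objective.
    logits = D(feat_fused)
    ir_labels = torch.zeros(feat_fused.size(0), dtype=torch.long, device=logits.device)
    vis_labels = torch.ones(feat_fused.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, ir_labels) + F.cross_entropy(logits, vis_labels)
```

In this reading, the generator and discriminator are trained alternately on features from the pretrained encoder, and the trained decoder then reconstructs the fused image from the generator's output; the exact loss balance and training schedule are described in the full paper, not here.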
Authors: Zhang Hao, Ma Jiayi, Fan Fan, Huang Jun, Ma Yong (Electronic Information School, Wuhan University, Wuhan 430072)
Source: Journal of Computer Research and Development (《计算机研究与发展》; EI, CSCD, Peking University Core), 2023, No. 3, pp. 690-704 (15 pages)
Funding: National Natural Science Foundation of China (62075169, 62003247, 62061160370); Natural Science Foundation of Hubei Province (2019CFA037, 2020BAB113)
Keywords: image fusion; fusion rule; deep learning; autoencoder; generative adversarial network