Abstract
Objective The goal of infrared and visible image fusion is to merge the complementary information of infrared and visible images and to enhance the detailed scene information contained in the source images. However, existing deep learning methods usually define the features of the source images to be preserved manually, which reduces the saliency of thermal targets in the fused image. In addition, the diversity and poor interpretability of features limit the development of fusion rules, and existing fusion rules struggle to fully preserve the features of the source images. To address these two problems, this paper proposes an infrared and visible image fusion algorithm based on unique information decomposition and quality guidance. Method The proposed algorithm is built on unique information decomposition and a quality-guided fusion strategy. A neural network-based unique information decomposition is designed to objectively decompose the source images into common information and unique information, and a specific fusion strategy is applied to each of the two decomposed parts. A weight encoder is designed to learn the quality-guided fusion strategy, in which metrics that measure the quality of the fused image are used to improve the performance of the fusion strategy; the weight encoder generates the corresponding weights from the extracted unique information. Result Experiments on the public RoadScene dataset compare the proposed algorithm with six state-of-the-art infrared and visible image fusion algorithms. In addition, the quality-guided fusion strategy is compared with four common fusion strategies. Qualitative results show that our algorithm produces fused images with more salient thermal targets, richer scene information, and a larger amount of information. In terms of entropy, standard deviation, sum of correlations of differences, mutual information, and correlation coefficient, our method improves on the best results of the compared algorithms by 0.508%, 7.347%, 14.849%, 9.927%, and 1.281%, respectively. Conclusion Compared with state-of-the-art infrared and visible image fusion algorithms and existing fusion strategies, the proposed algorithm, based on unique information decomposition and quality guidance, yields fusion results with richer scene information and stronger contrast, and its visual effect better matches the characteristics of the human visual system.
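The entropy (EN) metric reported in the Result section measures the amount of information in the fused image. The following is an illustrative sketch of how such a metric is commonly computed for an 8-bit grayscale image; it is not the authors' evaluation code, and the function name is a placeholder.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (in bits) of an 8-bit grayscale image.

    A higher value indicates a more diverse intensity distribution,
    i.e. more information content in the image.
    """
    # Histogram over the 256 possible intensity levels.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()          # normalize to a probability distribution
    p = p[p > 0]                   # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

For example, a constant image has entropy 0, while an image whose pixels are split evenly between two intensity levels has entropy 1 bit.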
Objective Infrared and visible image fusion is essential to computer vision and image processing. To strengthen scene recognition from multisource images, the complementary information captured by infrared and visible sensors needs to be fused. The fused image serves human perception-oriented visual tasks such as video surveillance, target recognition, and scene understanding. However, existing fusion methods are usually designed by manually selecting the characteristics to be preserved. These methods can be roughly divided into two categories: traditional fusion methods and deep learning-based fusion methods. Traditional methods must manually design transformation methods to comprehensively characterize and decompose the source images, and fusion strategies are then manually designed to fuse the decomposed subparts. These hand-crafted decomposition methods have become increasingly complex, which reduces fusion efficiency. Among the deep learning-based methods, some define the unique characteristics of the source images through human observation and expect the fused image to preserve these characteristics as much as possible. However, it is difficult and often unsuitable to identify the vital information through only one or a few characteristics. Other methods focus on preserving high structural similarity between the fused image and the source images, which reduces the saliency of thermal targets in the fusion result and is not conducive to the rapid localization and capture of thermal targets by the human visual system. Our method is designed to address these two issues. We develop a new deep learning-based decomposition method for infrared and visible image fusion. Besides, we propose a deep learning-based, quality-guided fusion strategy to fuse the decomposed parts.
Method Our infrared and visible image fusion method is based on unique information decomposition and a quality-guided fusion strategy. First, we design a neural network-based decomposition that objectively separates each source image into common information and unique information, and a specific fusion strategy is applied to each of the two decomposed parts. Then, we design a weight encoder to learn the quality-guided fusion strategy: metrics that measure the quality of the fused image are used to improve the performance of the fusion strategy, and the weight encoder generates the corresponding fusion weights from the extracted unique information.
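The decomposition-and-fusion pipeline described in the Method section can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the module structure, channel sizes, and the averaging rule for the common information are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class WeightEncoder(nn.Module):
    """Maps a unique-information feature map to a scalar fusion weight.

    Hypothetical architecture: a small conv net followed by global
    pooling, producing one weight per image in (0, 1).
    """
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global spatial pooling
            nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, feat):
        return torch.sigmoid(self.net(feat))   # weight in (0, 1)

def fuse(common_ir, common_vis, unique_ir, unique_vis, encoder):
    """Fuse decomposed features with two different strategies.

    Common information is merged by simple averaging (an assumption);
    unique information is merged with quality-guided weights produced
    by the weight encoder from each unique-information feature map.
    """
    w_ir = encoder(unique_ir).view(-1, 1, 1, 1)
    w_vis = encoder(unique_vis).view(-1, 1, 1, 1)
    w_sum = w_ir + w_vis + 1e-8                 # normalize the weights
    fused_common = 0.5 * (common_ir + common_vis)
    fused_unique = (w_ir * unique_ir + w_vis * unique_vis) / w_sum
    return fused_common + fused_unique
```

In the paper's scheme, the weight encoder is trained so that its weights improve the quality metrics of the fused image; the sketch above only shows how such weights would be applied at fusion time.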
Authors
徐涵
梅晓光
樊凡
马泳
马佳义
Xu Han; Mei Xiaoguang; Fan Fan; Ma Yong; Ma Jiayi (Electronic Information School, Wuhan University, Wuhan 430072, China)
Source
《中国图象图形学报》
CSCD
Peking University Core Journal list (北大核心)
2022, No. 11, pp. 3316-3330 (15 pages)
Journal of Image and Graphics
Funding
National Natural Science Foundation of China (61773295)
Natural Science Foundation of Hubei Province (2019CFA037)
Keywords
image fusion
unique information decomposition
quality guidance
infrared and visible images
deep learning