Abstract
A visible-infrared image matching method based on a generative adversarial model was proposed to address the problems of large modality differences, high matching difficulty, and poor robustness in existing multi-sensor image matching, by combining the style-transfer idea of generative adversarial networks (GAN) with traditional local feature extraction. Following the GAN style-transfer idea, a loss function calculation path was added and a new loss function was constructed to improve the model's transfer effect on multi-sensor images. The SIFT algorithm was used to extract feature information from the transformed homologous images and to determine the positions and scales of the candidate matching points. The feature matching and similarity measurement between the images to be registered were then completed indirectly according to the matching strategy. Experiments were conducted on a real-scene aerial dataset. Results show that the proposed method can effectively handle multi-modal data, reduce the difficulty of multi-sensor image matching, and provide a new solution for multi-modal image matching.
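The overall pipeline described above can be pictured with a minimal sketch (not the authors' implementation): an infrared image is first translated into the visible-light domain with a pre-trained GAN generator, SIFT keypoints and descriptors are then extracted from the two now-homologous images, and matches are selected with a standard ratio test. The generator file name ir2vis_generator.pt, the translate_ir_to_visible helper, and the 0.75 ratio threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the visible-infrared matching pipeline (assumptions noted above).
import cv2
import numpy as np
import torch

def translate_ir_to_visible(ir_gray, generator):
    """Run a (hypothetical) pre-trained GAN generator to move the infrared image into the visible domain."""
    x = torch.from_numpy(ir_gray).float().div(255.0)[None, None]  # 1x1xHxW tensor in [0, 1]
    with torch.no_grad():
        y = generator(x)                                           # 1x1xHxW fake-visible image
    return (y.squeeze().clamp(0, 1).cpu().numpy() * 255).astype(np.uint8)

def match_visible_infrared(vis_path, ir_path, generator_path="ir2vis_generator.pt"):
    vis = cv2.imread(vis_path, cv2.IMREAD_GRAYSCALE)
    ir = cv2.imread(ir_path, cv2.IMREAD_GRAYSCALE)
    generator = torch.jit.load(generator_path).eval()              # assumed TorchScript generator

    fake_vis = translate_ir_to_visible(ir, generator)              # step 1: GAN style transfer

    sift = cv2.SIFT_create()                                       # step 2: SIFT keypoints/descriptors
    kp_vis, des_vis = sift.detectAndCompute(vis, None)
    kp_ir, des_ir = sift.detectAndCompute(fake_vis, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)                           # step 3: ratio-test matching
    good = []
    for pair in matcher.knnMatch(des_vis, des_ir, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return kp_vis, kp_ir, good
```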
Authors
陈彤
郭剑锋
韩心中
谢学立
席建祥
CHEN Tong; GUO Jian-feng; HAN Xin-zhong; XIE Xue-li; XI Jian-xiang (Department of Missile Engineering, Rocket Force University of Engineering, Xi'an 710000, China; The 96901 Unit of the Chinese People's Liberation Army, Beijing 100094, China)
Source
《浙江大学学报(工学版)》
EI
CAS
CSCD
Peking University Core Journals (北大核心)
2022, Issue 1, pp. 63-74 (12 pages)
Journal of Zhejiang University: Engineering Science
Funding
National Natural Science Foundation of China (62176263, 62103434)
Science Fund for Distinguished Young Scholars of Shaanxi Province (2021JC-35)
China Postdoctoral Science Foundation Special Funded Project (2021T140790).
Keywords
aerial image processing
multi-sensor image matching
deep learning
generative adversarial network (GAN)
style transfer