
Visible and infrared image matching method based on generative adversarial model (cited 5 times)
Abstract: To address the large modality gap, high matching difficulty, and poor robustness of existing multi-sensor (visible-infrared) image matching, a visible-infrared image matching method based on a generative adversarial model is proposed, combining the style-transfer idea of generative adversarial networks (GAN) with traditional local feature extraction. An additional loss-computation path is added and a new loss function is constructed to improve the model's transfer quality on multi-sensor images. The SIFT algorithm is then applied to the transformed, now-homologous images to extract feature information and determine the position and scale of the candidate matching points. Feature matching and similarity measurement between the images to be registered are completed indirectly according to a matching strategy. Experiments on a real aerial dataset show that the proposed method can effectively handle multi-modal data, reduce the difficulty of multi-sensor image matching, and provide a new approach to the multi-modal image matching problem.
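To illustrate the matching stage described in the abstract, the following minimal Python/OpenCV sketch applies SIFT feature extraction, Lowe's ratio test, and RANSAC-based geometric verification to an image pair in which the infrared image has already been translated into the visible style by a GAN. The translation model itself is not shown; the file names and thresholds are assumptions for illustration, not the authors' exact implementation.

    # Hypothetical sketch of the matching stage: SIFT on a GAN-translated pair,
    # ratio-test matching, and RANSAC homography estimation.
    import cv2
    import numpy as np

    # visible.png: real visible-light image; ir_translated.png: infrared image
    # after GAN style transfer into the visible domain (translation not shown).
    img_vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
    img_ir2vis = cv2.imread("ir_translated.png", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints (position and scale) and compute SIFT descriptors.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_vis, None)
    kp2, des2 = sift.detectAndCompute(img_ir2vis, None)

    # Brute-force matching with a ratio test to discard ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]

    # Robustly estimate the geometric transform with RANSAC; the inliers are
    # the final correspondences between the visible and translated images.
    if len(good) >= 4:
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        print(f"{int(mask.sum())} inliers out of {len(good)} ratio-test matches")

Because both inputs are in the same (visible) style after the GAN translation, standard SIFT descriptors become directly comparable, which is the core idea behind matching the multi-sensor pair indirectly.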
Authors: CHEN Tong, GUO Jian-feng, HAN Xin-zhong, XIE Xue-li, XI Jian-xiang (Department of Missile Engineering, Rocket Force University of Engineering, Xi'an 710000, China; The 96901 Unit of the Chinese People's Liberation Army, Beijing 100094, China)
Source: Journal of Zhejiang University: Engineering Science (EI, CAS, CSCD, Peking University Core), 2022, No. 1, pp. 63-74 (12 pages)
Funding: National Natural Science Foundation of China (62176263, 62103434); Science Fund for Distinguished Young Scholars of Shaanxi Province (2021JC-35); China Postdoctoral Science Foundation Special Funded Project (2021T140790)
Keywords: aerial image processing; multi-sensor image matching; deep learning; generative adversarial network (GAN); style transfer
