Edge-guided GAN: a depth image inpainting approach guided by edge information
Abstract  Objective: Most existing depth image inpainting approaches fall into two groups: color-image-guided approaches and single-depth-image approaches. Color-image-guided approaches repair a depth image using information from its color-image ground truth or from its previous or next frame; when such information is unavailable, these approaches fail. Single-depth-image approaches can repair depth images with only sparse missing data, but they cannot repair depth images that contain holes (large regions of missing data). To address these problems, this paper applies the generative adversarial network (GAN) to depth image inpainting and proposes a GAN-based single-depth-image inpainting method named Edge-guided GAN. Method: First, the Canny algorithm is used to obtain the edge image of the depth image to be repaired, and the two single-channel images (the depth image to be repaired and its edge image) are merged into one 2-channel input. Second, a high-performance generator, discriminator, and loss function are designed for Edge-guided GAN; the 2-channel data are fed to the generator to train it, and the depth image produced by the generator (fake) together with the ground-truth depth image are fed to the discriminator to train it. The trained model is then used to complete depth image inpainting. Result: Edge-guided GAN is compared on the ApolloScape dataset with four commonly used GANs and with Edge-guided GAN without edge information. With a 256×256-pixel input and a 32×32-pixel mask, the peak signal-to-noise ratio (PSNR) of Edge-guided GAN is 15.76% higher than that of the second-best model; with a 64×64-pixel mask, its PSNR is 18.64% higher than that of the second-best model. Conclusion: By taking the edge information of the depth image to be repaired as a constraint on the inpainting, Edge-guided GAN effectively extracts the features of the depth image and substantially improves inpainting accuracy.

Objective: Depth images play an important role in robotics, 3D reconstruction, and autonomous driving. However, depth sensors, such as Microsoft Kinect and Intel RealSense, produce depth images with missing data. In some fields, such as those using high-dimension maps for autonomous driving (including RGB images and depth images), objects not belonging to these maps (people, cars, etc.) should be removed. The corresponding areas are blank (i.e., missing data) after removing objects from the depth image. Therefore, depth images with missing data should be repaired to accomplish some 3D tasks. Depth image inpainting approaches can be divided into two groups: image-guided depth image inpainting and single-depth image inpainting approaches. Image-guided depth image inpainting approaches repair depth images through information from the ground truth of their color images or from their previous or next frames. Without this information, these approaches are useless. Single-depth image inpainting approaches can repair depth images without any information from other color images. Currently, only a few studies have tackled this issue by using and improving low-rank components in depth images. Current single-depth image inpainting methods only repair depth images with sparse missing data rather than small or large holes. Generative adversarial network (GAN)-based approaches have been widely researched for RGB image inpainting and have achieved state-of-the-art (SOTA) results. However, to the best of our knowledge, no GAN-based approach has been reported for depth image inpainting. The reasons are as follows. On the one hand, a depth image records the distances between different objects and lacks texture information; some researchers have expressed concerns about whether convolutional neural networks (CNNs) can extract depth image features well because of this characteristic. On the other hand, no public depth image datasets are available for training CNN-based approaches. For the first concern, CNNs have been verified to be able to extract features
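To make the pipeline named in the abstract concrete, the Python sketch below illustrates the input-preparation and evaluation steps it describes: extracting a Canny edge map from the depth image to be repaired, merging the two single-channel images into a 2-channel generator input, and computing the PSNR used for evaluation. This is a minimal illustration, not the authors' released code; the Canny thresholds, the normalization to 8 bits, and the function names are assumptions, while the 256×256 input size and 32×32 mask size follow the abstract.

    # Minimal sketch of the Edge-guided GAN input preparation and PSNR metric
    # described in the abstract (assumed helper names, not the authors' code).
    import cv2
    import numpy as np


    def build_generator_input(depth: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Stack the masked depth image and its Canny edge map into a 2-channel array.

        depth: single-channel depth image, shape (H, W).
        mask:  binary mask, 1 where data is missing (the hole), shape (H, W).
        """
        # Normalize depth to 8-bit so Canny (which expects uint8) can be applied.
        depth_f = depth.astype(np.float32)
        depth_8u = cv2.normalize(depth_f, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # Zero out the hole region: this is the "depth image to be repaired".
        depth_8u[mask > 0] = 0

        # Edge map of the corrupted depth image (thresholds 50/150 are assumptions).
        edges = cv2.Canny(depth_8u, 50, 150)

        # Merge the two single-channel images into one 2-channel input in [0, 1].
        two_channel = np.stack([depth_8u, edges], axis=-1).astype(np.float32) / 255.0
        return two_channel  # shape (H, W, 2)


    def psnr(pred: np.ndarray, gt: np.ndarray, data_range: float = 255.0) -> float:
        """Peak signal-to-noise ratio between repaired depth image and ground truth."""
        mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10((data_range ** 2) / mse)


    if __name__ == "__main__":
        # Toy example with the sizes reported in the paper (256x256 input, 32x32 hole);
        # a real experiment would use ApolloScape depth images instead of random data.
        depth = (np.random.rand(256, 256) * 65535).astype(np.uint16)
        mask = np.zeros((256, 256), np.uint8)
        mask[112:144, 112:144] = 1  # 32x32 missing region
        gen_input = build_generator_input(depth, mask)
        print(gen_input.shape)  # (256, 256, 2)

Feeding the edge map as a second channel is simply the most direct reading of the "merge the two single-channel images into a 2-channel input" step; the GAN training itself (generator, discriminator, and loss design) is not sketched here because the abstract does not give those architectural details.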
Authors: 刘坤华 王雪辉 谢玉婷 胡坚耀 Liu Kunhua; Wang Xuehui; Xie Yuting; Hu Jianyao (Institute of Unmanned Systems, School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China; The Fifth Electronics Research Institute of Ministry of Industry and Information Technology, Guangzhou 510610, China)
Source: Journal of Image and Graphics (《中国图象图形学报》), CSCD and Peking University Core journal, 2021, No. 1, pp. 186-197 (12 pages)
Funding: National Key Research and Development Program of China (2018YFB1305002); National Natural Science Foundation of China (62006256); Fundamental Research Funds for the Central Universities (67000-31610134); Guangzhou Key Research and Development Program (202007050004)
Keywords: generative adversarial network (GAN); depth image inpainting; Edge-guided GAN; edge information; ApolloScape dataset