Abstract
Saliency detection aims to highlight the regions of an image that subjectively attract human attention. Traditional saliency detection methods mostly distinguish the object of interest from the background at a single scale, so they cannot effectively capture local detail at multiple resolutions. To address this, a deep network model based on multi-scale deconvolution is proposed. First, the feature maps of each layer, together with their contrast features, are deconvolved at multiple scales: the kernels of the deconvolution layers reconstruct the shape of the input objects, and learning detail features on feature maps of several resolutions reduces information loss and preserves the details of feature maps of different sizes. Then, the deconvolved features from all scales are fused to form multi-level local information. Finally, this local information is fused with the global information extracted by a VGG16 network, and the saliency value of each pixel is computed to obtain the final saliency map. Experimental results show that the multi-scale deconvolution architecture performs well. Compared with traditional methods, it strengthens the contrast between the salient object and the background and preserves detail; compared with recent deep learning methods, it detects clearer and more accurate regions, reduces information loss to some extent, and recovers more detail, effectively capturing salient objects at various resolutions. Moreover, the independence of the deconvolution layers significantly accelerates the proposed algorithm.
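The core operation the abstract relies on, upsampling a low-resolution feature map through a deconvolution (transposed convolution) layer, can be sketched as follows. This is a minimal illustration only: the kernel values, map sizes, and stride are assumptions for demonstration, not the paper's actual parameters.

```python
# Minimal transposed convolution ("deconvolution"): each input activation
# stamps a stride-spaced copy of the kernel onto a larger output map,
# upsampling a low-resolution feature map back toward input resolution.
def deconv2d(x, kernel, stride=2):
    h, w = len(x), len(x[0])
    kh, kw = len(kernel), len(kernel[0])
    out_h = (h - 1) * stride + kh
    out_w = (w - 1) * stride + kw
    y = [[0.0] * out_w for _ in range(out_h)]
    for i in range(h):
        for j in range(w):
            for a in range(kh):
                for b in range(kw):
                    y[i * stride + a][j * stride + b] += x[i][j] * kernel[a][b]
    return y

# A 2x2 feature map upsampled with a 3x3 kernel and stride 2 -> 5x5 output,
# since out_size = (in_size - 1) * stride + kernel_size.
feat = [[1.0, 2.0],
        [3.0, 4.0]]
k = [[0.0, 0.5, 0.0],
     [0.5, 1.0, 0.5],
     [0.0, 0.5, 0.0]]
up = deconv2d(feat, k)
```

Fusing the deconvolved maps from several scales, as in the paper's fusion step, then amounts to an elementwise combination of upsampled maps of matching size.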
Authors
WEN Jing (温静); LI Yu-meng (李雨萌) — School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China
Source
Computer Science (《计算机科学》)
Indexed in: CSCD; Peking University Core Journals (北大核心)
2020, No. 11, pp. 179-185 (7 pages)
Funding
National Natural Science Foundation of China, Young Scientists Fund (61703252)
Shanxi Province "1331 Project"
Shanxi Applied Basic Research Program (201701D121053)
Keywords
Saliency detection
Deep learning
Multi-scale features
Deconvolution
Multi-resolution