Journal Article

基于多尺度反卷积深度学习的显著性检测 (Cited by: 1)

Salient Object Detection Based on Multi-scale Deconvolution Deep Learning
Abstract: Most traditional saliency detection methods distinguish objects of interest from the background at a single scale, and therefore cannot effectively capture local detail information at multiple resolutions. To address this, a multi-scale deconvolution deep learning network model is proposed. First, the features of each layer and their corresponding contrast features are deconvolved at multiple scales: the convolution kernels in the deconvolution layers are used to reconstruct the shape of the input object, and deconvolution networks learn detail features on feature maps of several resolutions, reducing information loss and preserving the detail information of feature maps at different sizes. Then, the deconvolution features at each scale are fused to form multi-level local information. Finally, after fusion with the global information extracted by a VGG16 network, the saliency value of each pixel is computed to obtain the final saliency result. Experimental results show that the multi-scale deconvolution structure performs well: compared with traditional methods, it strengthens the contrast between salient objects and the background while preserving detail features; compared with recent deep learning methods, it detects clearer and more accurate regions, reduces information loss to some extent, restores more detail, and effectively captures salient objects at various resolutions. Moreover, the independence of the deconvolution layers markedly improves the running speed of the proposed algorithm.
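The core operation the abstract describes, upsampling coarse feature maps by deconvolution (transposed convolution) and fusing them across scales, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the kernel values, feature-map sizes, and simple averaging fusion are illustrative assumptions, and the paper's learned fusion weights and VGG16 global branch are omitted.

```python
import numpy as np

def deconv2d(x, k, stride=2):
    """Naive 2D transposed convolution (deconvolution): each input pixel
    "stamps" a scaled copy of the kernel onto the output grid, so a
    feature map of size H x W is upsampled to stride*(H-1)+kh."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((stride * (H - 1) + kh, stride * (W - 1) + kw))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * k
    return out

rng = np.random.default_rng(0)
f8  = rng.random((8, 8))    # coarse feature map (e.g. a deep layer)
f16 = rng.random((16, 16))  # finer feature map (e.g. a shallower layer)
k = np.full((2, 2), 0.25)   # 2x2 kernel, stride 2 -> exact 2x upsampling

# Deconvolve each scale up to a common 32x32 resolution.
u8  = deconv2d(deconv2d(f8, k), k)  # 8 -> 16 -> 32
u16 = deconv2d(f16, k)              # 16 -> 32

# Stand-in for the fusion step: average the per-scale deconvolution
# features, then min-max normalize to a per-pixel saliency value.
fused = 0.5 * (u8 + u16)
saliency = (fused - fused.min()) / (fused.max() - fused.min())
print(saliency.shape)  # (32, 32)
```

Because each `deconv2d` call depends only on its own input map, the per-scale branches are independent, which is the property the abstract credits for the speedup.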
Authors: WEN Jing; LI Yu-meng (School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China)
Source: Computer Science (《计算机科学》), CSCD, PKU Core Journal, 2020, No. 11, pp. 179-185 (7 pages)
Funding: National Natural Science Foundation of China, Young Scientists Fund (61703252); Shanxi Province "1331 Project"; Shanxi Applied Basic Research Program (201701D121053)
Keywords: Saliency detection; Deep learning; Multi-scale features; Deconvolution; Multi-resolution

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部