
Vehicle Target Detection Based on a Deep Autoencoder Network with TDA (cited by: 4)
Abstract: To address the poor real-time performance and low accuracy of traffic-surveillance vehicle detection in snowy conditions, a vehicle target detection method that fuses topological data analysis (TDA) with a deep autoencoder network is proposed. The method first converts each surveillance video frame into point cloud data; it then segments the cloud to extract the vehicle target's points and processes them with topological data analysis. Finally, the quantized simplicial-complex representation of the vehicle target data obtained from the TDA is used as the input sample to train the deep autoencoder network. The last two hidden layers of the stacked autoencoder structure serve as outputs to build the vehicle target feature model, which is passed through a fully connected layer into a Softmax classification layer, so that the network can classify targets and background in snowy scenes more quickly and accurately. Experimental results show that the method effectively detects vehicle targets in complex snowy environments and improves both accuracy and speed.
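The pipeline described in the abstract (segmented vehicle point cloud → TDA features → stacked autoencoder → Softmax classification) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the Vietoris-Rips construction from the gudhi library, the histogram quantization of persistence lifetimes, the layer sizes, and the concatenation of the last two hidden layers are all assumptions made here, since the paper's exact construction and hyperparameters are not given in this record. The image-to-point-cloud conversion and region-growing segmentation are assumed to have already produced the input cloud.

```python
# Minimal sketch (assumptions noted above): TDA features from a segmented
# vehicle point cloud, fed to a stacked-autoencoder-style classifier.
import numpy as np
import gudhi                       # TDA library used here as a stand-in
import torch
import torch.nn as nn

def tda_features(points, max_edge=2.0, bins=32):
    """Quantize 0- and 1-dimensional persistence of a point cloud into a fixed-length vector."""
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge)
    st = rips.create_simplex_tree(max_dimension=2)
    st.persistence()               # compute persistent homology
    feats = []
    for dim in (0, 1):
        intervals = np.asarray(st.persistence_intervals_in_dimension(dim)).reshape(-1, 2)
        finite = intervals[np.isfinite(intervals).all(axis=1)]
        lifetimes = finite[:, 1] - finite[:, 0] if len(finite) else np.zeros(1)
        hist, _ = np.histogram(lifetimes, bins=bins, range=(0.0, max_edge))
        feats.append(hist.astype(np.float32))
    return np.concatenate(feats)   # length 2 * bins

class StackedAEClassifier(nn.Module):
    """Encoder stack; the last two hidden layers feed the Softmax head (an assumed reading of the abstract)."""
    def __init__(self, in_dim=64, hidden=(48, 32, 16), n_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(in_dim, hidden[0]), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(hidden[0], hidden[1]), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Linear(hidden[1], hidden[2]), nn.ReLU())
        # Concatenate the last two hidden layers as the vehicle feature model.
        self.head = nn.Linear(hidden[1] + hidden[2], n_classes)

    def forward(self, x):
        h1 = self.enc1(x)
        h2 = self.enc2(h1)
        h3 = self.enc3(h2)
        return self.head(torch.cat([h2, h3], dim=1))   # logits; Softmax applied by the loss or at inference

# Usage: classify one segmented point cloud as vehicle / background.
cloud = np.random.rand(200, 3)                         # placeholder point cloud
x = torch.from_numpy(tda_features(cloud)).unsqueeze(0)
model = StackedAEClassifier(in_dim=x.shape[1])
probs = torch.softmax(model(x), dim=1)
```

In practice the encoder layers would first be pretrained layer-wise as autoencoders and the whole stack then fine-tuned with a cross-entropy loss, as is usual for stacked autoencoders; those training details are omitted from the sketch.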
Authors: REN Yajing; ZHANG Hongli (School of Electrical Engineering, Xinjiang University, Urumqi 830000, China)
Source: Information and Control (《信息与控制》), 2019, No. 5, pp. 627-633 (7 pages); indexed in CSCD and the Peking University Core Journal list.
Funding: National Natural Science Foundation of China (51767022); China New Energy Vehicle Product Testing Conditions Research and Development Fund.
Keywords: traffic monitoring; target detection; point cloud region-growing segmentation; topological data analysis; hierarchical clustering; deep autoencoder network
