New Approach to Robot Localization in Real-Time Based on Visual Manifold Regularization
(Original title: 基于视觉流形正则化的机器人实时定位新方法; cited by 3)

Abstract: This paper presents a real-time robot localization algorithm based on kernel principal component analysis (PCA) regularization. Offline training is formulated as semi-supervised learning: first, sparse target-area features extracted from the distorted images captured along the robot's preset motion path are taken as the observation data, with the coordinates of a calibrated subset serving as labels; then, using the low-dimensional visual manifold revealed by kernel PCA as a regularization constraint, the coordinates of the unlabeled data are estimated by least squares. In the online localization stage, harmonic functions are used to estimate the coordinates of newly acquired data, so that online robot localization is achieved with an uncalibrated monocular vision sensor. Experimental results show that, compared with conventional localization methods, the proposed algorithm has low computational complexity, high localization accuracy, and strong real-time performance, and can meet the real-time localization requirements of industrial robots and medical service robots.
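
The following is a minimal, hypothetical Python sketch of the two stages described in the abstract, not the authors' implementation. It assumes scikit-learn's KernelPCA for the low-dimensional visual manifold and uses a standard graph-Laplacian harmonic estimate as a simple stand-in for the paper's manifold-regularized least squares; the function names (offline_training, online_localization) and all parameters are illustrative.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

def offline_training(features, labeled_idx, labeled_coords, n_components=2, gamma=0.5):
    """Embed sparse area features on a low-dimensional visual manifold with
    kernel PCA and estimate coordinates of the unlabeled frames (hypothetical)."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    manifold = kpca.fit_transform(features)            # low-dimensional visual manifold

    # Graph weights on the manifold; harmonic estimate for the unlabeled frames
    # (a stand-in here for the paper's manifold-regularized least squares).
    W = rbf_kernel(manifold, gamma=gamma)
    np.fill_diagonal(W, 0.0)
    n = features.shape[0]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian L = D - W
    Luu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    Wul = W[np.ix_(unlabeled_idx, labeled_idx)]
    coords = np.zeros((n, labeled_coords.shape[1]))
    coords[labeled_idx] = labeled_coords
    coords[unlabeled_idx] = np.linalg.solve(Luu, Wul @ labeled_coords)
    return kpca, manifold, coords

def online_localization(kpca, manifold, coords, new_feature, gamma=0.5):
    """Predict the coordinate of a newly captured frame by kernel-weighted
    (harmonic-style) interpolation over the stored manifold (hypothetical)."""
    z = kpca.transform(np.asarray(new_feature).reshape(1, -1))  # project onto the manifold
    w = rbf_kernel(z, manifold, gamma=gamma).ravel()            # similarity to training frames
    return (w @ coords) / w.sum()                               # weighted coordinate estimate

As a usage example under these assumptions, features would be an N x d array of sparse area features, labeled_idx the indices of the calibrated frames, and labeled_coords their known (x, y) positions; offline_training returns the embedding and the completed coordinate table that online_localization then interpolates over for each newly captured frame.
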
Authors: 吴华, 秦世引
Source: Acta Optica Sinica (《光学学报》), 2010, No. 1, pp. 153-162 (10 pages); indexed in EI, CAS, CSCD, and the Peking University Core Journal list.
Funding: Supported by the National 863 Program (2006AA04Z207), the National Natural Science Foundation of China (60875072), the Doctoral Fund of the Ministry of Education (20060006018), and the China-Australia International Cooperation Project (2007DFA11530).
Keywords: machine vision; manifold regularization; kernel principal component analysis; area feature