
基于点线特征的解耦视觉伺服控制方法

Decoupled visual servoing control method based on point and line features
Abstract: To address the problem of automatic alignment for robots, a decoupled visual servoing control method based on point and line features is proposed. The method uses points and straight lines as image features and exploits the interaction matrices of these features to decouple attitude control from position control, achieving six-degrees-of-freedom alignment. First, an attitude control law is designed from the lines and their interaction matrix to eliminate the rotational deviation; then, a position control law is designed from the points and their interaction matrix to eliminate the positional deviation; finally, automatic alignment between the robot end-effector and the target is achieved. During the alignment process, the depth is estimated online from the executed camera motion and the change of the features before and after that motion. In addition, a monitor is designed to adjust the camera's motion speed so that the features always remain in the camera's field of view. On an Eye-in-Hand robot platform, six-degrees-of-freedom alignment was carried out with both the proposed method and the traditional image-based visual servoing method. The proposed method completed the automatic alignment in 16 steps, with a maximum end-effector translation error of 3.26 mm and a maximum rotation error of 0.72° after alignment. Compared with the baseline method, its control process is more efficient, its control error converges faster, and its alignment error is smaller. The experimental results show that the proposed method achieves fast, high-precision automatic alignment, improves the autonomy and intelligence of robot operation, and is promising for target tracking, picking and positioning, automated assembly, welding, service robots, and other applications.
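The abstract does not reproduce the control laws themselves, so the following Python sketch only illustrates the generic decoupled image-based visual-servoing scheme it describes: the angular camera velocity is computed from the line-feature error and its interaction matrix, the linear velocity from the point-feature error and its interaction matrix, and a simple speed limit stands in for the monitor that keeps the features in the field of view. All function and parameter names here are hypothetical; the actual interaction matrices, gains, depth-estimation procedure, and monitor logic are those defined in the paper.

```python
import numpy as np

def decoupled_ibvs_step(s_line, s_line_des, L_line_omega,
                        s_point, s_point_des, L_point_v,
                        lam=0.5, v_max=0.05, w_max=0.2):
    """One decoupled visual-servoing step (illustrative sketch only).

    s_line, s_line_des   : current / desired line features (e.g. rho-theta pairs), shape (n,)
    L_line_omega         : line interaction matrix w.r.t. angular velocity, shape (n, 3)
    s_point, s_point_des : current / desired point features (image coordinates), shape (m,)
    L_point_v            : point interaction matrix w.r.t. linear velocity, shape (m, 3);
                           it depends on the feature depth, which the paper estimates online
                           from the executed camera motion and the resulting feature change
    lam                  : control gain
    v_max, w_max         : velocity limits standing in for the monitor that keeps
                           the features inside the camera's field of view
    """
    # 1) Attitude control: angular velocity from the line-feature error.
    omega = -lam * np.linalg.pinv(L_line_omega) @ (s_line - s_line_des)

    # 2) Position control: linear velocity from the point-feature error.
    v = -lam * np.linalg.pinv(L_point_v) @ (s_point - s_point_des)

    # 3) Speed supervision: clip camera velocities so features stay in view.
    return np.clip(v, -v_max, v_max), np.clip(omega, -w_max, w_max)
```

In a servoing loop, the returned velocities would be sent to the Eye-in-Hand end-effector, the features re-extracted, and the depth used in the point interaction matrix re-estimated after each step, until both feature errors fall below a threshold.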
Authors: LU Jinyan (卢金燕), QI Xiaoke (戚肖克) (School of Electrical and Information Engineering, Henan University of Engineering, Zhengzhou, Henan 451191, China; School of Information Management for Law, China University of Political Science and Law, Beijing 102249, China)
Source: Journal of Computer Applications (《计算机应用》; CSCD; Peking University Core Journal), 2022, Issue 8, pp. 2556-2563 (8 pages)
Funding: National Natural Science Foundation of China (62173126); Key Research, Development and Promotion Special Project (Science and Technology Research) of Henan Province (202102210187); Doctoral Fund of Henan University of Engineering (Dkj2018003).
Keywords: visual servoing; visual control; interaction matrix; decoupled control; six-degrees-of-freedom alignment