Abstract
To address the low robustness of lidar in long, straight environments and the sensitivity of visual cameras to illumination conditions, this paper proposes a localization method that fuses the information collected by the two sensors with a cubature Kalman filter (CKF), and adds an adaptive component to the algorithm to improve the positioning accuracy of a mobile robot in unknown environments. First, the lidar and the camera observe the surrounding objects simultaneously from the same position, and the current robot pose is obtained with a graph optimization algorithm and the PnP algorithm. The lidar and vision data are then used as the state values and measurement values, respectively, and updated continuously to obtain the filtered, fused localization result. An adaptive correction term based on Sage-Husa adaptive filtering theory resolves the data divergence that appears after long-distance observation. Simulation results show that, with the adaptive cubature Kalman filter, the fused positioning error is reduced by more than 25% compared with lidar-only and vision-only localization, which effectively improves the positioning accuracy of the mobile robot during long-distance travel.
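The fusion step described in the abstract (lidar pose driving the state prediction, the visual PnP pose serving as the measurement, and Sage-Husa theory adapting the noise statistics) can be illustrated by one predict/update cycle of an adaptive cubature Kalman filter. The sketch below is not the paper's implementation: it assumes an additive odometry motion model, a direct pose measurement, and adaptation of the measurement noise covariance R only; the names `cubature_points` and `ackf_step`, the forgetting factor `b`, and the numbers in the usage lines are all hypothetical.

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points of the third-degree spherical-radial rule."""
    n = x.size
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit directions, shape (n, 2n)
    return x[:, None] + S @ xi                             # cubature points, shape (n, 2n)

def ackf_step(x, P, u, z, Q, R, k, b=0.98):
    """One predict/update cycle of an adaptive cubature Kalman filter (illustrative).

    x, P : previous state estimate (pose) and its covariance
    u    : pose increment from lidar odometry, used as process input (assumption)
    z    : pose measurement from the visual PnP solution (assumption)
    Q, R : process and measurement noise covariances (R is adapted below)
    k    : time-step index (start at 1); b : Sage-Husa forgetting factor
    """
    n, m = x.size, z.size

    # Time update: propagate cubature points through a simple additive odometry model.
    Xi = cubature_points(x, P) + u[:, None]
    x_pred = Xi.mean(axis=1)
    P_pred = (Xi - x_pred[:, None]) @ (Xi - x_pred[:, None]).T / (2 * n) + Q

    # Measurement update: the camera is assumed to observe the pose components directly.
    Xi2 = cubature_points(x_pred, P_pred)
    Zi = Xi2[:m, :]
    z_pred = Zi.mean(axis=1)
    Pzz0 = (Zi - z_pred[:, None]) @ (Zi - z_pred[:, None]).T / (2 * n)
    Pxz = (Xi2 - x_pred[:, None]) @ (Zi - z_pred[:, None]).T / (2 * n)

    # Sage-Husa adaptation of the measurement noise covariance from the innovation.
    eps = z - z_pred
    d = (1 - b) / (1 - b ** (k + 1))            # fading weight, decreases toward (1 - b)
    R = (1 - d) * R + d * (np.outer(eps, eps) - Pzz0)
    R = 0.5 * (R + R.T) + 1e-9 * np.eye(m)      # symmetrize; a full implementation would
                                                # also guard positive definiteness

    Pzz = Pzz0 + R
    K = Pxz @ np.linalg.inv(Pzz)
    x_new = x_pred + K @ eps
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new, R

# Hypothetical usage: fuse one lidar odometry increment with one camera pose estimate.
x, P = np.zeros(3), 0.01 * np.eye(3)
Q, R = 0.001 * np.eye(3), 0.05 * np.eye(3)
x, P, R = ackf_step(x, P, u=np.array([0.10, 0.00, 0.01]),
                    z=np.array([0.11, 0.01, 0.012]), Q=Q, R=R, k=1)
```

Routing the graph-optimized lidar pose through the prediction and the PnP pose through the update mirrors the state/measurement split described in the abstract; the adaptive R keeps the filter from diverging when the camera's error statistics drift over long trajectories.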
Authors
Sun Lingyu; Liu Wenhan; Li Qingxiang; Li Xinbao; Wang Zihang
(School of Mechanical Engineering, Hebei University of Technology, Tianjin 300000, China; Optoelectronic Research Institute of China Electronics Technology Corporation, Tianjin 300000, China)
Source
Applied Laser (《应用激光》)
CSCD; Peking University Core Journals (北大核心)
2024, No. 2, pp. 104-112 (9 pages)
Funding
Joint Fund Project of the National Natural Science Foundation of China (U1913211)
Key Basic Research Project of the Hebei Province Applied Basic Research Program (17961820D)
Keywords
cubature Kalman filter
simultaneous localization and mapping (SLAM)
multi-sensor fusion
positioning accuracy