Using actual aerial photography data, the accuracy of direct georeferencing with an airborne POS (position and orientation system) was evaluated in both the WGS 84 and the national 1980 (Xi'an 80) coordinate systems; the influence of different geoid fitting methods on height accuracy was analyzed; the boresight misalignment of the POS was calibrated; and the calibration results from the test field were discussed. The experiments show that direct georeferencing in POS-based aerial remote sensing requires calibrating the POS against a test field containing at least one full (horizontal and vertical) ground control point in order to eliminate systematic errors. High positioning accuracy can be obtained in the WGS 84 coordinate system, whereas after transformation to the Xi'an 80 coordinate system the heights must be corrected by geoid fitting.
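The geoid fitting mentioned above can take several forms; a minimal sketch of one common choice, a least-squares plane fit of geoid undulations at control points, is shown below. All point coordinates, undulation values, and fitted coefficients here are hypothetical illustrations, not the paper's data:

```python
import numpy as np

def fit_geoid_plane(xy, undulation):
    """Least-squares plane fit N = a0 + a1*x + a2*y to geoid undulations
    observed at control points. Returns the coefficients (a0, a1, a2)."""
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, undulation, rcond=None)
    return coeffs

def correct_height(h_ellipsoidal, x, y, coeffs):
    """Subtract the fitted undulation from an ellipsoidal height to obtain
    a height referenced to the local vertical datum."""
    n = coeffs[0] + coeffs[1] * x + coeffs[2] * y
    return h_ellipsoidal - n
```

Higher-order polynomial surfaces are fit the same way by adding columns (x*y, x**2, ...) to the design matrix; which form works best is exactly the comparison the abstract describes.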
To address the low accuracy of corolla detection and localization when a safflower-picking robot operates in the field, a deep-learning-based detection and localization algorithm, MSDP-Net (Mobile safflower detection and position network), is proposed. For detection, an improved YOLO v5m model, C-YOLO v5m, is proposed: convolutional block attention modules are inserted into the YOLO v5m backbone and neck, raising precision, recall, and mean average precision by 4.98, 4.30, and 5.50 percentage points, respectively, over the unmodified model. For spatial localization, a camera-movable localization method is proposed: the binocular camera is mounted on a translation stage so that it can move horizontally, keeping the localization accuracy within its optimal range while avoiding missed detections caused by occluded corollas. Field trials show a localization success rate of 93.79% for the movable-camera method, 9.32 percentage points higher than the fixed-camera method, with average deviations of less than 3 mm in the X, Y, and Z directions. Compared with five mainstream object detection algorithms, MSDP-Net achieves the best overall detection performance and is better suited to safflower corolla detection. The MSDP-Net algorithm and the camera-movable localization method were then applied to a self-developed safflower-picking robot. In 500 indoor trials, 451 corollas were picked successfully and 49 were missed, a success rate of 90.20%. In field trials over a 15 m ridge, the picking success rate for corollas at full bloom exceeded 90%.
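The convolutional block attention module (CBAM) inserted into the backbone and neck applies channel attention followed by spatial attention. A minimal NumPy sketch of that two-step attention is given below; the weights are illustrative stand-ins for learned parameters, and the spatial step uses a simple average of the two pooled maps in place of the learned convolution a full CBAM applies:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(feature, w1, w2):
    """Apply channel then spatial attention to a (C, H, W) feature map.
    w1 (C/r, C) and w2 (C, C/r) are the shared-MLP weights of the channel
    branch; in a trained network these are learned parameters."""
    # --- channel attention: squeeze spatial dims by avg and max pooling ---
    avg = feature.mean(axis=(1, 2))                         # (C,)
    mx = feature.max(axis=(1, 2))                           # (C,)
    ch_att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                     + w2 @ np.maximum(w1 @ mx, 0.0))       # (C,)
    feature = feature * ch_att[:, None, None]
    # --- spatial attention: squeeze channels by avg and max pooling ---
    avg_c = feature.mean(axis=0)                            # (H, W)
    max_c = feature.max(axis=0)                             # (H, W)
    # full CBAM convolves the stacked maps with a learned k x k kernel;
    # averaging them is a simplified stand-in for this sketch
    sp_att = sigmoid((avg_c + max_c) / 2.0)
    return feature * sp_att[None, :, :]
```

The module is lightweight (two small linear maps plus one convolution), which is why it can be inserted at several points of the backbone and neck without a large cost in inference speed.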
To improve the low positioning accuracy and execution efficiency of robot binocular vision, a binocular vision positioning method based on coarse-to-fine stereo matching is proposed. In the coarse matching, random ferns are used to identify the object in the left and right images, and the pixel coordinates of the object center point in each image are computed to complete center matching. In the fine matching, the right center point is taken as an estimate that defines a search range in the right image, within which region matching finds the best match of the left center point. The similar-triangle principle of the binocular vision model is then used to compute the 3D coordinates of the center point, achieving fast and accurate object positioning. Finally, the proposed method is applied to object scene images and a robotic-arm grasping platform. Experimental results show that the average absolute and relative positioning errors are 8.22 mm and 1.96%, respectively, for object depths within 600 mm, with a time consumption of less than 1.029 s. The method meets the needs of a robot grasping system and offers better accuracy and robustness.
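The similar-triangle back-projection step above can be sketched as follows for a rectified stereo pair; the focal length, baseline, and principal point in the test values are hypothetical calibration numbers, not the paper's:

```python
def stereo_to_3d(u_left, u_right, v, f, baseline, cx, cy):
    """Recover camera-frame 3D coordinates of a matched center point from
    its pixel positions in a rectified stereo pair (assumes nonzero
    disparity, i.e. the point is at finite depth).

    f: focal length in pixels; baseline: camera separation in metres;
    (cx, cy): principal point in pixels."""
    disparity = u_left - u_right      # horizontal pixel offset between views
    z = f * baseline / disparity      # depth from similar triangles
    x = (u_left - cx) * z / f         # back-project to camera X
    y = (v - cy) * z / f              # back-project to camera Y
    return x, y, z
```

The fine-matching stage exists precisely to sharpen `u_right`: a one-pixel disparity error scales into a depth error of roughly z**2 / (f * baseline), which grows quickly beyond the 600 mm range reported above.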
Efficiently detecting a fire source in its early stage and localizing it accurately is an important prerequisite for containing a fire and formulating a timely firefighting plan. A key problem in current fire-source detection and localization is that the two tasks are handled separately, which severely limits the real-time performance of fire early warning. To overcome this, YOLO V5 is taken as the base fire-source detection model; the CIOU (Complete Intersection over Union) loss function is used to fit anchor boxes to the ground truth more precisely, further improving annotation accuracy, and the Leaky ReLU activation is replaced with GELU (Gaussian Error Linear Unit), which combines regularization with activation. In addition, while the fire source is being detected, a parallel binocular localization algorithm localizes it in space, integrating fire-source detection and localization into one pipeline. Experimental results show that the mAP of the proposed method is 9.8% higher than that of the original algorithm, and that the fire source is localized accurately while detection precision is maintained.
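The CIOU loss cited above augments plain IoU with a normalized center-distance term and an aspect-ratio consistency term. A minimal sketch of that definition for axis-aligned boxes given as (x1, y1, x2, y2):

```python
import math

def ciou_loss(box, gt):
    """Complete-IoU loss between a predicted box and a ground-truth box:
    1 - IoU + rho^2/c^2 + alpha * v  (Zheng et al.'s CIoU definition)."""
    # intersection over union
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_b = (box[2] - box[0]) * (box[3] - box[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_b + area_g - inter)
    # squared center distance over squared diagonal of the enclosing box
    cbx, cby = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    cgx, cgy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (cbx - cgx) ** 2 + (cby - cgy) ** 2
    ex1, ey1 = min(box[0], gt[0]), min(box[1], gt[1])
    ex2, ey2 = max(box[2], gt[2]), max(box[3], gt[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    # aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (math.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
                              - math.atan((box[2] - box[0]) / (box[3] - box[1]))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v
```

Unlike plain IoU, this loss still provides a useful gradient when the boxes do not overlap (the center-distance term keeps pulling them together), which is what makes the anchor fitting more precise.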
Funding: supported by the National Natural Science Foundation of China (No. 61125101).