Abstract
Instance segmentation on point clouds is a fundamental task in scene perception. As the resolution of automotive millimeter-wave radar has improved in recent years, many instance segmentation schemes based on radar detection points have been proposed. The output of instance segmentation can serve as input to a tracker, which produces a trajectory for each instance and thereby supports downstream vehicle decision-making and path planning. However, existing instance segmentation methods for automotive radar still face two challenges. First, radar detection points are sparser and carry less information than LiDAR points: when the detection points of one instance are widely spaced, or when multiple adjacent instances are densely distributed, segmentation performance degrades markedly. Second, radar has limited penetration, so when road obstacles or other traffic participants partially occlude an instance, segmentation algorithms fail to segment and identify it correctly. The resulting over-segmentation and under-segmentation must be resolved for accurate scene perception. Considering the temporal continuity of real driving scenes, the trajectory prior of a traffic participant, i.e., its positions at the previous and current instants, can be exploited to overcome these problems. We therefore propose an automotive radar point cloud segmentation algorithm that uses the trajectory prior to fuse detection point features from the previous frame. Exploiting the continuity of trajectories, the algorithm computes the correspondence between instances and detection points across two adjacent frames and fuses point features according to this matching. Compared with single-frame features, the fused features are richer in information, show clearer differences between instances, and compensate for the information lost to occlusion. The fused features are fed into a deep network that predicts the center-shift distance from each detection point to its instance center, which helps preserve instance integrity and avoid instance splitting. Experimental results show that the proposed algorithm outperforms the single-frame baseline by 6.19% in mean coverage and 4.54% in mean precision, indicating that it surpasses other methods in the literature and effectively resolves the problems above. Visual comparisons with the single-frame scheme in typical scenarios further highlight its effectiveness and potential. In future work, we will mine the trajectory prior further to strengthen feature extraction and study the relationship between segmentation performance and the number of fused frames.
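The record contains only the abstract, so no implementation details are available. Purely as a hedged illustration of the two mechanisms the abstract names, cross-frame feature fusion guided by a trajectory prior and the center-shift distance from each detection point to its instance center, here is a minimal NumPy sketch. Every specific in it (the nearest-neighbor gating, the single global shift prev_to_curr_shift, zero padding for unmatched points) is an assumption made for illustration, not the authors' actual network.

```python
import numpy as np

def fuse_features_with_trajectory_prior(
    prev_points, prev_feats, curr_points, curr_feats,
    prev_to_curr_shift, gate=2.0,
):
    """For each current-frame detection point, find the nearest
    previous-frame point after motion compensation by the trajectory
    prior, and concatenate its feature if within the gate."""
    # Motion-compensate previous points using the shift predicted from
    # the trajectory (simplified here to one global shift; per-instance
    # shifts would be used in practice).
    prev_warped = prev_points + prev_to_curr_shift

    fused = []
    for p, f in zip(curr_points, curr_feats):
        d = np.linalg.norm(prev_warped - p, axis=1)
        j = np.argmin(d)
        if d[j] < gate:
            fused.append(np.concatenate([f, prev_feats[j]]))
        else:
            # No match (e.g., a newly appeared point): pad with zeros.
            fused.append(np.concatenate([f, np.zeros_like(prev_feats[j])]))
    return np.stack(fused)

def center_shift_targets(points, instance_ids):
    """Offset from each point to its instance centroid; shifting points
    by these vectors collapses each instance toward its center, which
    counteracts instance splitting during clustering."""
    targets = np.zeros_like(points)
    for iid in np.unique(instance_ids):
        mask = instance_ids == iid
        targets[mask] = points[mask].mean(axis=0) - points[mask]
    return targets

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.normal(size=(8, 2))
    curr = prev + 0.5                      # instance moved by (0.5, 0.5)
    pf = rng.normal(size=(8, 4))
    cf = rng.normal(size=(8, 4))
    fused = fuse_features_with_trajectory_prior(
        prev, pf, curr, cf, np.array([0.5, 0.5]))
    print(fused.shape)                     # (8, 8): current + previous features
```

In a learned pipeline the concatenated features would go through the segmentation network and the center-shift vectors would serve as regression targets; the sketch only shows the geometric bookkeeping those steps rely on.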
Authors
ZENG Dazhi (曾大治); ZHENG Le (郑乐); ZENG Wenwen (曾雯雯); ZHANG Xin (张鑫); HUANG Yan (黄琰); TIAN Ruifeng (田瑞丰)
Radar Technology Research Institute, Beijing Institute of Technology, Beijing 100081, China; Beijing Racobit Electronic Information Technology Co., Ltd., Beijing 100097, China; Chongqing Innovation Center, Beijing Institute of Technology, Chongqing 401135, China; Beijing Ruixing Electronics Technology Co., Ltd., Beijing 100081, China; Racobit Intelligent Traffic System (Beijing) Technology Co., Ltd., Beijing 100081, China
Source
Journal of Signal Processing (《信号处理》)
Indexed in CSCD and the Peking University Core Journals list (北大核心)
2024, Issue 1, pp. 185-196 (12 pages)
Funding
National Natural Science Foundation of China (62388102)
National Key R&D Program of China (2018YFE0202101, 2018YFE0202102, 2018YFE0202103)
Keywords
automotive radar
scene perception
instance segmentation
deep learning