To address the low accuracy and high miss rate of the YOLOv3 algorithm in highway lane-marking detection, an improved YOLOv3 network structure for highway lane detection is proposed. The method first divides the image into a grid of cells and applies the K-means++ clustering algorithm, based on the inherent width and height characteristics of highway lane markings, to determine the number of prior boxes and their corresponding width and height values. Next, the network's anchor parameters are optimized according to the clustering results, so that the trained network is tailored to lane detection. Finally, the features extracted by the Darknet-53 backbone are concatenated and the convolutional-layer structure of YOLOv3 is improved; multi-scale training on a GPU yields the optimal weight model, which is used to detect lane targets in the image, marking the bounding box with the highest confidence. Comparative experiments on images from the Caltech Lanes database show that the improved YOLOv3 algorithm achieves a mean average precision (mAP) of 95% for highway lane detection at up to 50 frames/s, an 11% mAP improvement over the original YOLOv3 and clearly higher than other lane detection methods.
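The anchor-selection step described above can be sketched as follows. This is a minimal, illustrative implementation of K-means++ clustering over box (width, height) pairs using the 1 − IoU distance commonly used for YOLO anchor selection; the function names, the sample box values, and the choice to seed the first centre with the first box (classical K-means++ picks it uniformly at random) are all assumptions, not the paper's code.

```python
import random

def iou_wh(box, anchor):
    """IoU between two boxes aligned at the origin, given as (w, h)."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeanspp_anchors(boxes, k, iters=50, seed=0):
    """Cluster (w, h) pairs into k anchors with 1 - IoU as the distance."""
    rng = random.Random(seed)
    # First centre taken as the first box for reproducibility here;
    # classical K-means++ draws it uniformly at random.
    centres = [boxes[0]]
    while len(centres) < k:
        # K-means++ seeding: sample the next centre proportional to d^2.
        d2 = [min((1 - iou_wh(b, c)) ** 2 for c in centres) for b in boxes]
        r, acc = rng.uniform(0, sum(d2)), 0.0
        for b, w in zip(boxes, d2):
            acc += w
            if acc >= r:
                centres.append(b)
                break
    for _ in range(iters):
        # Lloyd step: assign each box to the centre with the highest IoU.
        clusters = [[] for _ in range(k)]
        for b in boxes:
            j = max(range(k), key=lambda i: iou_wh(b, centres[i]))
            clusters[j].append(b)
        # Update each centre to the mean (w, h) of its cluster.
        centres = [
            (sum(b[0] for b in cl) / len(cl), sum(b[1] for b in cl) / len(cl))
            if cl else centres[i]
            for i, cl in enumerate(clusters)
        ]
    return sorted(centres)

# Hypothetical box dimensions with two obvious size groups.
boxes = [(10, 10), (11, 9), (9, 11), (100, 50), (98, 52), (102, 48)]
anchors = kmeanspp_anchors(boxes, k=2)
```

On this toy data the two anchors converge to the per-group means, (10, 10) and (100, 50); the clustered widths and heights would then replace the network's default anchor parameters.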
To improve the accuracy and robustness of lane detection and to reduce the influence of illumination changes and background clutter, a lane detection algorithm coupling an improved Hough transform with density-based spatial clustering is proposed. First, a lane model is built that decomposes each lane boundary into a series of short line segments, with each segment represented by a least-squares fit. An improved Hough transform then detects these short segments in the image. Density-based spatial clustering of applications with noise (DBSCAN) is introduced to cluster the extracted segments, filtering out redundancy and noise while preserving the key information of the lane boundaries. Next, the gradient direction of the edge pixels defines each segment's orientation, so that segments on the same side of a boundary share one direction while segments on opposite lane boundaries have opposite directions; a direction function over the segments yields candidate clusters of lane segments. Finally, the final lane lines are fitted from the candidate clusters using the vanishing point. Tests on the Caltech dataset and on real roads show that, compared with popular lane detection algorithms, the proposed algorithm achieves better accuracy and robustness under adverse conditions such as illumination change and background clutter, and can accurately identify normal lane lines.
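The segment-clustering step can be illustrated with a minimal DBSCAN pass over segment features. Everything here is a stand-in: the paper's exact feature space and parameters are not reproduced, so segment midpoints serve as the features and the `eps`/`min_pts` values are arbitrary.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 marks noise)."""
    labels = [None] * len(points)

    def neighbours(i):
        # Region query: indices within eps of point i (including i itself).
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1           # provisional noise; may later become a border point
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                 # expand the cluster from its core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point -> border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= min_pts:   # j is itself a core point: keep expanding
                seeds.extend(jn)
        cluster += 1
    return labels

# Midpoints of hypothetical detected short segments (orientation could be
# appended as an extra coordinate to enforce the direction constraint).
midpoints = [(0, 0), (0.5, 0), (1, 0),        # one lane boundary
             (10, 10), (10.5, 10), (11, 10),  # the opposite boundary
             (50, 50)]                        # an isolated noise segment
labels = dbscan(midpoints, eps=1.5, min_pts=2)
```

The two dense runs of midpoints come out as two clusters and the isolated segment is labelled noise, mirroring how the method keeps boundary segments while discarding clutter.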
A panoptic driving perception system is an essential part of autonomous driving. A high-precision, real-time perception system can assist the vehicle in making reasonable decisions while driving. We present a panoptic driving perception network (you only look once for panoptic, YOLOP) to perform traffic object detection, drivable area segmentation, and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders to handle the specific tasks. Our model performs extremely well on the challenging BDD100K dataset, achieving state of the art on all three tasks in terms of accuracy and speed. Besides, we verify the effectiveness of our multi-task learning model for joint training via ablative studies. To the best of our knowledge, this is the first work that can process these three visual perception tasks simultaneously in real time on an embedded device, the Jetson TX2 (23 FPS), while maintaining excellent accuracy. To facilitate further research, the source code and pre-trained models are released at https://github.com/hustvl/YOLOP.
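The one-encoder/three-decoder layout can be sketched abstractly. The class name and the stand-in lambdas below are hypothetical: the real YOLOP uses a convolutional backbone and task-specific convolutional decoders, and this sketch only illustrates how a single shared feature-extraction pass feeds three task heads, so the backbone cost is paid once per image.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class PanopticNet:
    """One shared encoder feeding several task-specific decoders."""
    encoder: Callable[[List[float]], List[float]]
    decoders: Dict[str, Callable[[List[float]], object]]

    def forward(self, image):
        feats = self.encoder(image)   # shared feature extraction, run once
        return {task: dec(feats) for task, dec in self.decoders.items()}

# Toy stand-in components: the "encoder" reduces the input to a single
# mean feature, and each "decoder" thresholds it differently.
net = PanopticNet(
    encoder=lambda img: [sum(img) / len(img)],
    decoders={
        "detection": lambda f: {"boxes": [], "score": f[0]},
        "drivable_area": lambda f: [v > 0.5 for v in f],
        "lane": lambda f: [v > 0.8 for v in f],
    },
)
out = net.forward([0.2, 0.9, 0.7])
```

A design note: because all three heads read the same encoder output, joint training lets the tasks share low-level features, which is what the ablation studies in the abstract evaluate.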
Fund: supported by the National Natural Science Foundation of China (Nos. 61876212 and 1733007), Zhejiang Laboratory, China (No. 2019NB0AB02), and the Hubei Province College Students Innovation and Entrepreneurship Training Program, China (No. S202010487058).