Multi-Sensor Fusion Method for Lane and Target Detection Based on Point Cloud Feature Sequence Encoding and a Cross-Attention Mechanism

  • Abstract: Fusing visual camera images, millimeter-wave radar point clouds, and prior navigation maps to detect lane lines and dynamic targets in complex scenes is one of the key technical challenges in environment perception for autonomous driving. To address this problem, a deep-learning-based multi-sensor fusion detection framework is proposed. Taking the radar point cloud as the query object, a point cloud feature sequence encoding scheme and a cross-attention module are designed: attention weights are generated from the visual images, and prior navigation-map information is fused at the feature level, which effectively improves lane detection performance when radar point cloud and visual image data are fused. Experiments on the OpenLanev2 and nuScenes public datasets show that the proposed method not only achieves the best lane detection performance but also performs strongly in dynamic target detection.
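To illustrate the fusion scheme summarized above, the following is a minimal sketch, assuming a PyTorch-style implementation; the class name, tensor dimensions, and the way map features are concatenated are hypothetical choices for illustration, not the authors' released code. It shows radar point-cloud feature sequences acting as queries in a cross-attention block whose keys and values come from camera image features (so the attention weights are generated from the visual image), with prior navigation-map features fused at the feature level.

```python
# Hypothetical sketch (not the paper's code): radar-query cross-attention fusion.
# Radar point-cloud features are the queries; camera image features supply keys/values,
# so attention weights are generated from the visual image. Prior-map features are
# fused at the feature level before the lane / target detection heads.
import torch
import torch.nn as nn

class RadarImageCrossAttention(nn.Module):
    def __init__(self, d_model=256, n_heads=8, d_map=64):
        super().__init__()
        # Cross-attention: query = radar feature sequence, key/value = image features
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Feature-level fusion of prior navigation-map features (assumed width d_map)
        self.fuse = nn.Linear(d_model + d_map, d_model)

    def forward(self, radar_seq, image_feat, map_feat):
        # radar_seq:  (B, N_pts, d_model)  encoded radar point-cloud feature sequence
        # image_feat: (B, H*W,  d_model)   flattened camera feature map
        # map_feat:   (B, N_pts, d_map)    prior-map features aligned to the queries
        attn_out, _ = self.cross_attn(query=radar_seq, key=image_feat, value=image_feat)
        x = self.norm(radar_seq + attn_out)           # residual connection + normalization
        x = self.fuse(torch.cat([x, map_feat], -1))   # feature-level map fusion
        return x                                      # passed to the detection heads

# Usage example with random tensors
fusion = RadarImageCrossAttention()
out = fusion(torch.randn(2, 128, 256), torch.randn(2, 1200, 256), torch.randn(2, 128, 64))
print(out.shape)  # torch.Size([2, 128, 256])
```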

     
