Abstract: Simultaneous localization and mapping (SLAM) technology is widely used in autonomous driving, where accuracy and computational efficiency are the two most important indicators. However, traditional LiDAR odometry struggles to extract keyframes accurately and efficiently, resulting in an excess of redundant frames during map construction. In addition, most LiDAR odometry systems align every frame to the map, which imposes a substantial computational burden. This paper proposes a keyframe-based dual-mode LiDAR odometry and mapping method. Keyframes are extracted by computing the feature similarity between two point clouds and comparing it with a motion-adaptive threshold. Different registration algorithms are then applied to keyframes and non-keyframes to reduce computational cost. Furthermore, a weight function is computed from the horizontal distance of each point and incorporated into the weighted pose constraints. The proposed SLAM system is extensively tested on the KITTI dataset and on a real vehicle. The results on KITTI sequences 00-10 show a translational error of only 0.56% and a rotational error of 0.0021 degree/m. In terms of real-time performance, the algorithm improves the average processing speed by 26.5% compared with F-LOAM and even outperforms the lightweight LeGO-LOAM system.
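To illustrate the keyframe-selection idea described above, the following is a minimal sketch, not the authors' implementation: it assumes a hypothetical threshold that loosens with linear and angular velocity, so that faster motion triggers keyframes more readily. The function names (`adaptiveThreshold`, `isKeyframe`) and all parameter values are illustrative assumptions.

```cpp
// Minimal sketch: keyframe selection by comparing point-cloud feature
// similarity against a motion-adaptive threshold. The threshold form and
// all constants below are assumptions for illustration only.
#include <algorithm>
#include <cstdio>

// Hypothetical adaptive threshold: lower the required similarity when the
// platform moves faster, so fast motion produces keyframes more readily.
double adaptiveThreshold(double linVel, double angVel) {
    const double baseSim = 0.90;  // similarity required when nearly static (assumed)
    const double kLin    = 0.02;  // sensitivity to linear velocity, per m/s (assumed)
    const double kAng    = 0.05;  // sensitivity to angular velocity, per rad/s (assumed)
    return std::max(0.5, baseSim - kLin * linVel - kAng * angVel);
}

// A new frame is declared a keyframe when its similarity to the last
// keyframe falls below the motion-adaptive threshold.
bool isKeyframe(double similarityToLastKeyframe, double linVel, double angVel) {
    return similarityToLastKeyframe < adaptiveThreshold(linVel, angVel);
}

int main() {
    // Example: moderate similarity under fast motion -> selected as keyframe.
    std::printf("keyframe? %d\n", isKeyframe(0.82, 3.0, 0.2));
    return 0;
}
```

In such a scheme, frames judged as keyframes would go through full scan-to-map registration, while non-keyframes could use a cheaper scan-to-scan alignment, which is the dual-mode idea the abstract refers to.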