[1.College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China; 2.Wuxi Intelligent Control Research Institute, Hunan University, Wuxi 214115, China]
A visual-inertial localization algorithm that integrates semantic information is proposed to address the localization difficulties caused by poor GPS signals, dim lighting, sparse features, and weak textures in underground parking lots. First, the algorithm fuses visual and inertial measurements through visual odometry and IMU pre-integration. Simultaneously, a panoramic surround-view image is constructed from four fisheye cameras, and a semantic segmentation algorithm extracts semantic information about the parking environment. The semantic feature projection map is then obtained by inverse projection transformation based on the tightly coupled visual-inertial pose. Finally, loop closure detection and pose graph optimization are employed to reduce accumulated error and refine the global pose graph, thereby achieving higher localization accuracy. The proposed algorithm is validated through Gazebo simulation and real-vehicle experiments. The results show that it fully exploits the semantic information of the environment to build a complete semantic map and, in repeated-localization error comparisons, achieves higher vehicle localization accuracy than ORB-SLAM3.
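As a rough illustration of the inverse projection step, the sketch below maps semantic pixels from a bird's-eye-view (BEV) surround image onto the ground plane and transforms them into the world frame using the estimated vehicle pose. The BEV resolution, image size, and frame conventions are assumptions for the example (the paper's actual calibration and map format are not given here), and a flat-ground model is assumed:

```python
import numpy as np

# Assumed BEV parameters (hypothetical, for illustration only):
# a metric top-down grid centered on the vehicle.
METERS_PER_PIXEL = 0.02          # assumed ground resolution of the BEV image
BEV_SIZE = (1000, 1000)          # assumed BEV image size (rows, cols)

def bev_pixel_to_vehicle(u, v):
    """Map a BEV pixel (u = column, v = row) to vehicle-frame ground
    coordinates, with +x forward and +y left, z = 0 on the ground plane."""
    cx, cy = BEV_SIZE[1] / 2.0, BEV_SIZE[0] / 2.0
    x = (cy - v) * METERS_PER_PIXEL   # forward offset in meters
    y = (cx - u) * METERS_PER_PIXEL   # leftward offset in meters
    return np.array([x, y, 0.0])

def project_semantic_points(pixels, R_wb, t_wb):
    """Project semantic BEV pixels into the world frame using the
    tightly coupled visual-inertial pose (rotation R_wb, translation t_wb)."""
    pts_vehicle = np.stack([bev_pixel_to_vehicle(u, v) for u, v in pixels])
    return pts_vehicle @ R_wb.T + t_wb

# Usage: one lane-marking pixel, vehicle at (5, 2) with identity orientation.
R = np.eye(3)
t = np.array([5.0, 2.0, 0.0])
world_pts = project_semantic_points([(500, 400)], R, t)
# pixel (500, 400) is 2 m ahead of the vehicle -> world point (7, 2, 0)
```

Accumulating such projected points over many frames, keyed by their semantic label, is one common way a semantic feature map of the parking lot can be assembled.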