
A Visual Inertial Localization Method Integrating Semantic Features in Underground Parking Environment
    Abstract:

    A visual inertial localization algorithm integrating semantic information is proposed to address the positioning problems caused by poor GPS signals, dim lighting, sparse features, and weak textures in underground parking lots. First, the algorithm fuses visual and inertial information through visual odometry and IMU pre-integration. Simultaneously, a panoramic surround-view image is constructed from four fisheye cameras, and a semantic segmentation algorithm is employed to extract semantic information from the parking environment. Then, a semantic feature projection map is obtained through inverse projection transformation based on the tightly coupled visual-inertial pose. In addition, loop detection and pose graph optimization are employed to reduce accumulated errors and refine the global pose graph, thereby improving localization accuracy. The proposed algorithm is verified through Gazebo simulation and real-vehicle testing. The results indicate that it can fully exploit the semantic information of the environment to construct a complete semantic map, and that it achieves higher vehicle localization accuracy than ORB-SLAM3, as shown by comparisons of repeated localization errors.
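
    The abstract describes projecting semantically segmented pixels onto the ground plane via inverse projection based on the current visual-inertial pose. The following is a minimal sketch of that step only, under simplifying assumptions not taken from the paper: a single pinhole camera instead of the four fisheye cameras, known intrinsics and extrinsics, and a flat ground plane at z = 0; the names K, R_wc, t_wc, ipm_project, and project_semantic_mask are illustrative.

    import numpy as np

    # Hypothetical setup (not from the paper): one pinhole camera, known
    # intrinsics/extrinsics, and a flat ground plane at z = 0.
    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])            # assumed camera intrinsics
    K_inv = np.linalg.inv(K)
    R_wc = np.diag([1.0, -1.0, -1.0])          # downward-looking camera in the world frame
    t_wc = np.array([0.0, 0.0, 1.5])           # camera 1.5 m above the ground (from the VIO pose)

    def ipm_project(u, v):
        """Back-project pixel (u, v) onto the ground plane z = 0 in the world frame."""
        ray_cam = K_inv @ np.array([u, v, 1.0])    # viewing ray in camera coordinates
        ray_world = R_wc @ ray_cam                 # rotate the ray into the world frame
        if abs(ray_world[2]) < 1e-9:
            return None                            # ray parallel to the ground plane
        s = -t_wc[2] / ray_world[2]                # scale so the point reaches z = 0
        if s <= 0:
            return None                            # intersection behind the camera
        return t_wc + s * ray_world                # ground point (x, y, 0)

    def project_semantic_mask(mask, stride=8):
        """Scatter non-background semantic labels (e.g. parking-slot lines) onto the ground."""
        points = []
        h, w = mask.shape
        for v in range(0, h, stride):
            for u in range(0, w, stride):
                label = mask[v, u]
                if label == 0:
                    continue                       # skip background pixels
                p = ipm_project(u, v)
                if p is not None:
                    points.append((p[0], p[1], int(label)))
        return points

    # Toy usage: a 480x640 mask with one synthetic "marking" row of label 1.
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[400, :] = 1
    print(len(project_semantic_mask(mask)), "semantic ground points")

    In the full system described by the abstract, the projected semantic points would be accumulated over the tightly coupled visual-inertial trajectory to build the semantic map used for loop detection and pose graph optimization.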

History
  • Online: August 26, 2024