BVMatch: Lidar-Based Place Recognition Using Bird's-Eye View Images

BVMatch is a LiDAR-based place recognition method capable of estimating 2D relative poses. It projects LiDAR scans onto bird's-eye view (BV) images and extracts BVFT descriptors from the images. Place recognition is achieved with a bag-of-words approach, and the relative pose is computed through BVFT descriptor matching.
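
For intuition, here is a minimal sketch of the BV projection in NumPy: pixel intensity is the normalized point count per ground-plane cell. The grid resolution and image size are illustrative assumptions, not the exact parameters used in this repo.

import numpy as np

def scan_to_bv_image(points, resolution=0.4, size=128):
    # points: (N, 3) array of x, y, z coordinates from one LiDAR scan.
    half = size * resolution / 2.0
    # keep points inside a square region around the sensor
    mask = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
    xy = points[mask, :2]
    ij = ((xy + half) / resolution).astype(int)      # quantize to pixel indices
    ij = np.clip(ij, 0, size - 1)
    img = np.zeros((size, size), dtype=np.float32)
    np.add.at(img, (ij[:, 0], ij[:, 1]), 1.0)        # accumulate point density
    # normalize the density image for feature extraction
    return (255.0 * img / max(img.max(), 1.0)).astype(np.uint8)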

Dependencies

OpenCV >= 3.3

Eigen

Example usage

Just run

mkdir build

cd build

cmake .. && make

./match_two_scan ../data/xxx.bin ../data/xxx.bin

You will see the matching result of two LiDAR scans from the Oxford RobotCar dataset.

Place Recognition Evaluation

1. Download datasets

Download the Oxford RobotCar dataset here (about 3.8 GB) or from Google Drive. Extract the folder into the project directory. Then go to the scripts directory and set root_path in config.py to the dataset path.
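
For example, after extraction config.py might contain a line like the following (the path is your own; root_path is the only field you need to edit):

# scripts/config.py
root_path = '/path/to/oxford/'   # set this to where you extracted the dataset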

Note that all the following commands are run from the scripts directory.

2. Generate local descriptors

We extract the BVFT descriptors of each submap in advance for bag-of-words model training and global descriptor generation.

python generate_local_descriptors.py

The local descriptors are stored in the local_des folder of the corresponding sequence directory.
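
The BVFT extraction itself lives in the C++ code. As a rough stand-in for what happens per submap, one can detect keypoints on a BV density image with OpenCV; FAST here only illustrates the keypoint stage, scan_to_bv_image is the sketch from above, and the file name and point layout are assumptions.

import cv2
import numpy as np

# Rough stand-in for the per-submap keypoint stage; the real BVFT
# descriptor is computed by the C++ code, not by OpenCV's FAST.
points = np.fromfile('submap.bin', dtype=np.float32).reshape(-1, 3)  # assumed layout
bv_img = scan_to_bv_image(points)
fast = cv2.FastFeatureDetector_create(threshold=10)
keypoints = fast.detect(bv_img, None)
print(len(keypoints), 'keypoints on the BV image')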

3. Bag-of-words model training

We use the sequences collected in 2014 for training. (You may skip this step, since the pre-trained model is included in this repo.)

python train_model.py
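
Conceptually, training a bag-of-words model means clustering the local descriptors into visual words. A minimal sketch with scikit-learn follows; the vocabulary size, the clustering method, and the all_local_descriptors variable are assumptions, not necessarily what train_model.py uses.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

# all_local_descriptors is a hypothetical list of (N_i, D) arrays
# loaded from the local_des folders of the 2014 sequences.
descriptors = np.vstack(all_local_descriptors)
kmeans = MiniBatchKMeans(n_clusters=1000, batch_size=4096).fit(descriptors)
vocabulary = kmeans.cluster_centers_   # each row is one visual word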

4. Generate pickle files

We use the sequences collected in 2015 for testing. We test the place recognition performance by retrieving all the submaps in each sequence from all the other sequences. A retrieval is successful when the ground truth distance between the query and the match is less than 25 meters. The ground truth correspondences are stored in the pickle files.

python generate_test_sets.py
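
The 25-meter criterion can be implemented with a k-d tree over submap positions. A sketch, where query_xy and db_xy are hypothetical (M, 2) and (K, 2) arrays of ground-truth 2D positions:

from scipy.spatial import cKDTree

tree = cKDTree(db_xy)
true_neighbors = tree.query_ball_point(query_xy, r=25.0)
# true_neighbors[i] lists the database indices within 25 m of query i,
# i.e. the correct matches stored in the pickle files.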

5. Evaluation

Generate global descriptors and perform evaluation.

python evaluate.py

Note that this step may take hours since all the submaps in the sequences are evaluated.
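
For reference, retrieval success of this kind is typically scored as recall at top N. A minimal sketch of that metric; the descriptor shapes and the L2 ranking are assumptions, and evaluate.py may differ in details.

import numpy as np

def recall_at_n(query_des, db_des, true_neighbors, n=1):
    # query_des: (M, D) global descriptors; db_des: (K, D);
    # true_neighbors[i]: database indices within 25 m of query i.
    hits = 0
    for i, q in enumerate(query_des):
        dist = np.linalg.norm(db_des - q, axis=1)   # L2 to every database entry
        top_n = np.argsort(dist)[:n]                # indices of the n best matches
        if set(top_n) & set(true_neighbors[i]):
            hits += 1
    return hits / len(query_des)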

Citation

Please cite this paper if you use BVMatch in your work:

@article{luo2021bvmatch,
  author={Luo, Lun and Cao, Si-Yuan and Han, Bin and Shen, Hui-Liang and Li, Junwei},
  journal={IEEE Robotics and Automation Letters}, 
  title={BVMatch: Lidar-Based Place Recognition Using Bird's-Eye View Images}, 
  year={2021},
  volume={6},
  number={3},
  pages={6076-6083},
  doi={10.1109/LRA.2021.3091386}
}

Contact

Lun Luo

Zhejiang University

luolun@zju.edu.cn
