# Visual Teach & Repeat 3 (VT&R3)

> [!NOTE]
> We are still cleaning up the codebase and bringing all features of VT&R3 onto the main branch.

The primarily supported version of VT&R3 requires an NVIDIA driver with CUDA support; the current Dockerfile requires a driver capable of CUDA 11.7. A GPU is required for all versions of the vision (camera) pipeline and for the LiDAR and RADAR features that use PyTorch models for processing.

If no GPU is available, a CPU-only build is provided. Note that the CPU version of TorchLib is installed for easier compilation, but the models are unlikely to run fast enough on a CPU to be useful.
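As a quick sanity check before building, you can verify that the CUDA version reported by your driver meets the 11.7 minimum mentioned above. The helper below is a minimal sketch (not part of VT&R3) that compares a version string such as the one printed by `nvidia-smi` against that requirement:

```python
# Minimal sketch (not part of VT&R3): check whether a CUDA version string
# reported by the driver meets the 11.7 minimum required by the Dockerfile.
def cuda_version_ok(version: str, minimum: tuple = (11, 7)) -> bool:
    """Return True if `version` (e.g. "12.1") is at least `minimum`."""
    major, minor = (int(x) for x in version.split(".")[:2])
    return (major, minor) >= minimum

print(cuda_version_ok("11.7"))  # True
print(cuda_version_ok("11.2"))  # False
print(cuda_version_ok("12.1"))  # True
```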

- The main branch supports lidar/radar teach and repeat.
- The v3.0.0 tag supports stereo camera teach and repeat (to be merged into main in the future).

## What is VT&R3?

VT&R3 is a C++ implementation of the Teach and Repeat navigation framework. It enables a robot to be taught a network of traversable paths and then closely repeat any part of that network. VT&R3 is designed for easy adaptation to various sensor (camera, lidar, radar, GPS, etc.) and robot combinations. So far, we have explored using VT&R3 to perform teach and repeat navigation with a stereo camera, a lidar, or a combination of stereo camera and GPS.
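The teach-and-repeat idea above can be illustrated with a toy sketch. The names and data structures below are hypothetical (this is not the VT&R3 API): taught paths form an undirected network, and repeating any part of it amounts to finding a route between two taught vertices:

```python
from collections import deque

# Toy illustration (hypothetical names, not the VT&R3 API): a taught network
# of paths stored as an adjacency list, and a breadth-first search that
# recovers a repeatable route between any two taught vertices.
def teach(network: dict, a: str, b: str) -> None:
    """Record a traversable edge between two taught vertices."""
    network.setdefault(a, set()).add(b)
    network.setdefault(b, set()).add(a)

def plan_repeat(network: dict, start: str, goal: str) -> list:
    """Find a route through the taught network, or [] if none exists."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in network.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return []  # goal is not reachable via taught paths

net = {}
teach(net, "A", "B")
teach(net, "B", "C")
teach(net, "B", "D")
print(plan_repeat(net, "A", "D"))  # ['A', 'B', 'D']
```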

## Reproducing Results of VT&R3 Papers

VT&R3-related papers usually focus on demonstrating one specific feature of VT&R3 rather than the whole system, and they require additional scripts to run experiments and evaluate results. Therefore, from now on, we will create a separate repository for each paper with instructions on how to reproduce the results.

## Knowing the Codebase

The following articles will help you get familiar with VT&R3:

More information can be found on the wiki page.

## Citation

Please cite the following paper when using VT&R3 for your research:

```bibtex
@article{paul2010vtr,
  author  = {Furgale, Paul and Barfoot, Timothy D.},
  title   = {Visual teach and repeat for long-range rover autonomy},
  journal = {Journal of Field Robotics},
  year    = {2010},
  doi     = {https://doi.org/10.1002/rob.20342}
}
```