VINS-YOLO

We combine a CNN object detector with VINS-Mono, a monocular visual-inertial odometry system, and show a reliable improvement in detection, especially as the robot accelerates. The system is ready to use: it has a very low deployment cost and requires no calibration. The resulting system performs simultaneous robot state estimation, object detection, and object tracking. Finally, the architecture is flexible in that it is not restricted to a specific object type or detector.
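
As a rough sketch of how the two subsystems can be wired together in ROS, the node below pairs VINS-Mono odometry with detector output. The topic names and the darknet_ros message type are illustrative assumptions, not necessarily what this repository uses:

```python
# Sketch only: topic names and darknet_ros_msgs are assumptions for illustration.
import rospy
from message_filters import ApproximateTimeSynchronizer, Subscriber
from nav_msgs.msg import Odometry
from darknet_ros_msgs.msg import BoundingBoxes  # assumed detector output type

def callback(odom, boxes):
    # Tag each detection with the robot state at (approximately) the same time.
    p = odom.pose.pose.position
    for b in boxes.bounding_boxes:
        rospy.loginfo("%s seen while robot at (%.2f, %.2f, %.2f)",
                      b.Class, p.x, p.y, p.z)

if __name__ == "__main__":
    rospy.init_node("vins_yolo_fusion_sketch")
    odom_sub = Subscriber("/vins_estimator/odometry", Odometry)
    det_sub = Subscriber("/darknet_ros/bounding_boxes", BoundingBoxes)
    # Approximate sync, since odometry and detections arrive at different rates.
    ApproximateTimeSynchronizer([odom_sub, det_sub], queue_size=10,
                                slop=0.1).registerCallback(callback)
    rospy.spin()
```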

So far, this program has only been tested in a Gazebo-simulated world.

Code for the paper: https://hal.archives-ouvertes.fr/hal-02063181

VINS-Mono

VINS-Mono is a real-time SLAM framework for monocular visual-inertial systems. It uses an optimization-based sliding-window formulation to provide high-accuracy visual-inertial odometry. Its features include:

- efficient IMU pre-integration with bias correction
- automatic estimator initialization
- online extrinsic calibration
- online temporal calibration
- failure detection and recovery
- loop detection and global pose graph optimization
- map merging and pose graph reuse
- rolling-shutter support

VINS-Mono is primarily designed for state estimation and feedback control of autonomous drones, but it is also capable of providing accurate localization for AR applications. The code runs on Linux and is fully integrated with ROS. For the iOS mobile implementation, see VINS-Mobile.
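
For intuition, the sliding-window estimator solves a nonlinear least-squares problem roughly of the form below (following the VINS-Mono paper; the notation here is only a sketch): a marginalization prior, IMU pre-integration residuals between consecutive keyframes in the window, and robustified visual reprojection residuals for features observed in the window frames, all over the window states $\mathcal{X}$:

```math
\min_{\mathcal{X}} \left\{ \left\| r_p - H_p \mathcal{X} \right\|^2 + \sum_{k \in \mathcal{B}} \left\| r_{\mathcal{B}}\!\left(\hat{z}^{b_k}_{b_{k+1}}, \mathcal{X}\right) \right\|^2_{P^{b_k}_{b_{k+1}}} + \sum_{(l,j) \in \mathcal{C}} \rho\!\left( \left\| r_{\mathcal{C}}\!\left(\hat{z}^{c_j}_{l}, \mathcal{X}\right) \right\|^2_{P^{c_j}_{l}} \right) \right\}
```

Here $r_p$ and $H_p$ encode the prior from marginalized states, $r_{\mathcal{B}}$ the IMU residuals, $r_{\mathcal{C}}$ the visual residuals, and $\rho(\cdot)$ a robust loss.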

Authors: Tong Qin, Peiliang Li, Zhenfei Yang, and Shaojie Shen from the HKUST Aerial Robotics Group

YOLO

YOLO is a real-time object detection algorithm. Authors: Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi
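
As a minimal illustration of running a YOLO detector standalone (the cfg/weights/image file names below are placeholders, not files from this repository), OpenCV's DNN module can load a Darknet-format model directly:

```python
# Sketch only: file names and the 0.5 threshold are placeholder assumptions.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
img = cv2.imread("frame.png")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Each output row is [cx, cy, w, h, objectness, per-class scores...],
# with box coordinates normalized to the network input size.
for out in net.forward(net.getUnconnectedOutLayersNames()):
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:
            print("class %d with confidence %.2f" % (class_id, scores[class_id]))
```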

License

The source code is released under GPLv3 license.

We are still working on improving code reliability. For any technical issues, please contact Tong QIN <tong.qinATconnect.ust.hk> or Peiliang LI <pliapATconnect.ust.hk>.