AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs in the Wild

Daniel Joska, Liam Clark, Naoya Muramatsu, Ricardo Jericevich, Fred Nicolls, Alexander Mathis, Mackenzie W. Mathis, Amir Patel

AcinoSet is a dataset of free-running cheetahs in the wild that contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files, and 7,588 human-annotated frames. We utilize markerless animal pose estimation with DeepLabCut to provide 2D keypoints (in the 119K frames). Then, we use three methods that serve as strong baselines for 3D pose estimation tool development: traditional sparse bundle adjustment, an Extended Kalman Filter, and a trajectory optimization-based method we call Full Trajectory Estimation. The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided. We believe this dataset will be useful for a diverse range of fields such as ecology, robotics, biomechanics, and computer vision.


Prerequisites

  • Anaconda
  • The dependencies defined in conda_envs/*.yml

What we provide:

The following sections document how the dataset was created with the code in this repo:

Pre-trained DeepLabCut Model:

  • You can use the full_cheetah model provided in the DLC Model Zoo to re-create the existing H5 files or to run inference on new videos (see the sketch below this list).
  • We also provide the videos and H5 outputs of all frames here.
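
As an illustration, inference with the Model Zoo full_cheetah model can be scripted roughly as follows (a sketch only: the project/experimenter names and video paths are placeholders, and the exact create_pretrained_project signature may vary with your DeepLabCut version):

import deeplabcut

# Placeholder path -- replace with your own video(s).
videos = ["/path/to/new_cheetah_video.mp4"]

# Downloads the Model Zoo weights, builds a DLC project around them and,
# by default, analyzes the listed videos, writing H5 keypoint files.
deeplabcut.create_pretrained_project(
    "cheetah_inference",   # project name (placeholder)
    "acinoset_user",       # experimenter (placeholder)
    videos,
    model="full_cheetah",
    analyzevideo=True,
    createlabeledvideo=True,
)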

Labelling Cheetah Body Positions:

If you want to label more cheetah data, you can also do so within the DeepLabCut framework. We provide a conda file for an easy install, but please see the DeepLabCut repo for installation and usage instructions.

$ conda env create -f conda_envs/DLC.yml -n DLC
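
For reference, the usual DeepLabCut labelling loop looks roughly like this (a sketch only; the project name and video paths are placeholders, and the authoritative workflow is in the DeepLabCut docs):

import deeplabcut

# Create a new project around the videos you want to label (placeholder paths).
config_path = deeplabcut.create_new_project(
    "cheetah_labels", "acinoset_user", ["/path/to/cheetah_video.mp4"]
)

# Extract candidate frames, then open the labelling GUI to annotate body positions.
deeplabcut.extract_frames(config_path)
deeplabcut.label_frames(config_path)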

AcinoSet Setup:

Navigate to the AcinoSet folder and build the environment:

$ conda env create -f conda_envs/acinoset.yml

Launch Jupyter Lab:

$ jupyter lab

Camera Calibration and 3D Reconstruction:

Intrinsic and Extrinsic Calibration:

Open calib_with_gui.ipynb and follow the instructions.

Alternatively, if the checkerboard points detected in calib_with_gui.ipynb are unsatisfactory, open saveMatlabPointsForAcinoSet.m in MATLAB and follow the instructions. Note that this requires MATLAB 2020b or later.
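
For intuition, intrinsic calibration from checkerboard images boils down to something like the following OpenCV sketch (illustrative only; the notebook's own implementation, board dimensions, and image paths will differ):

import glob
import cv2
import numpy as np

# Assumed checkerboard size (inner corners) -- the actual board may differ.
board_cols, board_rows = 9, 6
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.jpg"):  # placeholder path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recover the camera matrix K and lens distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)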

Optionally: Manually defining the shared points for extrinsic calibration:

You can manually define points on each video in a scene with Argus Clicker. A quick tutorial is found here.

Build the environment:

$ conda env create -f conda_envs/argus.yml

Launch Argus Clicker:

$ python
>>> import argus_gui as ag; ag.ClickerGUI()

Keyboard Shortcuts (See documentation here for more):

  • G ... to go to a specific frame
  • X ... to toggle the sync mode, setting all windows to the same frame
  • O ... to bring up the options dialog
  • S ... to bring up a save dialog

Then you must convert the output data from Argus to work with the rest of the pipeline (here is an example):

$ python argus_converter.py \
    --data_dir ../data/2019_03_07/extrinsic_calib/argus_folder

3D Reconstruction:

To reconstruct a cheetah into 3D, we offer three different pose estimation options on top of standard triangulation (TRI):

  • Sparse Bundle Adjustment (SBA)
  • Extended Kalman Filter (EKF)
  • Full Trajectory Estimation (FTE)

You can run each option separately. For example, simply open FTE.ipynb and follow the instructions! Otherwise, you can run all types of refinements in one go:

$ python all_optimizations.py --data_dir 2019_03_09/lily/run --start_frame 70 --end_frame 170 --dlc_thresh 0.5
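
For intuition on the baseline TRI step, two-view triangulation of a single 2D keypoint can be sketched with OpenCV as follows (illustrative only; the repo's own code handles all cameras and the full keypoint set, and the matrices below are placeholders):

import cv2
import numpy as np

# Assumed 3x4 projection matrices P = K [R | t] from the calibration step,
# plus one matching 2D keypoint per camera (placeholder values).
P1 = np.eye(3, 4)
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
pt1 = np.array([[320.0], [240.0]])
pt2 = np.array([[300.0], [240.0]])

# DLT triangulation returns homogeneous coordinates; divide by w to get XYZ.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()
print("triangulated point:", X)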

NB: When running the FTE, we recommend that you use the MA86 solver. For details on how to set this up, see these instructions.
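
As a rough illustration, if the FTE is posed as a Pyomo model solved with IPOPT (an assumption -- check the FTE code in this repo), the linear solver is selected through the solver options along these lines:

from pyomo.environ import ConcreteModel, Var, Objective, SolverFactory

# Tiny placeholder model just to show the solver plumbing; the real FTE model
# is built inside FTE.ipynb / all_optimizations.py.
model = ConcreteModel()
model.x = Var(initialize=1.0)
model.obj = Objective(expr=(model.x - 2.0) ** 2)

solver = SolverFactory("ipopt")
solver.options["linear_solver"] = "ma86"  # requires an IPOPT build linked against HSL MA86
solver.solve(model, tee=True)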

Citation

If you use our code or data, we kindly ask that you cite the paper below (note that it has been accepted to ICRA 2021, so please check back for an updated reference):

@misc{joska2021acinoset,
      title={AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs in the Wild}, 
      author={Daniel Joska and Liam Clark and Naoya Muramatsu and Ricardo Jericevich and Fred Nicolls and Alexander Mathis and Mackenzie W. Mathis and Amir Patel},
      year={2021},
      eprint={2103.13282},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
