This is the official repository of the ECHP dataset introduced in the paper "EgoFish3D: Egocentric 3D Pose Estimation from a Fisheye Camera via Self-Supervised Learning".
Information:
- Images from both the third-person view and the egocentric view for self-supervised learning.
- Diverse real-world scenarios.
- Different subjects performing a variety of daily actions in different body textures (clothing/appearance).
To address the challenges in egocentric 3D pose estimation, we propose a real-world egocentric human pose dataset, named ECHP, captured with a head-mounted GoPro camera equipped with a fisheye lens. The training and validation sets of the ECHP dataset consist of 30 video sequences (30 fps) recorded in 8 diverse real-world indoor/outdoor scenes, covering 10 daily actions performed by 9 subjects in 20 different body textures. The ten daily actions are: squatting, walking, dancing, stretching, waving, boxing, kicking, touching, clamping, and knocking. The test set of the ECHP dataset consists of 7 video sequences performed by 4 subjects with new body textures, captured together with a multi-camera motion capture system that provides ground-truth annotations.
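For orientation, below is a minimal Python sketch of how the 30 fps video sequences could be read frame by frame with OpenCV. The directory layout (`ECHP/train/<scene>/<sequence>.mp4`) and file names are hypothetical placeholders for illustration; please check the released archive for the actual structure.

```python
import cv2
from pathlib import Path

# Hypothetical root/layout for illustration only; the released
# archive may be organized differently.
ECHP_ROOT = Path("ECHP")

def iter_frames(video_path: Path):
    """Yield BGR frames from one 30 fps fisheye video sequence."""
    cap = cv2.VideoCapture(str(video_path))
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame
    finally:
        cap.release()

for video in sorted(ECHP_ROOT.glob("train/*/*.mp4")):
    for frame in iter_frames(video):
        pass  # e.g., run an egocentric pose model on `frame`
```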
We have published the sub-dataset captured by the VICON system on Baidu (password: ECHP) and OneDrive.
To request the full dataset, please send an email to {[email protected]}.
Below are some qualitative results on our ECHP dataset; please refer to our paper for more results.
Please cite our work if you find this dataset or paper useful for your research.
@ARTICLE{liuegofish3d,
  author={Liu, Yuxuan and Yang, Jianxin and Gu, Xiao and Chen, Yijun and Guo, Yao and Yang, Guang-Zhong},
  journal={IEEE Transactions on Multimedia},
  title={EgoFish3D: Egocentric 3D Pose Estimation From a Fisheye Camera via Self-Supervised Learning},
  year={2023},
  volume={25},
  pages={8880-8891},
  doi={10.1109/TMM.2023.3242551}}