Despite considerable progress in stereo depth estimation, omnidirectional imaging remains underexplored, mainly due to the lack of appropriate data. We introduce Helvipad, a real-world dataset for omnidirectional stereo depth estimation, consisting of 40K frames from video sequences across diverse environments, including crowded indoor and outdoor scenes under varying lighting conditions. Collected using two 360° cameras in a top-bottom setup and a LiDAR sensor, the dataset includes accurate depth and disparity labels obtained by projecting 3D point clouds onto equirectangular images. Additionally, we provide an augmented training set with significantly increased label density through depth completion. We benchmark leading stereo depth estimation models for both standard and omnidirectional images. The results show that while recent stereo methods perform decently, accurately estimating depth in omnidirectional imaging remains a significant challenge. To address this, we introduce the necessary adaptations to stereo models, achieving improved performance.
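To illustrate the labeling step, here is a minimal sketch of projecting LiDAR points onto an equirectangular depth map. It assumes the points are already transformed into the camera frame; the axis conventions, filtering, and interpolation used in the actual dataset pipeline may differ.

```python
import numpy as np

def project_to_equirectangular(points, height, width):
    """Project 3D points (N, 3) in the camera frame onto an
    equirectangular depth map. A minimal sketch; the actual
    pipeline may use different axis conventions and filtering."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)          # radial distance
    theta = np.arctan2(x, z)                        # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(y / depth, -1.0, 1.0))  # elevation in [-pi/2, pi/2]

    # Map angles to pixel coordinates.
    u = ((theta + np.pi) / (2 * np.pi) * width).astype(int) % width
    v = np.clip(((phi + np.pi / 2) / np.pi * height).astype(int), 0, height - 1)

    # Keep the nearest point per pixel (z-buffering).
    depth_map = np.full((height, width), np.inf)
    np.minimum.at(depth_map, (v, u), depth)
    depth_map[np.isinf(depth_map)] = 0.0            # 0 = no measurement
    return depth_map
```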
The dataset is organized into training and testing subsets with the following structure:
helvipad/
├── train/
│   ├── camera_videos             # Raw video footage
│   ├── depth_maps                # Depth maps generated from LiDAR data
│   ├── depth_maps_augmented      # Augmented depth maps using depth completion
│   ├── disparity_maps            # Disparity maps computed from depth maps
│   ├── disparity_maps_augmented  # Augmented disparity maps using depth completion
│   ├── images_top                # Top-camera RGB images
│   ├── images_bottom             # Bottom-camera RGB images
│   └── LiDAR_pcd                 # Original LiDAR point cloud data
└── test/
    ├── camera_videos             # Raw video footage
    ├── depth_maps                # Depth maps generated from LiDAR data
    ├── disparity_maps            # Disparity maps computed from depth maps
    ├── images_top                # Top-camera RGB images
    ├── images_bottom             # Bottom-camera RGB images
    └── LiDAR_pcd                 # Original LiDAR point cloud data
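For reference, a minimal sketch of how a single frame could be loaded from this layout using OpenCV. The file names, extensions, and value encodings below are assumptions for illustration; consult the released files for the exact naming scheme and depth/disparity scaling.

```python
import os
import cv2

def load_sample(root, split, frame_id):
    """Load a top/bottom image pair with its depth and disparity maps.
    File names and map encodings here are illustrative assumptions;
    check the released dataset for the exact scheme and value scaling."""
    def path(folder):
        return os.path.join(root, split, folder, f"{frame_id}.png")

    top = cv2.imread(path("images_top"), cv2.IMREAD_COLOR)
    bottom = cv2.imread(path("images_bottom"), cv2.IMREAD_COLOR)
    # Depth/disparity maps are often stored as 16-bit PNGs with a fixed
    # scale factor; IMREAD_UNCHANGED preserves the raw stored values.
    depth = cv2.imread(path("depth_maps"), cv2.IMREAD_UNCHANGED)
    disparity = cv2.imread(path("disparity_maps"), cv2.IMREAD_UNCHANGED)
    return top, bottom, depth, disparity

top, bottom, depth, disparity = load_sample("helvipad", "train", "000000")
```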
We evaluate multiple state-of-the-art and widely used stereo matching methods, designed for both standard and 360° images. All models are trained on a single NVIDIA A100 GPU with the largest batch size that fits in memory, to ensure comparable use of computational resources.
| Method | Type | Disp-MAE (°) | Disp-RMSE (°) | Disp-MARE | Depth-MAE (m) | Depth-RMSE (m) | Depth-MARE |
|---|---|---|---|---|---|---|---|
| PSMNet | Stereo | 0.33 | 0.54 | 0.20 | 2.79 | 6.17 | 0.29 |
| 360SD-Net | 360° Stereo | 0.21 | 0.42 | 0.18 | 2.14 | 5.12 | 0.15 |
| IGEV-Stereo | Stereo | 0.22 | 0.41 | 0.17 | 1.85 | 4.44 | 0.15 |
| 360-IGEV-Stereo | 360° Stereo | 0.18 | 0.39 | 0.15 | 1.77 | 4.36 | 0.14 |
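The reported metrics follow their standard definitions, evaluated only on pixels with a valid ground-truth label. A minimal sketch of how they could be computed (the benchmark code may apply additional masking or per-sequence averaging):

```python
import numpy as np

def depth_metrics(pred, gt):
    """MAE, RMSE, and mean absolute relative error (MARE) over pixels
    with a valid ground-truth measurement (gt > 0). A sketch of the
    standard definitions, not the exact benchmark implementation."""
    mask = gt > 0
    err = np.abs(pred[mask] - gt[mask])
    mae = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    mare = (err / gt[mask]).mean()
    return mae, rmse, mare
```

Note that MARE is a relative error and therefore unitless, which is why the table lists no unit for the Disp-MARE and Depth-MARE columns.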
The dataset will soon be available for download here.
For more information, visualizations, and updates, visit the project page.
If you use the Helvipad dataset in your research, please cite our paper:
@misc{zayene2024helvipad,
  author        = {Zayene, Mehdi and Endres, Jannik and Havolli, Albias and Corbière, Charles and Cherkaoui, Salim and Ben Ahmed Kontouli, Alexandre and Alahi, Alexandre},
  title         = {Helvipad: A Real-World Dataset for Omnidirectional Stereo Depth Estimation},
  year          = {2024},
  eprint        = {2403.16999},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
This project was developed at the Visual Intelligence for Transportation Laboratory (VITA) at EPFL. We thank all VITA lab members for their insightful feedback and help in improving the quality of this manuscript. We also express our gratitude to Dr. Simone Schaub-Meyer and Oliver Hahn for their advice towards the end of the project.