Project Page | Paper | Supplementary Materials
- Yoonwoo Jeong [Google Scholar]
- Seungjoo Shin [Google Scholar]
- Junha Lee [Google Scholar]
- Chris Choy [Google Scholar]
- Animashree Anandkumar [Google Scholar]
- Minsu Cho [Google Scholar]
- Jaesik Park [Google Scholar]
- 22.10.27: Added HuggingFace repo for ScanNet.
- 22.10.25: Moved PeRFception datasets from OneDrive to HuggingFace.
- 22.08.04: The first version of PeRFception has been released.
The recent progress in implicit 3D representations, i.e., Neural Radiance Fields (NeRFs), has made accurate and photorealistic 3D reconstruction possible in a differentiable manner. This new representation can effectively convey the information of hundreds of high-resolution images in one compact format and allows photorealistic synthesis of novel views. In this work, using a variant of NeRF called Plenoxels, we create the first large-scale implicit-representation datasets for perception tasks, called PeRFception, which consists of two parts that incorporate both object-centric and scene-centric scans for classification and segmentation. It shows a significant memory compression rate (96.4%) from the original dataset, while containing both 2D and 3D information in a unified form. We construct classification and segmentation models that directly take this implicit format as input, and also propose a novel augmentation technique to avoid overfitting to image backgrounds. The code and data will be publicly available.
Downloading PeRFception Datasets [CO3D-link1] [CO3D-link2] [CO3D-link3] [ScanNet]
# Link1 - PeRFception-CO3D-v1
git clone https://huggingface.co/datasets/YWjimmy/PeRFception-v1-1
# Link2 - PeRFception-CO3D-v1
git clone https://huggingface.co/datasets/YWjimmy/PeRFception-v1-2
# Link3 - PeRFception-CO3D-v1
git clone https://huggingface.co/datasets/YWjimmy/PeRFception-v1-3
# Link1 - PeRFception-ScanNet
git clone https://huggingface.co/datasets/YWjimmy/PeRFception-ScanNet
mkdir <repo>
cd <repo>
git init
git remote add -f origin [link]
git config core.sparseCheckout true
echo "some/dir/" >> .git/info/sparse-checkout
echo "another/sub/tree" >> .git/info/sparse-checkout
git pull origin main
# ex) If you want to download data only from 288_30460_58530
echo "30/plenoxel_co3d_288_30460_58530" >> .git/info/sparse-checkout
Dataset | # Scenes | # Frames | 3D Shape | Features | 3D-BKGD | Memory | Memory (Rel.)
---|---|---|---|---|---|---|---
CO3D | 18.6K | 1.5M | pcd | C | X | 1.44TB | 100%
PeRFception-CO3D | 18.6K | - | voxel | SH + D | O | 1.33TB | 92.4%
Dataset | # Scenes | # Frames | 3D Shape | Features | 3D-BKGD | Memory | Memory (Rel.)
---|---|---|---|---|---|---|---
ScanNet | 1.5K | 2.5M | pcd | C | X | 966GB | 100%
PeRFception-ScanNet | 1.5K | - | voxel | SH + D | O | 35GB | 3.6%
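Each PeRFception scene is stored as a Plenoxels-style sparse voxel grid holding a density value and spherical-harmonic (SH) color coefficients per occupied voxel (the "SH + D" features above). The snippet below is a minimal sketch of inspecting one downloaded scene; the file name and array keys follow the common Plenoxels checkpoint layout and are assumptions, so adjust them to match the released files.
import numpy as np

# Hypothetical path to a single downloaded CO3D scene; adjust to your local checkout.
ckpt = np.load("plenoxel_co3d_288_30460_58530/data.npz")

# Key names below are assumptions based on the usual Plenoxels checkpoint format.
links = ckpt["links"]            # dense index grid; -1 marks empty voxels
density = ckpt["density_data"]   # one density value per occupied voxel
sh = ckpt["sh_data"]             # SH color coefficients per occupied voxel

print("grid resolution :", links.shape)
print("occupied voxels :", int((links >= 0).sum()))
print("density / SH    :", density.shape, sh.shape)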
Our code is verified on Ubuntu 20.04 with CUDA 11.1.
conda create -n perfception -c anaconda python=3.8 -y
conda activate perfception
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge -y
pip3 install imageio tqdm requests configargparse scikit-image imageio-ffmpeg piqa wandb pytorch_lightning==1.5.5 opencv-python gin-config gdown plyfile
pip3 install .
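After installation, a quick way to confirm that the expected PyTorch build and the GPU are visible:
import torch

print(torch.__version__)          # expected: 1.8.0
print(torch.cuda.is_available())  # should print True on a machine with CUDA 11.1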
We provide a short demo for rendering a scene from CO3D or ScanNet. After installing the requirements, you can run the demo with the commands below:
# CO3D demo
python3 -m run --ginc configs/co3d.gin
# ScanNet demo
python3 -m run --ginc configs/scannet.gin
We deliver the full code to reproduce the performance reported in the main paper. To run the code, first place the datasets in the layout below:
data
|
|--- co3d
-- apple
-- banana
...
|
|--- scannet
-- scene000_00
-- scene000_01
...
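As a sanity check, the short script below (a sketch assuming the layout above, rooted at ./data) lists the scenes it finds:
from pathlib import Path

# Assumes the directory layout shown above.
for subset in ("co3d", "scannet"):
    root = Path("data") / subset
    scenes = sorted(p.name for p in root.iterdir() if p.is_dir())
    print(f"{subset}: {len(scenes)} scenes, e.g. {scenes[:3]}")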
ScanNet-v2 can be downloaded here and CO3D-v1 can be downloaded here. Thanks to wandb's sweep functionality, we could manage the large number of training scripts. You can download the sweep file here.
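If you prefer to launch the sweep programmatically, the sketch below shows one way to do it with the wandb Python API, assuming the downloaded sweep file is a standard wandb sweep YAML; the file name and project name are placeholders.
import wandb
import yaml

# Placeholder names: replace with the downloaded sweep file and your wandb project.
with open("sweep.yaml") as f:
    sweep_config = yaml.safe_load(f)

sweep_id = wandb.sweep(sweep=sweep_config, project="perfception")
wandb.agent(sweep_id)  # runs the command defined by the sweep's `program` entry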
Codes for downstream tasks: https://github.com/POSTECH-CVLab/NeRF-Downstream
We benchmark several 2D classification models on images rendered from PeRFception-CO3D. For faster reproduction, we also provide the rendered images from PeRFception-CO3D at this link. Before running the code, make sure you have placed the downloaded dataset at data/perfcepton_2d. You can easily reproduce the reported scores using the scripts in scripts/downstream/2d_cls/[model].sh. Details of the training pipeline and models are elaborated in the main paper.
The pretrained models are available via the links below:
We also benchmark several 3D classification models on PeRFception-CO3D. We provide the full code at this link.
In PeRFception-ScanNet, we have evaluated several 3D semantic segmentation models with depth-supervised labels.
According to the official CO3D repository [link], the authors have released an improved version of CO3D, v2, which should yield better rendering quality and more accurate geometry in our pipeline. We are planning to extend this work to PeRFception-CO3D-v2 built from CO3D-v2.
@article{jeong2022perfception,
title = {PeRFception: Perception using Radiance Fields},
author = {Jeong, Yoonwoo and Shin, Seungjoo and Lee, Junha and Choy, Chris and Anandkumar, Anima and Cho, Minsu and Park, Jaesik},
year = {2022}
}
We thank the reviewers for their constructive comments and suggestions.