End-To-End TensoRF connection #108

Open: wants to merge 27 commits into base branch `master`.
dc11e5a
Stash Squash Fix
SimonDaKappa Feb 23, 2024
cb39f6d
Fix: .env support for colmap worker
SimonDaKappa Feb 27, 2024
dd4ffc6
Video size constraint made div by 16
SimonDaKappa Mar 15, 2024
9d5614b
Feat: initial nerf worker containerization
SimonDaKappa Mar 19, 2024
d2e647d
Beginning work for Blurry Video Detection
AidanWestphal Mar 19, 2024
8761871
"Changes"
AidanWestphal Mar 22, 2024
20b1950
Modified back-end, front-end, and docker setup to allow for end-to-en…
SimonDaKappa Mar 24, 2024
315bda1
Adding TensoRF NOT as submodule
SimonDaKappa Mar 24, 2024
0e65250
Refactor TensoRF to be more inline with colmap worker. Removal of deb…
SimonDaKappa Mar 24, 2024
0ee84f6
Fixed consuming nerf-in typo
SimonDaKappa Mar 24, 2024
b1e9426
Merge pull request #104 from SimonDaKappa/nerf-worker-connection
SimonDaKappa Mar 24, 2024
c45bc81
First attempt gpu support for nerf-worker
SimonDaKappa Mar 26, 2024
dc09ec5
nerf-worker gpu docker image that is cpu tolerant. Works on cpu, need…
SimonDaKappa Mar 26, 2024
b2b458c
Merge pull request #106 from SimonDaKappa/nerf-worker-connection
SimonDaKappa Mar 26, 2024
ad24eb8
Manual merge with master'
SimonDaKappa Mar 26, 2024
53178de
Fix: process_finished_sfms typo
SimonDaKappa Mar 26, 2024
19b1742
Logging for COLMAP and Web-Server
AidanWestphal Mar 29, 2024
2996260
Initial Merge branch 'featurenerf-connection' of https://github.com/N…
AidanWestphal Mar 29, 2024
dc5069b
Logging for TensoRF and Correcting Merge
AidanWestphal Apr 5, 2024
596117e
Bad Video Handling Fix
AidanWestphal Apr 5, 2024
c6643e5
Bugfix Endpoints and Video Exception Handling
AidanWestphal Apr 5, 2024
d7a4fc0
Change blockConnection to spawn a thread to run nerf pipeline and use…
SimonDaKappa Apr 5, 2024
9c5ac54
Merge pull request #111 from NeRF-or-Nothing/nerf-pika-fix
SimonDaKappa Apr 5, 2024
c397080
Move thread join to ctrl-c capture
SimonDaKappa Apr 5, 2024
63cd9f5
Merge from aidan logging changes
SimonDaKappa Apr 9, 2024
36e7721
Merge branch 'AidanWestphal-master' into featurenerf-connection
SimonDaKappa Apr 9, 2024
1f4d7b8
Finalize Pika Thread Fix. Spawn nerf_worker process for gpu cuda capa…
SimonDaKappa Apr 15, 2024
8 changes: 8 additions & 0 deletions TensoRF/.gitignore
@@ -0,0 +1,8 @@
/log
/output
data
log
Pipfile*
__pycache__
.vscode
*.log
26 changes: 26 additions & 0 deletions TensoRF/Dockerfile
@@ -0,0 +1,26 @@
# NVIDIA CUDA Toolkit 12.3 for ubuntu
FROM nvidia/cuda:12.3.2-devel-ubuntu22.04

WORKDIR /TensoRF

RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/3bf863cc.pub
RUN export DEBIAN_FRONTEND=noninteractive && \
apt-get update -y && \
apt-get install libssl-dev -y && \
apt-get install software-properties-common -y && \
add-apt-repository ppa:deadsnakes/ppa && \
apt-get update -y && \
apt-get install curl -y && \
apt-get install python3.10 -y && \
apt-get install python3-pip -y

RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10

COPY ./TensoRF/requirements.txt requirements.txt
RUN python3.10 -m pip install --upgrade -r requirements.txt

# Overwritten by compose
COPY . .

# TODO add config support
CMD ["python3.10", "main.py"]
21 changes: 21 additions & 0 deletions TensoRF/LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2022 Anpei Chen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
151 changes: 151 additions & 0 deletions TensoRF/README.md
@@ -0,0 +1,151 @@
# TensoRF
### This code is based on a research project located [here](https://apchenstu.github.io/TensoRF/), with its original README included below.
A single-threaded worker that runs this project, consuming jobs from RabbitMQ and submitting results to a separate completed-job queue, is located at `worker.py`. The worker loads the config file at `configs/workerconfig.txt`, which defines the settings TensoRF uses to process each job. The config is currently static, but in the future these settings could be adjusted per job.

The worker consumes jobs from RabbitMQ, each described by a JSON message with the following shape:
```
{
"id": String,
"vid_width": int,
"vid_height": int,
"trained_model_file": String(optional),
"intrinsic_matrix": float[[]],
"frames": [
{
"file_path": String,
"extrinsic_matrix": float[[]]
},
...
]
}
```

Once the worker has finished training the NeRF and rendering the desired video, it submits a completion message to RabbitMQ, also in JSON format, containing the following:
```
{
"id": String,
"model_file": String,
"video_file": String
}
```
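As an illustrative sketch of the two messages above (helper names are hypothetical, and the pika/RabbitMQ wiring is omitted), an incoming job can be parsed and a completion message built like this:

```python
import json


def parse_job(body: bytes) -> dict:
    """Decode an incoming job message and check its required fields."""
    job = json.loads(body)
    required = ["id", "vid_width", "vid_height", "intrinsic_matrix", "frames"]
    missing = [k for k in required if k not in job]
    if missing:
        raise ValueError(f"job {job.get('id', '?')} missing fields: {missing}")
    return job


def build_completion(job_id: str, model_file: str, video_file: str) -> bytes:
    """Serialize the completion message the worker publishes back."""
    return json.dumps({
        "id": job_id,
        "model_file": model_file,
        "video_file": video_file,
    }).encode()


# Example payload matching the job template above.
sample = json.dumps({
    "id": "job-1",
    "vid_width": 800,
    "vid_height": 800,
    "intrinsic_matrix": [[800.0, 0, 400], [0, 800.0, 400], [0, 0, 1]],
    "frames": [{"file_path": "train/0001.png",
                "extrinsic_matrix": [[1, 0, 0, 0], [0, 1, 0, 0],
                                     [0, 0, 1, 0], [0, 0, 0, 1]]}],
}).encode()

job = parse_job(sample)
done = build_completion(job["id"], "model.th", "video.mp4")
```

In the real worker these would sit inside the RabbitMQ consume callback, with `done` published to the completed-job queue.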

# Usage of Local Worker
Here are basic instructions for using `worker.py` in local mode:
### Running worker.py
To train a new TensoRF and render a new video, use: `python worker.py --config configs/localworkerconfig.txt`.

If you only want to render a new video from a TensoRF model that has already been trained, use:
`python worker.py --config configs/localworkerconfig.txt --ckpt [PATH TO TENSORF MODEL] --render_only 1`
This loads a model from the specified path and uses it to render the camera motion specified in the `transforms_render.json` input file.

Example for render only: `python worker.py --config configs/localworkerconfig.txt --ckpt log/tensorf_sfm_data_VM/tensorf_sfm_data_VM.th --render_only 1`
### Input data
The worker takes input from `data/sfm_data/`. In this folder, provide a JSON file named `transforms_train.json` containing the transformation data from structure-from-motion, along with a subfolder named `train` containing all of the image files referenced in `transforms_train.json`. This gives the worker all the data it needs to train a TensoRF. Once the model is trained, the worker loads the final input file, `transforms_render.json`, which contains the desired camera path to render, in the same format as the training JSON (template above).

Example input file structure:

![Screenshot_20220729_065836](https://user-images.githubusercontent.com/49171429/181745902-920d5483-28e6-4412-bc07-9c770544057f.png)
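A minimal sketch of loading this input layout (the helper name is illustrative, not part of `worker.py`; paths follow the structure described above):

```python
import json
from pathlib import Path


def load_sfm_transforms(datadir="data/sfm_data"):
    """Load transforms_train.json and resolve the image paths it references.

    Expects the layout described above: transforms_train.json next to a
    train/ subfolder holding the images referenced by each frame entry.
    """
    datadir = Path(datadir)
    with open(datadir / "transforms_train.json") as f:
        transforms = json.load(f)
    # Each frame entry carries a file_path relative to the data directory.
    image_paths = [datadir / frame["file_path"] for frame in transforms["frames"]]
    missing = [p for p in image_paths if not p.exists()]
    if missing:
        raise FileNotFoundError(
            f"{len(missing)} referenced images not found, e.g. {missing[0]}")
    return transforms, image_paths
```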

### Output data
The worker outputs final results to `log/tensorf_sfm_data_VM`.

Within this folder, the only relevant outputs are the rendered images and final video in the `imgs_render_all` folder, and the trained TensoRF model saved at `tensorf_sfm_data.th`. This trained model can be reused by the worker via the `--ckpt` flag.


## [Project page](https://apchenstu.github.io/TensoRF/) | [Paper](https://arxiv.org/abs/2203.09517)
This repository contains a PyTorch implementation of the paper: [TensoRF: Tensorial Radiance Fields](https://arxiv.org/abs/2203.09517). Our work presents a novel approach to modeling and reconstructing radiance fields that achieves a super
**fast** training process, a **compact** memory footprint, and **state-of-the-art** rendering quality.<br><br>


https://user-images.githubusercontent.com/16453770/158920837-3fafaa17-6ed9-4414-a0b1-a80dc9e10301.mp4
## Installation

#### Tested on Ubuntu 20.04 + Pytorch 1.10.1

Install environment:
```
conda create -n TensoRF python=3.8
conda activate TensoRF
pip install torch torchvision
pip install tqdm scikit-image opencv-python configargparse lpips imageio-ffmpeg kornia tensorboard
pip install -r requirements.txt
```


## Dataset
* [Synthetic-NeRF](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)
* [Synthetic-NSVF](https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NSVF.zip)
* [Tanks&Temples](https://dl.fbaipublicfiles.com/nsvf/dataset/TanksAndTemple.zip)
* [Forward-facing](https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1)



## Quick Start
The training script is `train.py`; to train a TensoRF:

```
python train.py --config configs/lego.txt
```


We provide a few example configurations in the `configs` folder. Please note:

`dataset_name`, choices = ['blender', 'llff', 'nsvf', 'tankstemple'];

`shadingMode`, choices = ['MLP_Fea', 'SH'];

`model_name`, choices = ['TensorVMSplit', 'TensorCP'], corresponding to the VM and CP decomposition.
You need to uncomment the last few rows of the configuration file if you want to train with the TensorCP model;

`n_lamb_sigma` and `n_lamb_sh` specify the number of basis components for density and appearance along the XYZ
dimensions;

`N_voxel_init` and `N_voxel_final` control the resolution of matrix and vector;

`N_vis` and `vis_every` control the visualization during training;

You need to set `--render_test 1`/`--render_path 1` if you want to render testing views or path after training.

For more options, see `opt.py`.
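The options above combine config-file values with command-line flags; `opt.py` uses `configargparse` for this, so a sketch of the flag surface using only stdlib `argparse` (defaults here are assumptions, and the real `opt.py` defines many more options) looks like:

```python
import argparse


def make_parser():
    # Stdlib sketch of the option surface described above; the real opt.py
    # uses configargparse so values can also come from a --config file.
    p = argparse.ArgumentParser()
    p.add_argument("--dataset_name", default="blender",
                   choices=["blender", "llff", "nsvf", "tankstemple"])
    p.add_argument("--shadingMode", default="MLP_Fea",
                   choices=["MLP_Fea", "SH"])
    p.add_argument("--model_name", default="TensorVMSplit",
                   choices=["TensorVMSplit", "TensorCP"])
    p.add_argument("--ckpt", type=str, default=None)
    # Render switches are integers (0/1), matching the README's usage.
    p.add_argument("--render_only", type=int, default=0)
    p.add_argument("--render_test", type=int, default=0)
    p.add_argument("--render_path", type=int, default=0)
    return p


args = make_parser().parse_args(["--model_name", "TensorCP", "--render_test", "1"])
```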

### For pretrained checkpoints and results please see:
[https://1drv.ms/u/s!Ard0t_p4QWIMgQ2qSEAs7MUk8hVw?e=dc6hBm](https://1drv.ms/u/s!Ard0t_p4QWIMgQ2qSEAs7MUk8hVw?e=dc6hBm)



## Rendering

```
python train.py --config configs/lego.txt --ckpt path/to/your/checkpoint --render_only 1 --render_test 1
```

You can simply pass `--render_only 1` and `--ckpt path/to/your/checkpoint` to render images from a pre-trained
checkpoint. You may also need to specify what you want to render: `--render_test 1`, `--render_train 1`, or `--render_path 1`.
The rendering results are located in your checkpoint folder.

## Extracting mesh
You can also export the mesh by passing `--export_mesh 1`:
```
python train.py --config configs/lego.txt --ckpt path/to/your/checkpoint --export_mesh 1
```
Note: Please re-train the model rather than using the pretrained checkpoints provided by us for mesh extraction,
because some render parameters have changed.

## Training with your own data
We provide two options for training on your own image set:

1. Follow the instructions in the [NSVF repo](https://github.com/facebookresearch/NSVF#prepare-your-own-dataset), then set `dataset_name` to 'tankstemple'.
2. Calibrate images with the script from [NGP](https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md):
`python dataLoader/colmap2nerf.py --colmap_matcher exhaustive --run_colmap`, then adjust the `datadir` in `configs/your_own_data.txt`. Please check `scene_bbox` and `near_far` if you get abnormal results.


## Citation
If you find our code or paper helpful, please consider citing:
```
@INPROCEEDINGS{Chen2022ECCV,
author = {Anpei Chen and Zexiang Xu and Andreas Geiger and Jingyi Yu and Hao Su},
title = {TensoRF: Tensorial Radiance Fields},
booktitle = {European Conference on Computer Vision (ECCV)},
year = {2022}
}
```
41 changes: 41 additions & 0 deletions TensoRF/configs/drums.txt
@@ -0,0 +1,41 @@

dataset_name = blender
datadir = ./data/nerf_synthetic/drums
expname = tensorf_drums_VM
basedir = ./log

n_iters = 30000
batch_size = 4096

N_voxel_init = 2097156 # 128**3
N_voxel_final = 27000000 # 300**3
upsamp_list = [2000,3000,4000,5500,7000]
update_AlphaMask_list = [2000,4000]

N_vis = 5
vis_every = 10000

render_test = 1

n_lamb_sigma = [16,16,16]
n_lamb_sh = [48,48,48]
model_name = TensorVMSplit


shadingMode = MLP_Fea
fea2denseAct = softplus

view_pe = 2
fea_pe = 2

L1_weight_inital = 8e-5
L1_weight_rest = 4e-5
rm_weight_mask_thre = 1e-4

## please uncomment the following configuration to train with the CP model
#model_name = TensorCP
#n_lamb_sigma = [96]
#n_lamb_sh = [288]
#N_voxel_final = 125000000 # 500**3
#L1_weight_inital = 1e-5
#L1_weight_rest = 1e-5
35 changes: 35 additions & 0 deletions TensoRF/configs/flower.txt
@@ -0,0 +1,35 @@

dataset_name = llff
datadir = ./data/nerf_llff_data/flower
expname = tensorf_flower_VM
basedir = ./log

downsample_train = 4.0
ndc_ray = 1

n_iters = 25000
batch_size = 4096

N_voxel_init = 2097156 # 128**3
N_voxel_final = 262144000 # 640**3
upsamp_list = [2000,3000,4000,5500]
update_AlphaMask_list = [2500]

N_vis = -1 # vis all testing images
vis_every = 10000

render_test = 1
render_path = 1

n_lamb_sigma = [16,4,4]
n_lamb_sh = [48,12,12]

shadingMode = MLP_Fea
fea2denseAct = relu

view_pe = 0
fea_pe = 0

TV_weight_density = 1.0
TV_weight_app = 1.0

41 changes: 41 additions & 0 deletions TensoRF/configs/lego.txt
@@ -0,0 +1,41 @@

dataset_name = blender
datadir = ./data/nerf_synthetic/lego
expname = tensorf_lego_VM
basedir = ./log

n_iters = 3000
batch_size = 4096

N_voxel_init = 2097156 # 128**3
N_voxel_final = 27000000 # 300**3
upsamp_list = [2000,3000,4000,5500,7000]
update_AlphaMask_list = [2000,4000]

N_vis = 5
vis_every = 10000

render_test = 1

n_lamb_sigma = [16,16,16]
n_lamb_sh = [48,48,48]
model_name = TensorVMSplit


shadingMode = MLP_Fea
fea2denseAct = softplus

view_pe = 2
fea_pe = 2

L1_weight_inital = 8e-5
L1_weight_rest = 4e-5
rm_weight_mask_thre = 1e-4

## please uncomment the following configuration to train with the CP model
#model_name = TensorCP
#n_lamb_sigma = [96]
#n_lamb_sh = [288]
#N_voxel_final = 125000000 # 500**3
#L1_weight_inital = 1e-5
#L1_weight_rest = 1e-5
36 changes: 36 additions & 0 deletions TensoRF/configs/localworkerconfig.txt
@@ -0,0 +1,36 @@

dataset_name = sfm2nerf
datadir = ./data/sfm_data
expname = tensorf_sfm_data_VM
basedir = ./log

n_iters = 3000
batch_size = 4096

N_voxel_init = 2097156 # 128**3
N_voxel_final = 27000000 # 300**3
upsamp_list = [2000,3000,4000,5500,7000]
update_AlphaMask_list = [2000,4000]

N_vis = 5
vis_every = 10000

render_test = 1

n_lamb_sigma = [16,16,16]
n_lamb_sh = [48,48,48]
model_name = TensorVMSplit


shadingMode = MLP_Fea
fea2denseAct = softplus

view_pe = 2
fea_pe = 2

TV_weight_density = 0.1
TV_weight_app = 0.01

#L1_weight_inital = 8e-5
#L1_weight_rest = 4e-5
rm_weight_mask_thre = 1e-4