- Program: MVA Master's degree class on 3D point clouds. ENS Paris-Saclay.
- Author of this code: Balthazar Neveu
- Study of ADOP: Approximate Differentiable One-Pixel Point Rendering
📜 Report
This is a point cloud renderer (one pixel per point) followed by a CNN image processing stage (deferred rendering that inpaints the holes between projected points).
| Live inference |
|---|
| using a very tiny CNN (a.k.a. vanilla decoder), 100k points only |
Local install of `pixr`:

```shell
git clone https://github.com/balthazarneveu/per-pixel-point-rendering.git
cd per-pixel-point-rendering
pip install -r requirements.txt
pip install -e .
```
Run the demo on a pre-trained scene:

```shell
python scripts/novel_views_interactive.py -e 55 -t pretrained_scene
```
- 1/ Download the NeRF Blender scenes from Google Drive and put them in the `__data` folder.
- 2/ Get BlenderProc: `pip install blenderproc`
- 3/ Optional: get a few environment map textures (e.g. from PolyHaven).
- 4/ Render:

```shell
python studies/full_render.py -s material_balls -n 4 -m orbit
```
```python
if args.scene == "material_balls":
    config = {
        "distance": 4.,
        "altitude": 0.,
        "background_map": "__world_maps/city.exr"
    }
```
- rendering library:
  - per-pixel splatting
    - 1st pass: closest point retrieval (Z-buffer)
    - 2nd pass: color aggregation
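The two-pass idea can be sketched as follows. This is a minimal NumPy illustration, not the library's actual API: `splat_closest` and its arguments are made-up names for the example, and real one-pixel splatting is vectorized on GPU.

```python
import numpy as np

def splat_closest(points_uv, depths, colors, h, w):
    """Two-pass one-pixel splatting sketch (hypothetical helper).

    Pass 1: per-pixel Z-buffer keeping the index of the closest point.
    Pass 2: write the retained point's color into the image.
    """
    zbuffer = np.full((h, w), np.inf)
    winner = np.full((h, w), -1, dtype=int)  # closest point index per pixel
    # Pass 1: closest point retrieval (hard Z-test)
    for i, (u, v) in enumerate(points_uv):
        if 0 <= u < w and 0 <= v < h and depths[i] < zbuffer[v, u]:
            zbuffer[v, u] = depths[i]
            winner[v, u] = i
    # Pass 2: aggregate colors (here simply gather the winner's color)
    image = np.zeros((h, w, colors.shape[1]))
    mask = winner >= 0
    image[mask] = colors[winner[mask]]
    return image

# two points project to the same pixel; the closer (green) one wins
img = splat_closest(np.array([[1, 1], [1, 1]]),
                    np.array([2.0, 1.0]),
                    np.array([[1., 0., 0.], [0., 1., 0.]]), 4, 4)
```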
- synthesis / rasterizer:
  - world definition with triangle primitives or meshes
  - view synthesis of a scene with a rasterizer
- learning:
  - architectures zoo
  - define experiments (scene name, architecture, hyperparameters), then run e.g.:

```shell
python scripts/optimize_point_based_neural_renderer.py -e 70
```
- studies:
  - interactive visualization
  - rasterizer check
  - differentiability check of splatting
    - ⚠️ so far, splatting is not differentiable with regard to the camera.
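A minimal sketch of what such a differentiability check can look like (made-up example, not the study's actual code): with the Z-buffer winners frozen, the rendered image is linear in the per-point colors, so an analytic gradient matches finite differences. The winner indices themselves come from a hard argmin over depth, which is why gradients do not flow to the camera parameters.

```python
import numpy as np

def render_loss(colors, winner):
    """Sum of the rendered image; `winner` holds the closest-point index per pixel."""
    return colors[winner].sum()

rng = np.random.default_rng(0)
colors = rng.random((5, 3))          # 5 points, RGB pseudo-colors
winner = np.array([2, 0, 4, 1])      # fixed Z-buffer outcome, one index per pixel

# analytic gradient: each point contributes once per pixel it wins
grad = np.zeros_like(colors)
for i in winner:
    grad[i] += 1.0

# finite-difference check on one entry
eps = 1e-6
bumped = colors.copy()
bumped[2, 1] += eps
fd = (render_loss(bumped, winner) - render_loss(colors, winner)) / eps
```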
- Image tensors: `[N, C, H, W]`
  - N = number of images in a batch (batch size), with $N \le n$ where $n$ is the number of views
  - C = number of channels (1 = luma/depth, 3 = RGB, or more)
  - H, W = spatial dimensions
- Geometry tensors: `[M, p, d]`
  - M = number of primitives in a batch ($m$ is the total number of points)
  - p = number of points per primitive: p=1 for points, p=3 for triangles
  - d = coordinate dimension: d=1 for depth, d=3 for xyz, d=4 for homogeneous coordinates xyz1
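These conventions in code (array names and sizes are made up for the example; values are placeholders):

```python
import numpy as np

n_views, batch = 8, 4                     # n views total, N <= n in a batch
images = np.zeros((batch, 3, 480, 640))   # [N, C, H, W] RGB image batch
depth = np.zeros((batch, 1, 480, 640))    # [N, 1, H, W] depth maps (C=1)
triangles = np.zeros((100, 3, 4))         # [M, p=3, d=4] homogeneous triangles
points = np.zeros((300, 1, 3))            # [M, p=1, d=3] xyz point cloud
```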
| Fuzzy depth test (varying | Normal culling | Multiscale splatting |
|---|---|---|
To each point of the point cloud, we associate a color vector (later this vector will have a larger dimension: pseudo-colors instead of RGB).
| Rendered colored point cloud - novel view synthesis | Groundtruth shaded images used to get colors per point so that the final rendering is faithful |
|---|---|
| Closest point | Fuzzy depth test |
|---|---|
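A sketch of the fuzzy depth test idea compared above (my reading of the technique, following ADOP, not the library's code): instead of keeping only the strictly closest point per pixel, keep every point whose depth lies within a relative tolerance of the Z-buffer minimum and average their colors.

```python
import numpy as np

def fuzzy_aggregate(depths, colors, z_min, alpha=0.01):
    """Average the colors of points whose depth is within (1 + alpha) * z_min.

    Hypothetical helper: `alpha` is the relative depth tolerance; alpha -> 0
    recovers the hard closest-point test.
    """
    keep = depths <= (1.0 + alpha) * z_min
    return colors[keep].mean(axis=0)

# three candidate points for one pixel: two near-closest, one far behind
depths = np.array([1.0, 1.005, 2.0])
colors = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
out = fuzzy_aggregate(depths, colors, z_min=depths.min(), alpha=0.01)
# the two near-closest points are averaged, the far point is culled
```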
To reproduce this demo:

```shell
python studies/interactive_projections.py -n 200000 -s test_aliasing
```

Sampling the point cloud from the triangles can take some time.