## Overview
- Two separate point cloud renderers with support for splatting and Phong lighting
- [Ground Truth Renderer](https://github.com/momower1/PointCloudEngine/wiki/Ground-Truth-Renderer) renders a point cloud with splatting, a pull-push algorithm or a neural rendering pipeline and can compare the results against a mesh
- [Octree Renderer](https://github.com/momower1/PointCloudEngine/wiki/Octree-Renderer) builds an octree in a preprocessing step and renders the point cloud with LOD control and splatting
- [PlyToPointcloud](https://github.com/momower1/PointCloudEngine/wiki/PlyToPointcloud) tool converts .ply files with _x,y,z,nx,ny,nz,red,green,blue_ format into the required .pointcloud format

## Getting Started
- Follow the install guide for the latest Windows 10 64-bit release from [Releases](https://github.com/momower1/PointCloudEngine/releases)
- Drag and drop your .ply files onto _PlyToPointcloud.exe_
- Adjust the _Settings.txt_ file (optional)
- Run _PointCloudEngine.exe_
- Move the camera so close that the individual splats are visible (depending on the scale this might not happen)
- Adjust the _Sampling Rate_ in such a way that the splats overlap just a bit
- Enable _Blending_, look at the point cloud from various distances and adjust the blend factor so that only close surfaces are blended together
- The Pull-Push algorithm can be configured in a similar way (a sketch of the underlying idea follows below)
- The Neural Rendering Pipeline requires loading all three exported networks
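
For intuition, pull-push fills the holes of a sparse point rendering with a mip-style pyramid: covered pixels are averaged down to coarser levels (pull) and the filled result is blended back up (push). The numpy sketch below illustrates that general idea only — it is not the engine's GPU implementation, and the array layout and coverage-saturation rule are assumptions. It requires a power-of-two resolution, which the engine expects anyway.

```python
import numpy as np

def pull_push(color, weight):
    # color: (H, W, 3) image from the point rendering
    # weight: (H, W) coverage in [0, 1], zero where the rendering has holes
    H, W = weight.shape
    if H == 1 or W == 1:
        return color
    # Pull: weighted 2x2 average into the next coarser pyramid level
    wsum = weight.reshape(H // 2, 2, W // 2, 2).sum(axis=(1, 3))
    csum = (color * weight[..., None]).reshape(H // 2, 2, W // 2, 2, 3).sum(axis=(1, 3))
    coarse_color = csum / np.maximum(wsum, 1e-8)[..., None]
    coarse_weight = np.minimum(wsum, 1.0)  # saturate: filled pixels count as covered
    filled = pull_push(coarse_color, coarse_weight)
    # Push: upsample the filled coarse level and blend by per-pixel coverage
    up = np.repeat(np.repeat(filled, 2, axis=0), 2, axis=1)
    a = weight[..., None]
    return a * color + (1.0 - a) * up
```

Pixels with full coverage keep their rendered color; holes receive the estimate propagated down from the nearest coarser level.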

## Developer Setup (Windows)
- The following libraries are required; make sure that the CUDA toolkit version and the PyTorch version match exactly (a quick sanity check is shown below)
  - [CUDA 11.7](https://developer.nvidia.com/cuda-11-7-0-download-archive?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_local)
  - [Anaconda3 2022.10](https://www.anaconda.com/products/distribution#Downloads) for all users and add it to PATH
  - Python environment in Visual Studio Installer (no need to install Python 3.7 separately)
  - Visual Studio 2019 Extension _Microsoft Visual Studio Installer Projects_
  - [PyTorch 1.13.1](https://pytorch.org/get-started/locally/): _conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia_
- Update the include directories, library directories and post-build event paths in the Visual Studio PointCloudEngine property pages according to the installation paths
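
Before building, it is worth confirming that the conda environment actually matches the installed toolkit. A minimal check using only standard PyTorch calls:

```python
# Verify that PyTorch sees the GPU and was built against the expected CUDA version
import torch

print(torch.__version__)          # expected: 1.13.1
print(torch.version.cuda)         # expected: 11.7
print(torch.cuda.is_available())  # expected: True on a working install
```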

## Example for supported .ply file
```
ply
format ascii 1.0
element vertex 1
property float x
property float y
property float z
property float nx
property float ny
property float nz
property uchar red
property uchar green
property uchar blue
end_header
0 0 0 0 0 1 255 0 0
```

# Ground Truth Renderer
## Features
- View the point cloud in different modes
  - Splats: high-quality splat rendering of the whole point cloud
  - Points: high-quality point rendering of the whole point cloud
  - Pull Push: fast screen-space inpainting algorithm applied to the point rendering
  - Mesh: renders a textured .obj mesh for comparison
  - Neural Network: renders the neural network output from the sparse point rendering, plus a loss channel comparison
- Compare to a sparse subset of the point cloud with a different sampling rate
- HDF5 dataset generation containing color, normal and depth images of the point cloud
- Blending the colors of overlapping splats
- Phong Lighting
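
For reference, Phong lighting here refers to the standard textbook formulation (not an excerpt from the engine's shaders):

```math
I = k_a i_a + k_d \, (\mathbf{N} \cdot \mathbf{L}) \, i_d + k_s \, (\mathbf{R} \cdot \mathbf{V})^{\alpha} \, i_s
```

where **N** is the splat normal, **L** the light direction, **R** the reflected light direction, **V** the view direction, and α the shininess exponent.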

## Neural Network View Mode
- The neural network should produce the splat rendering from a sparse point rendering input
- The neural network must be loaded from a selected .pt file with _Load Model_
- A description of the input/output channels of the network must be loaded from a .txt file with _Load Description_
- Each entry consists of:
  - String: name of the channel (render mode)
  - Int: dimension of the channel
  - String: whether the channel is an input (inp) or a target output (tar)
  - String: transformation keywords, e.g. normalization
  - Int: offset of this channel from the start channel
- Example for a simple description file:
```
[['PointsSparseColor', 3, 'inp', 'normalize', 0], ['SplatsColor', 3, 'tar', 'normalize', 0]]
```
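
Because the description file is a Python-style list literal, it can be parsed with the standard library. A minimal sketch (the helper name is made up; the field order follows the entry layout above):

```python
# Parse a channel description file like the example above
import ast

def load_description(path):
    with open(path) as f:
        entries = ast.literal_eval(f.read())
    # Each entry: [name, dimension, 'inp'/'tar', transformation keywords, offset]
    return [
        {'name': n, 'dim': d, 'direction': io, 'transform': t, 'offset': o}
        for n, d, io, t, o in entries
    ]
```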
- When a _Loss Function_ is selected:
  - The loss between two channels (_Loss Self_ and _Loss Target_) is computed
  - The screen area compared between these channel renderings can be controlled with _Loss Area_

## HDF5 Dataset Generation
- Rendering resolution must currently be a power of 2
- There are two modes for the dataset generation with parameters in the _HDF5 Dataset_ tab
- Waypoint dataset:
  - Interpolates the camera between the user-set waypoints
  - Use the _Advanced_ tab to add/remove a waypoint for the current camera perspective
  - _Preview Waypoints_ shows a preview of the interpolation
  - _Step Size_ controls the interpolation value between two waypoints
- Sphere dataset:
  - Sweeps the camera along a sphere around the point cloud
  - _Step Size_ influences the number of viewing angles (0.2 results in ~1000 angles)
  - Theta and Phi Min/Max values define the subset of the sphere to be swept
  - Move the camera to the desired distance from the center of the point cloud before generation
- Start the generation process with _Generate Waypoint HDF5 Dataset_ or _Generate Sphere HDF5 Dataset_
- Make sure that the density and rendering parameters for the _Sparse Splats_ view mode are set according to [Configure the rendering parameters](https://github.com/momower1/PointCloudEngine#configuring-the-rendering-parameters)
- The generated file will be stored in the HDF5 directory
- After generation all the view points can be selected with the _Camera Recording_ slider
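
The generated file can be inspected from Python with h5py. A hedged sketch — the file path and the internal dataset names depend on your setup, so this simply lists whatever is stored:

```python
# Print every dataset stored in an engine-generated HDF5 file
import h5py

with h5py.File('dataset.h5', 'r') as f:  # path is an assumption
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```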

# Octree Renderer
## Features
- Loads and renders point cloud datasets and generates an octree for level-of-detail
- char[3] - normalized normal
- uchar[3] - rgb color
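
As an illustration of how such packed vertex attributes are typically decoded (the mapping conventions below are assumptions, not taken from the engine's source):

```python
# Decode a packed normal (signed char per axis) and color (unsigned char per axis)
import struct

def decode_vertex_tail(buffer, offset=0):
    nx, ny, nz, r, g, b = struct.unpack_from('<3b3B', buffer, offset)
    normal = (nx / 127.0, ny / 127.0, nz / 127.0)  # assumes [-127, 127] -> [-1, 1]
    color = (r / 255.0, g / 255.0, b / 255.0)
    return normal, color
```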

# Training
- The neural rendering pipeline is trained with PyTorch
- Datasets for training can be created within the engine under the _Dataset_ tab
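
A minimal sketch of how an engine-generated HDF5 dataset could be wrapped for PyTorch training — the dataset keys are borrowed from the description-file example above and are assumptions, as is the file path:

```python
import h5py
import torch
from torch.utils.data import Dataset

class EngineHDF5Dataset(Dataset):
    # Wraps an engine-generated HDF5 file; key names are assumptions
    def __init__(self, path, input_key='PointsSparseColor', target_key='SplatsColor'):
        self.file = h5py.File(path, 'r')
        self.inputs = self.file[input_key]
        self.targets = self.file[target_key]

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, i):
        x = torch.from_numpy(self.inputs[i]).float()
        y = torch.from_numpy(self.targets[i]).float()
        return x, y
```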

Copyright © Moritz Schöpf. All rights reserved. The use, distribution and change of this software and the source code without written permission of the author is prohibited.
