This repository has been archived by the owner on Dec 19, 2023. It is now read-only.

Commit 2ab9e1e: uploaded project v0
ozerodb committed Feb 13, 2022
1 parent ae507db commit 2ab9e1e
Showing 26 changed files with 2,679 additions and 2 deletions.
136 changes: 136 additions & 0 deletions .gitignore
@@ -0,0 +1,136 @@
# other things
dataset_shapenet
dataset_novel
.~lock.results.xlsx#
.~lock.training_results.xlsx#
shapenetcore*

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a Python script from a template
# before PyInstaller builds the exe, so as to inject date/other info into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in collaborative projects with platform-specific dependencies or dependencies
# lacking cross-platform support, pipenv may install dependencies that don't work, or fail
# to install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/
2 changes: 0 additions & 2 deletions README.md

This file was deleted.

73 changes: 73 additions & 0 deletions docs/README.md
@@ -0,0 +1,73 @@
# Unsupervised Point Cloud Reconstruction and Completion

This repository contains the project developed by me (Damiano Bonaccorsi), Alessio Santangelo, and Takla Trad for the Advanced Machine Learning exam, academic year 2021/22.

The topic of the project, as the name of the repository suggests, is point cloud reconstruction and completion. A paper-like report is available under `docs/final_report_v*.pdf`.

---

## Using this repo

### Installing requirements

You can run `sh setup_environment.sh` for a full-fledged setup that will also download the ShapeNet dataset.

Alternatively, you can run `python3 -m pip install -r requirements.txt` to install the Python dependencies only.

**Note:** we used Python 3.9.7 throughout the project. If you run into problems installing dependencies with a different Python version, consider creating a virtual environment with the appropriate one.
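
For instance, a minimal sketch of creating such an environment (assuming a `python3.9` interpreter is installed on your system):

```bash
python3.9 -m venv .venv                       # create a virtual environment with the desired interpreter
source .venv/bin/activate                     # activate it
python3 -m pip install -r requirements.txt    # install the project dependencies
```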

### Using the scripts to train/test

We provide all the scripts necessary to train and test the various architectures. Each script accepts many arguments; the best way to see which ones you should use is to look at the code of the script, since the arguments are always declared at the very top.
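
To illustrate, the argument block at the top of a script typically looks something like this (a hypothetical sketch using `argparse`; the actual names and defaults are those found in each script):

```python
import argparse

# Hypothetical sketch of the argument block found at the top of each script
parser = argparse.ArgumentParser()
parser.add_argument('--encoder', type=str, default='pointnet', help='encoder architecture')
parser.add_argument('--decoder', type=str, default='fcm', help='decoder architecture')
parser.add_argument('--checkpoint_path', type=str, default=None, help='path of a .pth checkpoint')
args = parser.parse_args()
```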

Nevertheless, we provide a few example commands to help you get started.

#### Training a reconstruction auto-encoder

One key feature of our work is modularity: we wrote each piece of this project so that we could easily combine different kinds of encoders and decoders, rather than having a few fixed models.
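
The pairing is handled by `our_modules/autoencoder.py`; a minimal sketch of combining an encoder and a decoder through the builder:

```python
from our_modules import autoencoder

# any supported encoder can be paired with any supported decoder
model = autoencoder.build_model(enc_type='pointnet',  # 'pointnet', 'pointnetp1', 'dgcnn', 'dgcnn_loc', 'dg', ...
                                encoding_length=512,  # size of the latent code
                                dec_type='fcm',       # 'fcs', 'fcm', 'fcl' or 'ppd'
                                method='total')       # reconstruct the whole cloud
```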

As an example, to train a reconstruction framework that uses the PointNet encoder and a medium-sized fully-connected decoder, one might run:

```bash
python3 train_reconstruction.py --encoder=pointnet --decoder=fcm # leave the other arguments to their default value
```

#### Testing the reconstruction auto-encoder

```bash
python3 test_reconstruction.py --encoder=pointnet --decoder=fcm --checkpoint_path=...pth # specify the path of the .pth file generated by the previous script
```

#### Training DGPP for completion

```bash
python3 train_completion.py --model=dgpp # leave the other arguments to their default value
```

#### Testing DGPP for completion

```bash
python3 test_completion.py --model=dgpp --checkpoint_path=...pth # specify the path of the .pth file generated by the previous script
```

---

### Using the scripts to visualize point clouds

#### Visualizing a single point cloud

```bash
python3 visualize_pointcloud.py --pointcloud_path=...pts # specify the path of the .pts file describing the pointcloud
```

#### Visualizing reconstruction (original point cloud + reconstructed)

```bash
python3 visualize_reconstruction.py --encoder=pointnet --decoder=fcm --pointcloud_path=...pts
```

#### Visualizing completion (partial point cloud + completed point cloud + original point cloud, using different viewpoints for cropping)

```bash
python3 visualize_completion.py --model=dgpp --checkpoint_path=...pth --pointcloud_path=...pts
```
Binary file added docs/final_report_v0.pdf
Binary file added docs/our_training_results.xlsx
19 changes: 19 additions & 0 deletions models/dgpp.py
@@ -0,0 +1,19 @@
import sys, os
sys.path.insert(1, os.path.join(sys.path[0], '..'))  # needed to import 'our_modules' from the parent directory

from our_modules import autoencoder

### This is just a utility function to make sure the correct model is built.
# The actual implementations of the encoder, downsampler, and decoder can be found
# in our_modules/encoder, our_modules/autoencoder, and our_modules/decoder respectively.

def dgpp(remove_point_num=256, pretrained=False):
    # DGPP = DG encoder + PPD decoder, trained to predict the missing part of the cloud
    model = autoencoder.build_model(enc_type='dg',
                                    encoding_length=512,
                                    dec_type='ppd',
                                    method='missing',
                                    remove_point_num=remove_point_num)
    if pretrained:
        pass  # TODO: loading of pretrained weights is not implemented yet

    return model
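
A usage sketch for reference (the batch size and point counts here are illustrative assumptions; with `remove_point_num=256`, the decoder predicts the 256 missing points):

```python
import torch
from models.dgpp import dgpp

model = dgpp(remove_point_num=256)
model.eval()

# assuming 1024-point clouds with 256 points cropped away, 768 points remain
partial = torch.rand(4, 768, 3)  # [BS, N, 3]
with torch.no_grad():
    missing = model(partial)     # [BS, 256, 3] predicted missing points
```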
Empty file added our_modules/__init__.py
Empty file.
82 changes: 82 additions & 0 deletions our_modules/autoencoder.py
@@ -0,0 +1,82 @@
import torch
import torch.nn as nn
import torch.nn.functional as F

from . import encoder, decoder

class Generic_AutoEncoder(nn.Module):
    def __init__(self, encoder, decoder):
        super(Generic_AutoEncoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, x):
        BS, N, dim = x.size()
        assert dim == 3, "Fail: expecting 3 (x-y-z) as last tensor dimension!"

        # Refactoring batch
        x = x.permute(0, 2, 1)  # [BS, N, 3] => [BS, 3, N]

        # Encoding
        code = self.encoder(x)  # [BS, 3, N] => [BS, encoding_length]

        # Decoding
        decoded = self.decoder(code)  # [BS, encoding_length] => [BS, 3, num_output_points]

        # Reshaping decoded output
        decoded = decoded.permute(0, 2, 1)  # [BS, 3, num_output_points] => [BS, num_output_points, 3]

        return decoded

class DownSampler(nn.Module):
    """Projects a fixed-length encoding down to a shorter one via a small MLP."""
    def __init__(self, input_points=1024, output_points=256):
        super(DownSampler, self).__init__()
        self.fc1 = nn.Linear(input_points, 512)
        self.bn = nn.BatchNorm1d(512)
        self.fc2 = nn.Linear(512, output_points)

    def forward(self, x):
        x = F.relu(self.bn(self.fc1(x)))
        x = self.fc2(x)
        return x

def build_model(enc_type, encoding_length, dec_type, method, remove_point_num=None):
    # Encoder definition
    if enc_type == 'pointnet':
        enc = encoder.PointNet_encoder()
    elif enc_type == 'pointnetp1':
        enc = encoder.PointNetP1_encoder()
    elif enc_type == 'dgcnn' or enc_type == 'dgcnn_glob':
        enc = encoder.DGCNN_encoder()
    elif enc_type == 'dgcnn_loc':
        enc = encoder.DGCNN_encoder(k=6, pooling_type="avg")
    elif enc_type == 'dg':
        enc = encoder.DG_encoder()
    else:
        raise ValueError(f"Unknown encoder type: {enc_type}")

    # Every encoder outputs a 1024-dimensional code: shorten it if needed
    if encoding_length < 1024:
        enc = torch.nn.Sequential(
            enc,
            DownSampler(1024, encoding_length)
        )

    # Decoder definition
    if method == 'total':        # reconstruct the whole cloud
        num_output_points = 1024
    elif method == 'missing':    # predict only the missing points
        num_output_points = remove_point_num
    else:
        raise ValueError(f"Unknown method: {method}")

    if dec_type == 'fcs':
        dec = decoder.FCsmall_decoder(encoding_length=encoding_length, num_output_points=num_output_points)
    elif dec_type == 'fcm':
        dec = decoder.FCmedium_decoder(encoding_length=encoding_length, num_output_points=num_output_points)
    elif dec_type == 'fcl':
        dec = decoder.FClarge_decoder(encoding_length=encoding_length, num_output_points=num_output_points)
    elif dec_type == 'ppd':
        dec = decoder.PPD_decoder(encoding_length=encoding_length, num_output_points=num_output_points)
    else:
        raise ValueError(f"Unknown decoder type: {dec_type}")

    # Putting everything together
    model = Generic_AutoEncoder(enc, dec)
    return model
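
As a shape-flow sketch (assuming the encoders emit 1024-dimensional codes, as the `DownSampler` wrapping above suggests):

```python
import torch
from our_modules import autoencoder

model = autoencoder.build_model(enc_type='pointnet', encoding_length=512,
                                dec_type='fcm', method='total')
model.eval()

x = torch.rand(2, 1024, 3)  # [BS, N, 3] input clouds
with torch.no_grad():
    y = model(x)            # [BS, 1024, 3] reconstructed clouds
```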
