
PyTorch PPF-FoldNet

This repo is an unofficial PyTorch implementation of PPF-FoldNet (https://arxiv.org/abs/1808.10322v1).

Project Structure

  • models/: directory that saves the PPF-FoldNet models. PPF-FoldNet is an auto-encoder for the point pair features of a local patch. The input is a batch of local patches [bs(num_patches), num_points_per_patch, 4]; the output of the encoder is a descriptor for each local patch, [bs(num_patches), 1, 512], where 512 is the default codeword length.
    1. models_conv1d.py: PPF-FoldNet model using conv1d layers.
    2. models_linear.py: PPF-FoldNet model using linear layers. Theoretically, nn.Conv1d and nn.Linear should be equivalent when (kernel_size=1, stride=1, padding=0, dilation=1). You can run misc/linear_conv1d.py to verify this.
  • input_preparation.py: used before training, including:
    1. read the point cloud and voxel-downsample it.
    2. choose reference points (or interest points) from the point cloud.
    3. collect neighboring points near each reference point.
    4. build a local patch for each reference point and its neighbors.
    5. save the local patches as numpy arrays for later use.
    6. I also wrote a function for preparing the PPF input on the fly.
  • dataset.py: define the Dataset; read from the files generated in the input preparation stage, or build inputs on the fly.
  • dataloader.py: define the dataloader.
  • loss.py: define the Chamfer Loss. (Earth Mover's Distance loss is worth trying.)
  • trainer.py: the trainer class; handles the training process, including snapshots.
  • train.py: the entry file; every time I start training, this file is copied to the snapshot directory.
  • geometric_registration/: directory for evaluating the model through the task of geometric registration
    1. gt_result/: the ground truth information provided by 3DMatch benchmark.
    2. preparation.py: calculate the descriptor for each interest point provided by the 3DMatch benchmark. (The PPF representation for each interest point must be calculated first.)
    3. evaluate_ppfnet.py: evaluate the performance of the descriptors using the evaluation metric proposed in the PPF-FoldNet paper.
      1. get the point cloud from the .ply file and the interest point coordinates from the .keypts.bin file.
      2. use the descriptors generated by preparation.py to register each pair of point cloud fragments and save the results in pred_result/.
      3. after registering each pair of fragments, compute the final recall of the descriptors.
    4. utils.py
  • misc/
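The 4-dimensional point pair feature that input_preparation.py builds for each local patch is the classic PPF of Drost et al.: two angles between the normals and the connecting vector, the angle between the two normals, and the point distance. A minimal numpy sketch (function names `ppf` and `patch_ppf` are illustrative, not the repo's actual API; normals are assumed to be unit vectors):

```python
import numpy as np

def ppf(p1, n1, p2, n2):
    """Point pair feature for one pair of oriented points.
    Returns [angle(n1, d), angle(n2, d), angle(n1, n2), ||d||]."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-12:
        return np.zeros(4)
    du = d / dist  # unit direction from p1 to p2

    def angle(a, b):
        # angle between two unit vectors, clipped for numerical safety
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return np.array([angle(n1, du), angle(n2, du), angle(n1, n2), dist])

def patch_ppf(ref_point, ref_normal, neighbors, neighbor_normals):
    """Build the [num_points_per_patch, 4] PPF input for one local patch,
    pairing the reference point with each of its neighbors."""
    return np.stack([ppf(ref_point, ref_normal, q, nq)
                     for q, nq in zip(neighbors, neighbor_normals)])
```

Stacking `patch_ppf` over all reference points yields the [num_patches, num_points_per_patch, 4] batch the encoder expects.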

Data

  • rgbd_fragments/: fragments of training set.
  • intermediate-files-real/: directory that saves the keypoint coordinates for each scene.
  • fragments/: fragments of test set.

Prepare the data

Use script/download.sh to download the training data from 3DMatch, and script/fuse_fragments_3DMatch.py to fuse RGB-D frames into fragments.

The intermediate files can be downloaded from this link

Train the model

python train.py 

All the configuration is in train.py. When you start training, train.py and model.py are copied into the snapshot/ folder, and the TensorBoard files are saved in tensorboard/.
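loss.py defines the Chamfer loss between the input patch and its reconstruction. As a rough illustration (a dense O(n·m) sketch, not necessarily identical to the repo's implementation), a symmetric Chamfer distance in PyTorch can look like:

```python
import torch

def chamfer_loss(x, y):
    """Symmetric Chamfer distance between two batched point sets.
    x: [bs, n, d], y: [bs, m, d]. Returns one loss value per batch element."""
    # pairwise squared Euclidean distances, shape [bs, n, m]
    d = torch.cdist(x, y, p=2) ** 2
    # for each point, distance to its nearest neighbor in the other set
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)
```

The paper's follow-up suggestion of an Earth Mover's Distance loss would replace the nearest-neighbor terms with an optimal matching between the two sets.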

Evaluate the model

See the geometric_registration/ directory for details.
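The geometric registration evaluation matches descriptors between fragment pairs and counts a pair as correctly registered when enough matched keypoints fall within a distance threshold after ground-truth alignment. A simplified numpy sketch of the per-pair step (function names and the `tau` threshold are illustrative, not the repo's actual code):

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Nearest-neighbor matching in descriptor space.
    Returns, for each row of desc_a, the index of its closest row in desc_b."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return d.argmin(axis=1)

def inlier_ratio(kp_a, kp_b_aligned, matches, tau=0.1):
    """Fraction of matched keypoint pairs closer than tau after the
    ground-truth transform has been applied to fragment B's keypoints."""
    diffs = np.linalg.norm(kp_a - kp_b_aligned[matches], axis=1)
    return (diffs < tau).mean()
```

Recall is then the fraction of ground-truth fragment pairs whose inlier ratio exceeds a fixed threshold.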

Performance

| Model          | Average Recall |
| -------------- | -------------- |
| PPF-FoldNet    | 69.3%          |
| Original paper | 71.8%          |
| 3DMatch        | 57.3%          |

The model with the best performance is in the pretrained/ folder.