update
MaxChanger committed Jul 5, 2022
1 parent d7391ee commit 0c5b851
Showing 4 changed files with 87 additions and 13 deletions.
35 changes: 24 additions & 11 deletions README.md
# Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation

### [Project page](https://npucvr.github.io/MotionSeg3D) <br>

This repo contains the code for our paper:

> Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation.
> [Jiadai Sun](https://github.com/MaxChanger), [Yuchao Dai](https://scholar.google.com/citations?user=fddAbqsAAAAJ&hl=en&oi=ao), [Xianjing Zhang](https://github.com/zoojing), Jintao Xu, Rui Ai, Weihao Gu, and [Xieyuanli Chen](https://github.com/Chen-Xieyuanli)
> *Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022*
<!-- ![method-framework](assets/framework.png) -->
<!-- ![method-framework](assets/visualization.jpg) -->
<p align='center'>
<a href="https://youtu.be/carIdfwLX_s">
<img src='assets/visualization.jpg' width='500' alt='demo video'/>
</a>
</p>

<br/>

```bash
unzip ckpt_motionseg3d_pointrefine.zip
```
</details>

Then you can use the following commands to run inference and visualize the predictions. If you use the toy dataset, please modify the `seq_id` entries under `valid` in `model_path/data_cfg.yaml` (a sketch of this file follows the commands below).
```bash
# Run inference to produce the predictions.
python infer.py -d ./toydata -m ./log/motionseg3d_pointrefine -l ./pred/oursv1 -s valid
# Visualize the predictions.
python utils/visualize_mos.py -d ./toydata -p ./pred/oursv2 --offset 0 -s 38
```
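For reference, a minimal sketch of the relevant part of `data_cfg.yaml`; the sequence number below is an assumption based on the toy data, so list whatever `seq_id` folders your copy actually contains:

```yaml
# Excerpt of model_path/data_cfg.yaml -- the values here are assumptions for the toy data.
split:
  valid: [38]   # the seq_id folders that exist under ./toydata/sequences
```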
2. Download KITTI Odometry Benchmark calibration data (1 MB) from [here](http://www.cvlibs.net/download.php?file=data_odometry_calib.zip).
3. Download SemanticKITTI label data (179 MB) from [here](http://www.semantic-kitti.org/assets/data_odometry_labels.zip).
4. Download KITTI-Road Velodyne point clouds from the [original website](http://www.cvlibs.net/datasets/kitti/raw_data.php?type=road); more details can be found in [config/kitti_road_mos.md](config/kitti_road_mos.md).
5. Download the KITTI-Road-MOS label data annotated by us, together with the pose and calib files, from [here](https://drive.google.com/file/d/1pdpcGReJHOJp01pbgXUbcGROWOBd_2kj/view?usp=sharing) (6.1 MB).
6. Extract everything into the same folder, as follows:
<details>
<summary>[Expected directory structure of SemanticKITTI (click to expand)]</summary>
```
DATAROOT
└── sequences
    ├── 00
    │   ├── velodyne/    # point clouds from the KITTI Odometry Benchmark
    │   ├── labels/      # MOS labels from SemanticKITTI
    │   ├── calib.txt
    │   └── poses.txt
    └── ...
```
</details>


7. Next, run the data preparation script (based on [LMNet](https://github.com/PRBonn/LiDAR-MOS)) to generate the residual images. More parameters for the data preparation can be found in [config/data_preparing.yaml](./config/data_preparing.yaml).


```shell
python utils/auto_gen_residual_images.py
```

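After the script finishes, each sequence folder should additionally contain the residual images. A quick sanity check; the `residual_images` folder name is an assumption based on the default configuration:

```bash
# Print the number of scans and residual images per sequence (assumed layout);
# the two counts should be similar for every sequence.
for seq in DATAROOT/sequences/*/; do
    echo "$seq: $(ls "$seq"velodyne | wc -l) scans, $(ls "$seq"residual_images 2>/dev/null | wc -l) residual images"
done
```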
### Inference on SemanticKITTI-MOS
> The newly labeled KITTI-Road-MOS data is divided into train/valid sets.
> The usage of the data can be controlled by specifying `--data_config` during training.
> During inference, if you use the toy dataset or have not downloaded KITTI-Road-MOS, please modify the `seq_id` entries under `valid` in `model_path/data_cfg.yaml`.
```bash
# validation split
python infer.py -d DATAROOT -m ./log/model_path/logs/TIMESTAMP/ -l ./predictions/ -s valid

# test split
python infer.py -d DATAROOT -m ./log/model_path/logs/TIMESTAMP/ -l ./predictions/ -s test
```
The predictions/labels will be saved to `./predictions/`.
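The commands above also accept an explicit label configuration via `-dc`. A sketch of the two variants, based on the note in the training section below; reading the `raw` config as "without KITTI-Road-MOS" is our interpretation of the file names:

```bash
# Validation with the newly annotated KITTI-Road-MOS sequences included:
python infer.py -d DATAROOT -m ./log/model_path/logs/TIMESTAMP/ -l ./predictions/ -s valid -dc config/labels/semantic-kitti-mos.yaml

# Test on the original SemanticKITTI-MOS labels only:
python infer.py -d DATAROOT -m ./log/model_path/logs/TIMESTAMP/ -l ./predictions/ -s test -dc config/labels/semantic-kitti-mos.raw.yaml
```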


### Evaluation on SemanticKITTI-MOS validation split
```bash
python utils/evaluate_mos.py -d DATAROOT -p ./predictions/ --datacfg config/labels/semantic-kitti-mos.yaml
```
### Training on SemanticKITTI-MOS
<!-- Our program is a two-stage training process, -->
The training is separated into two phases, and switching between phases is currently controlled manually.
`--data_config` determines whether to use the new KITTI-Road-MOS label data: pass `-dc config/labels/semantic-kitti-mos.yaml` to include it, or `-dc config/labels/semantic-kitti-mos.raw.yaml` to train on the original labels only.

- Phase 1 (multi-GPU): Only the range image is used for input and supervision. The training log and checkpoint will be stored in `./log/ours_motionseg3d/logs/TIMESTAMP/`. A sketch of the launch command follows.
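  A minimal sketch of the phase-1 launch; the yaml filename and the flags are assumptions, mirroring the `train_2stage.py` call below:

  ```bash
  # Hypothetical phase-1 command; adjust -ac to your coarse-stage config file.
  python train.py -d DATAROOT -ac ./train_yaml/mos_coarse_stage.yml -l ./log/ours_motionseg3d
  ```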

- Phase 2: Fine-tune the model with the point refine module (`./train_yaml/mos_pointrefine_stage.yml`):

```bash
python train_2stage.py -d DATAROOT -ac ./train_yaml/mos_pointrefine_stage.yml -l LOGDIR
```
If you find this code useful for your research, please use the following BibTeX entry.

```bibtex
@inproceedings{sun2022mos3d,
  title={Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation},
  author={Sun, Jiadai and Dai, Yuchao and Zhang, Xianjing and Xu, Jintao and Ai, Rui and Gu, Weihao and Chen, Xieyuanli},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2022},
  organization={IEEE}
}
```

Binary file added assets/visualization.jpg
7 changes: 5 additions & 2 deletions config/kitti_road_mos.md

To enrich the dataset for the moving object segmentation (MOS) task, and to reduce the gap between the data distributions of the validation and test sets in the existing SemanticKITTI-MOS dataset, we automatically annotated and then manually corrected the [KITTI-Road](http://www.cvlibs.net/datasets/kitti/raw_data.php?type=road) dataset.

More specifically, we first use the auto-mos labeling method ([link](https://arxiv.org/pdf/2201.04501.pdf)) to automatically generate MOS labels for the KITTI-Road data. We then use a point labeler ([link](https://github.com/jbehley/point_labeler)) to manually refine the labels.

We follow the semantic SLAM system [SuMa++](https://github.com/PRBonn/semantic_suma) in renaming the sequences of the KITTI-Road data as follows.

```
raw_id -> seq_id
2011_09_26_drive_0015 -> 30
2011_09_26_drive_0027 -> 31
2011_09_26_drive_0028 -> 32
2011_09_26_drive_0029 -> 33
2011_09_26_drive_0032 -> 34
2011_09_26_drive_0052 -> 35
2011_09_26_drive_0070 -> 36
2011_09_26_drive_0101 -> 37
2011_09_29_drive_0004 -> 38
2011_09_30_drive_0016 -> 39
2011_10_03_drive_0042 -> 40
2011_10_03_drive_0047 -> 41
```
We provide a simple download and conversion script, [utils/download_kitti_road.sh](../utils/download_kitti_road.sh); please modify the `DATA_ROOT` path in it and manually move the resulting `sequences` folder to the target directory. You also need to download the KITTI-Road-MOS label data annotated by us, together with the pose and calib files, from [here](https://drive.google.com/file/d/1pdpcGReJHOJp01pbgXUbcGROWOBd_2kj/view?usp=sharing) (6.1 MB). A sketch of the workflow follows.
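A possible end-to-end sequence of steps; the destination path is a placeholder, and `DEBUG_kitti_road` is the default `DATA_ROOT` in the script:

```bash
# 1. Edit DATA_ROOT in the script, then download and convert the 12 drives.
bash utils/download_kitti_road.sh
# 2. Merge the generated sequence folders (30-41) into your SemanticKITTI-MOS dataset.
mv DEBUG_kitti_road/sequences/* /path/to/DATAROOT/sequences/
# 3. Extract the downloaded labels/poses/calib archive into the same tree.
```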

We organize our proposed KITTI-Road-MOS using the same setup and data structure as SemanticKITTI-MOS:

```
DATAROOT
└── sequences
    └── 30            # likewise for sequences 31-41
        ├── velodyne/
        ├── labels/
        ├── calib.txt
        └── poses.txt
```
58 changes: 58 additions & 0 deletions utils/download_kitti_road.sh
#!/bin/bash

# raw_id -> seq_id
# 2011_09_26_drive_0015 -> 30
# 2011_09_26_drive_0027 -> 31
# 2011_09_26_drive_0028 -> 32
# 2011_09_26_drive_0029 -> 33
# 2011_09_26_drive_0032 -> 34
# 2011_09_26_drive_0052 -> 35
# 2011_09_26_drive_0070 -> 36
# 2011_09_26_drive_0101 -> 37
# 2011_09_29_drive_0004 -> 38
# 2011_09_30_drive_0016 -> 39
# 2011_10_03_drive_0042 -> 40
# 2011_10_03_drive_0047 -> 41

road_raw_id_list=(2011_09_26_drive_0015 2011_09_26_drive_0027 2011_09_26_drive_0028 2011_09_26_drive_0029
2011_09_26_drive_0032 2011_09_26_drive_0052 2011_09_26_drive_0070 2011_09_26_drive_0101
2011_09_29_drive_0004 2011_09_30_drive_0016 2011_10_03_drive_0042 2011_10_03_drive_0047)
seq_id_list=(30 31 32 33 34 35 36 37 38 39 40 41)

# Please modify this to your local path.
DATA_ROOT=DEBUG_kitti_road

mkdir -p "$DATA_ROOT"
cd "$DATA_ROOT" || exit 1

# wget -c https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0015/2011_09_26_drive_0015_sync.zip

# Download, unpack, and convert each raw drive into a SemanticKITTI-style sequence.
for i in "${!road_raw_id_list[@]}"; do
    raw_id=${road_raw_id_list[i]}
    seq_id=${seq_id_list[i]}

wget -c https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/"$raw_id"/"$raw_id"_sync.zip
unzip "$raw_id"_sync.zip
mv "${raw_id:0:10}"/"$raw_id"_sync "$sub_id"

    cd "$seq_id" || exit 1
    mv velodyne_points/data velodyne
    # Rename the raw 10-digit scan files to the 6-digit SemanticKITTI style.
    for vbin in velodyne/*; do
        if [ ${#vbin} != 23 ]; then   # "velodyne/0000000077.bin" is 23 characters
            echo "[ != ] error, please check the folder"
            break
        fi
mv "$vbin" "${vbin:0:9}${vbin:13}" # 0000000077.bin --> 000077.bin
done
    cd ..
    mkdir -p sequences/"$seq_id"
    mv "$seq_id"/velodyne "$seq_id"/image_02 sequences/"$seq_id"
    rm -rf "$seq_id"

done

# rm -rf 2011_09_26 2011_09_29 2011_09_30 2011_10_03

# Finally, move the sequence folders (30-41) under `sequences` into the sequences directory of your SemanticKITTI-MOS dataset.
