
TrafficGen: Learning to Generate Diverse and Realistic Traffic Scenarios

Webpage | Code | Video | Paper

Setup environment

# Clone the code to local
git clone https://github.com/metadriverse/trafficgen.git
cd trafficgen

# Create virtual environment
conda create -n trafficgen python=3.8
conda activate trafficgen

# Install PyTorch yourself so that it is compatible with your GPU
# For CUDA 11.0:
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
# Install basic dependencies
pip install -e .

If you see error messages related to geos when installing Shapely, check out this post.
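As a general note (not taken from the linked post), this error usually means the geos library itself is missing, and installing it before Shapely typically resolves the problem, for example:

# e.g. inside the conda environment
conda install -c conda-forge geos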

Quick Start

You can run the following scripts to test whether the setup is correct. These scripts do not require downloading data.

Vehicle Placement Model

python train_init.py -c local

Trajectory Generator Model

python train_act.py -c local 

Download and Process Dataset and Pre-trained Model

Download dataset for road and traffic

Download from Waymo Dataset

Note: it is not necessary to download all the files from Waymo. Downloading a single file is enough for a quick test.

Data Preprocess

python trafficgen/utils/trans20.py PATH_A PATH_B None

Note: PATH_A is the directory containing the raw Waymo data, and PATH_B is where the processed data will be stored.
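For example, assuming the raw Waymo files are in ./data/waymo_raw and you want the processed scenarios in ./data/processed (both paths are only placeholders), the call would be:

# Placeholder paths; substitute your own directories
python trafficgen/utils/trans20.py ./data/waymo_raw ./data/processed None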

Download the pretrained TrafficGen models

Please download two models from this link: https://drive.google.com/drive/folders/1TbCV6y-vssvG3YsuA6bAtD9lUX39DH9C?usp=sharing

Then put them into the trafficgen/traffic_generator/ckpt folder.

Generate new traffic scenarios

Running the following script will generate images (and GIFs, if run with --gif) visualizing the new traffic scenarios in the traffic_generator/output/vis folder.

# Change the data usage and set the data directory in debug.yaml

# First, change the working directory to the trafficgen package folder inside the repo root
cd trafficgen

python generate.py [--gif] [--save_metadrive]

Set the --gif flag to generate GIF files.
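For example, to render GIFs and export MetaDrive-compatible scenarios in a single run:

python generate.py --gif --save_metadrive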

Connect TrafficGen with MetaDrive

Create single-agent RL environment

After running python generate.py --save_metadrive, a folder trafficgen/traffic_generator/output/scene_pkl will be created, and you will see many pickle files. Each .pkl file is a scenario created by TrafficGen.
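The exact fields stored in each pickle depend on the TrafficGen version, so the snippet below is only a minimal sketch for inspecting one generated scenario; the file name 0.pkl is a placeholder.

# Minimal sketch: peek inside one generated scenario to see its structure
import pickle

with open("traffic_generator/output/scene_pkl/0.pkl", "rb") as f:
    scenario = pickle.load(f)

print(type(scenario))
if isinstance(scenario, dict):
    # Print the top-level keys and value types of the scenario record
    for key, value in scenario.items():
        print(key, type(value))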

We provide a script to create a single-agent RL environment with TrafficGen-generated data. Please refer to trafficgen/run_metadrive.py for details.

We also provide pre-generated scenarios from TrafficGen, so you can kick off RL training on TrafficGen-generated scenarios immediately. Please follow trafficgen/dataset/README.md to download the dataset.

cd trafficgen/

# Run generated scenarios:
python run_metadrive.py --dataset traffic_generator/output/scene_pkl

# Please read `trafficgen/dataset/README.md` to download pre-generated scenarios
# Then you can use them to create an RL environment:
python run_metadrive.py --dataset dataset/validation

# If you want to visualize the generated scenarios, with the ego car also replaying data, use:
python run_metadrive.py --dataset dataset/validation --replay

# If you want to create an RL environment where traffic vehicles are not replaying 
# but are controlled by an interactive IDM policy, use:
python run_metadrive.py --dataset dataset/validation --no_replay_traffic

You can then kick off RL training using the environment created in the script above.
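trafficgen/run_metadrive.py is the reference for how the environment is actually constructed and configured; the sketch below only assumes the resulting environment exposes the standard gym-style reset/step interface, and make_env is a hypothetical stand-in for that construction.

# Hedged rollout sketch; `make_env` is a hypothetical placeholder for the
# environment construction done in trafficgen/run_metadrive.py.
def rollout(make_env, num_episodes=3):
    env = make_env(dataset_path="traffic_generator/output/scene_pkl")
    try:
        for _ in range(num_episodes):
            obs = env.reset()
            done, episode_return = False, 0.0
            while not done:
                action = env.action_space.sample()  # replace with a trained policy
                obs, reward, done, info = env.step(action)
                episode_return += reward
            print("episode return:", episode_return)
    finally:
        env.close()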

Train RL agents in TrafficGen-generated single-agent RL environment

# Dependencies:
pip install ray==2.2.0
pip install ray[rllib]==2.2.0

# Install PyTorch yourself and make sure it is compatible with your CUDA version
# ...

# Kickoff training
cd trafficgen

python run_rl_training.py --exp-name EXPERIMENT_NAME --num-gpus 1 
# You can also specify the dataset paths. By default we set:
#   --dataset_train dataset/1385_training
#   --dataset_test  dataset/validation
# Check the file for more details about the arguments.
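run_rl_training.py is the authoritative entry point and already wires up the dataset paths and training arguments; the snippet below is only a rough sketch of how a TrafficGen-based environment could be plugged into RLlib 2.2 manually. The environment name, env_creator, and make_trafficgen_env are hypothetical placeholders.

# Rough RLlib 2.2 sketch; `make_trafficgen_env` is a hypothetical factory that
# should return the single-agent environment built in trafficgen/run_metadrive.py.
import ray
from ray.tune.registry import register_env
from ray.rllib.algorithms.ppo import PPOConfig

def env_creator(env_config):
    from trafficgen_env import make_trafficgen_env  # hypothetical import
    return make_trafficgen_env(dataset_path=env_config.get("dataset", "dataset/1385_training"))

ray.init()
register_env("trafficgen_single_agent", env_creator)

config = (
    PPOConfig()
    .environment(env="trafficgen_single_agent", env_config={"dataset": "dataset/1385_training"})
    .framework("torch")
    .rollouts(num_rollout_workers=2)
    .resources(num_gpus=1)
)
algo = config.build()
for _ in range(10):
    result = algo.train()
    print("episode_reward_mean:", result["episode_reward_mean"])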

Training

Local Debug

Use the sample data packed in the code repo directly

Vehicle Placement Model

python train_init.py -c local

Trajectory Generator Model

python train_act.py -c local

Cluster Training

For training, we recommend downloading all the files from: https://console.cloud.google.com/storage/browser/waymo_open_dataset_motion_v_1_1_0

PATH_A is the raw data path

PATH_B is the processed data path

Execute data_trans.sh:

sh utils/data_trans.sh PATH_A PATH_B

Note: This will take about 2 hours.

Then modify 'data_path' in init/configs and act/configs to PATH_B, and run:

python init/utils/init_dataset.py
python act/utils/act_dataset.py

to get a processed cache for the model.

Modify cluster.yaml: change data_path and data_usage, then run:

python train_act.py -c cluster -d 0 1 2 3 -e exp_name

-d denotes which GPUs to use and -e sets the experiment name.

Reference

@inproceedings{feng2023trafficgen,
  title={Trafficgen: Learning to generate diverse and realistic traffic scenarios},
  author={Feng, Lan and Li, Quanyi and Peng, Zhenghao and Tan, Shuhan and Zhou, Bolei},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={3567--3575},
  year={2023},
  organization={IEEE}
}

