Accepted by 2020 IEEE Intelligent Vehicles Symposium (IV 2020).
The main framework is as follows:
Empirically, a lane detection model trained with our method shows better adaptability to low-light conditions and robustness in complex scenarios, achieving a 73.9 F1-measure on the CULane test set.
The whole dataset is available at CULane.
CULane
├── driver_23_30frame # training&validation
├── driver_161_90frame # training&validation
├── driver_182_30frame # training&validation
├── driver_193_90frame # testing
├── driver_100_30frame # testing
├── driver_37_30frame # testing
├── laneseg_label_w16 # labels
└── list # list
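After downloading, it may help to confirm that the dataset matches the layout above before training. A minimal Python sketch (the root path is a placeholder for your own download location):

```python
from pathlib import Path

# Sub-directories expected under the CULane root, per the tree above.
EXPECTED_DIRS = [
    "driver_23_30frame", "driver_161_90frame", "driver_182_30frame",
    "driver_193_90frame", "driver_100_30frame", "driver_37_30frame",
    "laneseg_label_w16", "list",
]

def missing_culane_dirs(root):
    """Return the expected sub-directories that are absent under root."""
    root = Path(root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    # "/path/to/CULane" is a placeholder -- point it at your download.
    print(missing_culane_dirs("/path/to/CULane"))
```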
The images in low-light conditions are generated by the proposed SIM-CycleGAN.
- Matlab (for tools/prob2lines), version R2017a or later.
- OpenCV (for tools/lane_evaluation).
conda create -n your_env_name python=3.6
conda activate your_env_name
conda install pytorch==1.3.0 torchvision==0.4.1 cudatoolkit=10.0 -c pytorch
pip install -r requirements.txt
The source code for SIM-CycleGAN has been released. (11/03)
Train your own SIM-CycleGAN model as follows.
python train.py --name repo_name \
--dataset_loadtxt_A /path/to/domain_A_txt \
--dataset_loadtxt_B /path/to/domain_B_txt \
--gpu_ids 6
Use your trained model to generate images.
python test.py --name repo_name \
--model simcycle_gan \
--dataset_loadtxt_A /path/to/domain_A_txt \
--dataset_loadtxt_B /path/to/domain_B_txt \
--gpu_ids 6
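The `--dataset_loadtxt_A`/`--dataset_loadtxt_B` options above appear to take plain-text files listing one image path per line (an assumption inferred from the flag names; check the repository's dataloader for the exact format). A sketch for building such a list:

```python
from pathlib import Path

def write_domain_list(image_dir, out_txt, exts=(".jpg", ".png")):
    """Write one image path per line for every image found under image_dir.

    Returns the number of paths written. The directory layout and file
    extensions are assumptions -- adapt them to your own data.
    """
    paths = sorted(p for p in Path(image_dir).rglob("*")
                   if p.suffix.lower() in exts)
    Path(out_txt).write_text("".join(f"{p}\n" for p in paths))
    return len(paths)

# Usage (hypothetical paths):
#   write_domain_list("data/normal_light", "domain_A.txt")
#   write_domain_list("data/low_light", "domain_B.txt")
```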
The source code used for lane detection is made publicly available by HOU Yuenan.
We provide a demo for testing a single image or a video.
sh ./demo.sh
The results will look as follows.
If you want to test the model on a video, set mode=0 in demo.sh.
The trained model used in this paper is available in ./trained.
- Run the test script
sh ./test_erfnet.sh
- Get lines from probability maps
cd tools/prob2lines
matlab -nodisplay -r "main;exit"
Please check the file paths in the Matlab code before running.
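tools/prob2lines converts the per-lane probability maps into lane point coordinates. A rough pure-Python approximation of the idea (row-wise maximum above a confidence threshold; the Matlab tool remains the authoritative implementation):

```python
def prob_map_to_points(prob_map, thresh=0.3, row_step=20):
    """Approximate lane points from one lane's probability map.

    Scans sampled rows from the bottom of the map and keeps the most
    confident column whenever it clears the threshold. The threshold and
    row step are illustrative defaults, not the tool's exact settings.
    """
    points = []
    for y in range(len(prob_map) - 1, -1, -row_step):
        row = prob_map[y]
        x = max(range(len(row)), key=row.__getitem__)
        if row[x] > thresh:
            points.append((x, y))
    return points
```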
- Evaluation
cd tools/lane_evaluation
make
# You may also use cmake instead of make, via:
# mkdir build && cd build && cmake ..
sh eval_all.sh # evaluate the whole test set
sh eval_split.sh # evaluate each scenario separately
The evaluation results are saved in tools/lane_evaluation/output.
Some examples of real images in normal light conditions and their corresponding translated images in low-light conditions.
Performance (F1-measure) of different methods on the CULane test set. For Crossroad, only FP is shown.
Category | ERFNet | CycleGAN+ERFNet | SIM-CycleGAN + ERFNet(ours) | SCNN | ENet-SAD | ResNet-101-SAD |
---|---|---|---|---|---|---|
Normal | 91.5 | 91.7 | 91.8 | 90.6 | 90.1 | 90.7 |
Crowded | 71.6 | 71.5 | 71.8 | 69.7 | 68.8 | 70.0 |
Night | 67.1 | 68.9 | 69.4 | 66.1 | 66.0 | 66.3 |
No Line | 45.1 | 45.2 | 46.1 | 43.4 | 41.6 | 43.5 |
Shadow | 71.3 | 73.1 | 76.2 | 66.9 | 65.9 | 67.0 |
Arrow | 87.2 | 87.2 | 87.8 | 84.1 | 84.0 | 84.4 |
Dazzle Light | 66.0 | 67.5 | 66.4 | 58.5 | 60.2 | 59.9 |
Curve | 66.3 | 69.0 | 67.1 | 64.4 | 65.7 | 65.7 |
Crossroad | 2199 | 2402 | 2346 | 1990 | 1998 | 2052 |
Total | 73.1 | 73.6 | 73.9 | 71.6 | 70.8 | 71.8 |
The probability maps output by the three methods above are shown as follows.
- Add attention to ERFNet
- Open the source code for SIM-CycleGAN
- Upgrade PyTorch (from 0.3.0 to 1.3.0)
- Upload demo for testing
Please cite our paper in your publication if our work helps your research.
@inproceedings{Liu2020Lane,
title={Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light Conditions Style Transfer},
author={Liu, Tong and Chen, Zhaowei and Yang, Yi and Wu, Zehao and Li, Haowei},
booktitle={2020 IEEE intelligent vehicles symposium (IV)},
year={2020},
}
This project refers to the following projects: