This project will no longer be maintained by Intel.
Intel has ceased development of and contributions to this project, including, but not limited to, maintenance, bug fixes, new releases, and updates.
Intel no longer accepts patches to this project.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.
This repository contains code and models for our paper:
Vision Transformers for Dense Prediction
René Ranftl, Alexey Bochkovskiy, Vladlen Koltun
- [March 2021] Initial release of inference code and models
- Download the model weights and place them in the `weights` folder:

  Monodepth:

  Segmentation:
- Set up dependencies:

  ```shell
  pip install -r requirements.txt
  ```

  The code was tested with Python 3.7, PyTorch 1.8.0, OpenCV 4.5.1, and timm 0.4.5.
- Place one or more input images in the folder `input`.

- Run a monocular depth estimation model:

  ```shell
  python run_monodepth.py
  ```

  Or run a semantic segmentation model:

  ```shell
  python run_segmentation.py
  ```
- The results are written to the folders `output_monodepth` and `output_semseg`, respectively.
Use the flag `-t` to switch between different models. Possible options are `dpt_hybrid` (default) and `dpt_large`.
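The depth maps written to `output_monodepth` are easier to inspect after per-image min-max scaling. The sketch below is illustrative only: the helper function and the synthetic array are not part of this repository, they merely show the kind of normalization involved in turning a raw prediction into an 8-bit visualization.

```python
import numpy as np

def depth_to_uint8(depth):
    """Min-max normalize a 2-D depth map to [0, 255] for visualization.

    Illustrative helper, not code from this repo: a similar per-image
    scaling is typically applied before saving depth visualizations.
    """
    d = depth.astype(np.float64)
    d_min, d_max = d.min(), d.max()
    if d_max > d_min:
        d = (d - d_min) / (d_max - d_min)
    else:
        d = np.zeros_like(d)
    return (255.0 * d).astype(np.uint8)

# Synthetic stand-in for a real model prediction.
demo = np.array([[0.0, 1.0], [2.0, 4.0]])
vis = depth_to_uint8(demo)
```

Because the scaling is per image, absolute depth values are not preserved in the visualization; it is only intended for qualitative inspection.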
Additional models:
- Monodepth finetuned on KITTI: `dpt_hybrid_kitti-cb926ef4.pt` Mirror
- Monodepth finetuned on NYUv2: `dpt_hybrid_nyu-2ce69ec7.pt` Mirror
Run with:

```shell
python run_monodepth.py -t [dpt_hybrid_kitti|dpt_hybrid_nyu]
```
Hints on how to evaluate monodepth models can be found in [EVALUATION.md](https://github.com/intel-isl/DPT/blob/main/EVALUATION.md).
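For orientation, two metrics commonly reported in monodepth evaluations, absolute relative error and δ-threshold accuracy, can be sketched as below. This is an illustrative implementation under standard definitions, not the repository's evaluation code, and the sample arrays are made up.

```python
import numpy as np

def abs_rel(pred, gt):
    """Absolute relative error, averaged over valid (gt > 0) pixels."""
    mask = gt > 0
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))

def delta_acc(pred, gt, threshold=1.25):
    """Fraction of valid pixels with max(pred/gt, gt/pred) < threshold."""
    mask = gt > 0
    ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
    return float(np.mean(ratio < threshold))

# Hypothetical ground-truth and predicted depths (in meters).
gt = np.array([1.0, 2.0, 4.0])
pred = np.array([1.1, 2.0, 3.0])
```

Real protocols additionally align predictions to ground truth (e.g. median scaling) and crop invalid regions; see the evaluation notes linked above for the specifics used with these models.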
Please cite our papers if you use this code or any of the models.
```bibtex
@article{Ranftl2021,
	author  = {Ren\'{e} Ranftl and Alexey Bochkovskiy and Vladlen Koltun},
	title   = {Vision Transformers for Dense Prediction},
	journal = {ArXiv preprint},
	year    = {2021},
}

@article{Ranftl2020,
	author  = {Ren\'{e} Ranftl and Katrin Lasinger and David Hafner and Konrad Schindler and Vladlen Koltun},
	title   = {Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer},
	journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
	year    = {2020},
}
```
Our work builds on and uses code from timm and PyTorch-Encoding. We'd like to thank the authors for making these libraries available.
MIT License