Qartezator is your translator between aerial images and maps.
Clone the repo: git clone https://github.com/AndranikSargsyan/qartezator.git
Set up a virtual environment and install the dependencies:
cd qartezator
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
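As an optional sanity check, you can verify that the package is importable from the repository root. This is a minimal sketch; it assumes no setup steps beyond the ones above are required:

```python
# Run from the repository root after activating the virtual environment.
import qartezator  # assumes the package imports without further setup

print("qartezator imported successfully")
```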
If you need a CUDA-enabled build of torch, you can install it with the following command:
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
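To confirm that the CUDA build is actually being picked up, you can run a quick check using only the standard PyTorch API:

```python
import torch

# The version string should end with +cu116 if the CUDA wheel above was installed.
print(torch.__version__)
# True only if a CUDA-capable GPU and driver are visible to PyTorch.
print(torch.cuda.is_available())
```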
Download all models from this link and place them inside the models/ directory.
Start the Streamlit demo by running:
streamlit run demo.py
To run inference on a single image from the command line, use:
python -m qartezator.inference -m PATH-TO-MODEL -i PATH-TO-IMAGE -o OUTPUT-PATH
Download the training data from here and extract it into the data/ directory.
To start training, run:
python -m qartezator.train --config-path ./qartezator/configs/qartezator-fourier.yaml
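If you want to see which hyperparameters a run will use before launching it, you can inspect the config directly. This is a minimal sketch, assuming the file is plain YAML and PyYAML is available; the actual section names are defined by the repo:

```python
import yaml

# Load the training config and list its top-level sections.
with open("qartezator/configs/qartezator-fourier.yaml") as f:
    cfg = yaml.safe_load(f)

print(list(cfg.keys()))
```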
Download the test images from here.
To run inference on the test set, use:
python scripts/predict_many.py --source-dir SOURCE_IMG_DIR --model-path TRACED_MODEL_PATH --output-dir OUTPUT_DIR
See the script for additional argument options.
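For orientation, the sketch below shows roughly what a single prediction with a traced model looks like. It is an illustration only: the input/output tensor layout, the value range, and the file names (models/qartezator-traced.pt, samples/aerial.png, map.png) are assumptions, not taken from the repo; scripts/predict_many.py is the supported path.

```python
import numpy as np
import torch
from PIL import Image

# Load a TorchScript-traced model (hypothetical filename).
model = torch.jit.load("models/qartezator-traced.pt")
model.eval()

# Assumption: the model expects a 1x3xHxW float tensor in [0, 1].
img = Image.open("samples/aerial.png").convert("RGB")
x = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    y = model(x)

# Assumption: the output is an image tensor with the same layout and range.
out = (y.squeeze(0).permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
Image.fromarray(out).save("map.png")
```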
To evaluate PSNR, SSIM, and L1, use:
python scripts/cal_psnr_ssim_l1.py --gt-path TARGET_MAPS_DIR --pred-path PREDICTED_MAPS_DIR
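For reference, this is roughly what those metrics look like for a single ground-truth/prediction pair using scikit-image. The exact settings in scripts/cal_psnr_ssim_l1.py (color space, normalization, L1 convention) may differ, and the file paths are placeholders:

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Assumes 8-bit RGB images of identical size (placeholder paths).
gt = np.array(Image.open("target_maps/0001.png").convert("RGB"))
pred = np.array(Image.open("predicted_maps/0001.png").convert("RGB"))

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
# One common L1 convention: mean absolute error with pixels scaled to [0, 1].
l1 = np.abs(gt.astype(np.float32) - pred.astype(np.float32)).mean() / 255.0

print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}  L1: {l1:.4f}")
```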
To calculate FID, use the pytorch-fid package (installable with pip install pytorch-fid if it is not already in your environment):
python -m pytorch_fid PREDICTED_MAPS_PATH TARGET_MAPS_PATH
| Aerial image | Target map | Predicted map |
| --- | --- | --- |
Our work borrows code from the following repos: