# 🌌 TITAN-Net: Semantics-Aware Multi-Modal Domain Translation 🌌

License: MIT · arXiv

*(TITAN-Net overview figure)*

TITAN-Net introduces an effective, modular approach for bridging the gap between sensor modalities with different data formats! 🌉 By harnessing the power of scene semantics, TITAN-Net can, for the very first time, synthesize a panoramic color image directly from a given 3D LiDAR point cloud.

## ✨ Highlights

1. **Semantic Segmentation:** We first project the LiDAR point cloud onto a spherical surface and semantically segment the resulting range view (see the sketch after this list). 📡
2. **Modular Generative Framework:** The approach is modular and generative, translating LiDAR segment maps into colorful panoramic images!
3. **Adversarial Learning:** A conditional GAN translates LiDAR segment maps to their camera-image counterparts, from which a seamless color scene is synthesized. 🎨
4. **Performance:** Quantitative evaluations on the Semantic-KITTI dataset show that TITAN-Net outperforms strong baselines by a large margin.
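
The spherical projection in step 1 is the standard LiDAR range-view preprocessing used by projection-based segmentation networks. Below is a minimal NumPy sketch of it, assuming a Velodyne HDL-64E-style vertical field of view (as in Semantic-KITTI); the resolution and FOV defaults are illustrative, not values taken from this repository.

```python
import numpy as np

def spherical_projection(points, h=64, w=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) spherical range image.

    fov_up / fov_down give the sensor's vertical field of view in degrees;
    the defaults here are illustrative HDL-64E-style values.
    """
    fov_up = np.radians(fov_up)
    fov_down = np.radians(fov_down)
    fov = abs(fov_up) + abs(fov_down)

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = -np.arctan2(y, x)        # azimuth angle
    pitch = np.arcsin(z / depth)   # elevation angle

    # Normalize both angles to [0, 1] and scale to pixel coordinates.
    u = 0.5 * (yaw / np.pi + 1.0) * w
    v = (1.0 - (pitch + abs(fov_down)) / fov) * h

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    # Write farther points first so closer points overwrite them.
    order = np.argsort(depth)[::-1]
    range_image = np.full((h, w), -1.0, dtype=np.float32)
    range_image[v[order], u[order]] = depth[order]
    return range_image
```

The resulting (h, w) range image (optionally stacked with x, y, z, and intensity channels) is what the segmentation network consumes.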

## 🔗 Models and Pretrained Weights

Below are links to the models and pretrained weights used in this project:

## 📹 Example Videos

Check out these example videos showing TITAN-Net in action, generating breathtaking RGB panoramic images! 🎥

- Full Panoramic RGB Generation
- Panoramic RGB Generation

## Data Augmentation with Semantic Segmentation

See how easily semantic segmentation maps can be used for data augmentation in datasets like KITTI and Cityscapes! A minimal sketch of the idea follows the links below.

- KITTI Data Augmentation
- Cityscapes Data Augmentation
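
As a rough, hypothetical sketch of the idea (not code from this repository): once a generator can synthesize RGB images conditioned on a semantic map, new training samples can be produced by perturbing the map and resampling the generator's noise input. The `generator` interface and latent size below are assumptions made for illustration.

```python
import torch

@torch.no_grad()
def augment_from_semantics(generator, seg_map, n_variants=4, z_dim=256):
    """Produce several synthetic RGB images from a single semantic map.

    `generator` is a hypothetical semantic-map-conditioned generator that
    maps a one-hot tensor of shape (1, C, H, W) plus a noise vector to an
    RGB image; this illustrates the idea, not the actual TITAN-Net API.
    """
    # A horizontally flipped map gives a second, geometrically consistent view.
    maps = [seg_map, torch.flip(seg_map, dims=[3])]
    variants = []
    for m in maps:
        for _ in range(n_variants):
            z = torch.randn(1, z_dim)   # resample the noise per variant
            variants.append(generator(m, z))
    return torch.cat(variants, dim=0)
```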

## 📚 Citation

If you use TITAN-Net in your research, please consider citing our paper:

```bibtex
@inproceedings{cortinhal2021semanticsaware,
  title={Semantics-Aware Multi-Modal Domain Translation: From LiDAR Point Clouds to Panoramic Color Images},
  author={Cortinhal, Tiago and Kurnaz, Fatih and Aksoy, Eren Erdal},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  pages={3747--3756},
  year={2021}
}
```