
Introduction to Quant_Quant

   Quant_Quant (QQ for short) is a lightweight but powerful codebase for quantization-aware training (QAT). It covers both detection and classification tasks and integrates the majority of mainstream QAT methods.

Key Features & Capabilities

  • Complete codebase: Most major QAT algorithms are covered in QQ, including Uniform, DoReFa, LSQ, DSQ, APOT, and LSQ+ (see the fake-quantization sketch after this list).
  • Authoritative experimental results: Reliable code and reproducible results for state-of-the-art QAT methods.
  • Easy to use and extend: The code organization makes it easy to get started and friendly to customize for your own experiments.
  • Efficient training: For training efficiency, our code supports multi-machine parallel training.
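
All of the methods listed above build on the same primitive: fake quantization trained with a straight-through estimator (STE). Below is a minimal PyTorch sketch of that primitive; the function name and the simple per-tensor min/max scaling are illustrative assumptions, not the QQ API (methods like LSQ learn the scale instead).

```python
import torch


def uniform_fake_quant(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Fake-quantize x to 2**bits uniform levels, then dequantize."""
    qmin, qmax = 0, 2 ** bits - 1
    # Per-tensor min/max scaling; real methods calibrate or learn these.
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(-x.min() / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    x_hat = (q - zero_point) * scale
    # Straight-through estimator: the forward pass returns x_hat, but
    # gradients flow to x as if the rounding were the identity function.
    return x + (x_hat - x).detach()
```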

Who should consider using Quant_Quant

  • Researchers

    • Beginners who are interested in quantization algorithms and just want to try them easily.
    • Senior researchers who are familiar with the field and want to conduct academic experiments.
  • Microchip design practitioners

    • Those who want to follow research progress in QAT.
    • Those who want to deploy models on different chips easily.

The capabilities of QQ

QQ is a simple but strong codebase for quantization-aware training. Unlike existing public QAT codebases, we define a module named QuanTransformer that converts float operations to int operations according to the quantization method specified in the config file. This makes it easy to experiment with self-defined methods or backbones, because quantization methods and backbones are kept relatively separate and either can be swapped independently.
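
To illustrate the idea, here is a minimal sketch of such a transformer. It is a simplified assumption of the mechanism, not the actual QuanTransformer API: it walks a model and swaps each nn.Conv2d for a weight-fake-quantized version, with the bit-width taken from a config dict (the `w_bits` key and the `uniform_fake_quant` helper from the sketch above are hypothetical).

```python
import torch.nn as nn
import torch.nn.functional as F


class QuantConv2d(nn.Conv2d):
    """Conv2d whose weights are fake-quantized on every forward pass."""

    def __init__(self, *args, bits: int = 4, **kwargs):
        super().__init__(*args, **kwargs)
        self.bits = bits

    def forward(self, x):
        # uniform_fake_quant is the STE sketch from the feature list above.
        w = uniform_fake_quant(self.weight, self.bits)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


def transform(model: nn.Module, cfg: dict) -> nn.Module:
    """Recursively swap nn.Conv2d modules for QuantConv2d per the config."""
    for name, child in model.named_children():
        if isinstance(child, nn.Conv2d):
            q = QuantConv2d(
                child.in_channels, child.out_channels, child.kernel_size,
                stride=child.stride, padding=child.padding,
                bias=child.bias is not None, bits=cfg.get("w_bits", 4),
            )
            q.weight.data.copy_(child.weight.data)
            if child.bias is not None:
                q.bias.data.copy_(child.bias.data)
            setattr(model, name, q)
        else:
            transform(child, cfg)
    return model
```

Because the swap happens after the float model is built, the backbone code never needs to know which quantization method is in use.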

What’s more, we support quantization for both classification and detection tasks, which meets most researchers’ requirements. Please take a glance at the networks we have completed:

| Task | Dataset | Backbone | Quant Method |
| --- | --- | --- | --- |
| Classification | ImageNet, Cifar-10, Cifar-100 | Mobilenet_V2, RegNet, ResNet, ResNext, SEResNeXt, ShuffleNet_v1, ShuffleNet_v2, VGG | Uniform / LSQ / DSQ / APOT / LSQ+ |
| Detection | MSCOCO | RetinaNet, YOLO_V3, ATSS | Uniform / LSQ / DSQ / APOT / LSQ+ |
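
The method, bit-widths, and backbone are all selected in the config file. The fragment below is a hypothetical, mmcv-style sketch of what such a config could look like; the keys and values are assumptions, not the repository’s actual schema.

```python
# Hypothetical config fragment; the actual QQ config keys may differ.
quant = dict(
    type="LSQ",     # one of: Uniform, DoReFa, LSQ, DSQ, APOT, LSQ+
    w_bits=4,       # weight bit-width
    a_bits=4,       # activation bit-width
)
model = dict(
    type="ResNet",  # any backbone from the table above
    depth=18,
)
```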

The Structure of QQ

├── ModelAnalysisTools
├── QuanTransformer
├── QuantMmdetection
├── lowbit_classification
├── requirements.txt
└── README.md

QuantQuant is organized as shown above. QuanTransformer is the most important part: it defines the quantization methods and how to convert float operators into quantized ones. Classification and detection are implemented separately.

  • ModelAnalysisTools: integrates a set of tools for analyzing model performance.
  • QuanTransformer: implements the most-cited QAT algorithms, including Uniform, LSQ, DSQ, DoReFa, and APOT. We will keep abreast of developments in this area and update our code continuously; a sketch of plugging in a custom method follows this list.
  • lowbit_classification: the classification module implements many classification backbones, covering the needs of both academic research and industrial deployment.
  • QuantMmdetection: quantization for detection has gained more and more attention, yet few papers have studied the problem. Since detection is a significant part of computer vision, we want to construct a baseline for it and attract more attention to the problem.
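
As a sketch of the extensibility claim above, a registry-based design (common in mmcv-style codebases) lets you plug in a custom quantizer without touching backbone code. The registry, decorator, and method name below are assumptions for illustration, not the verified QQ API.

```python
import torch

QUANTIZERS = {}  # hypothetical registry; the real QQ registry may differ


def register(name):
    """Decorator that records a quantizer class under a config name."""
    def deco(cls):
        QUANTIZERS[name] = cls
        return cls
    return deco


@register("my_method")
class MyQuantizer(torch.nn.Module):
    def __init__(self, bits: int = 4):
        super().__init__()
        self.bits = bits

    def forward(self, x):
        # Start from the uniform STE sketch above and modify as needed.
        return uniform_fake_quant(x, self.bits)
```

A config entry such as type="my_method" would then resolve to MyQuantizer when the transformer builds the quantized model.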

Installation & Documentation

How can we start an experiment?

See details in Quick Start on Classification Task.md and Quick Start on Detection Task.md.

Getting Help

To start, check the documents in docs. If you can’t find help there, try searching GitHub issues; we are pleased to fix bugs and respond to your questions.

Update

  • 8/2021: Released QuantQuant v0.9

License

QuantQuant is released under the Apache 2.0 license.

Citing QuantQuant

If you use QuantQuant in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry.

@misc{quantquant,
  author =       {Tanfeiyang and Xianyuhaishu and Zhangyi and Zhoujianli and Likeyu},
  title =        {QuantQuant},
  howpublished = {\url{https://github.com/ProhetTeam/QuantQuant}},
  year =         {2018}
}
