diff --git a/docs/advanced/hgq.rst b/docs/advanced/hgq.rst
index 9d6e926a8..bbd25fe9e 100644
--- a/docs/advanced/hgq.rst
+++ b/docs/advanced/hgq.rst
@@ -1,6 +1,6 @@
-==============
+===================================
 High Granularity Quantization (HGQ)
-==============
+===================================
 
 .. image:: https://github.com/calad0i/HGQ/actions/workflows/sphinx-build.yml/badge.svg
    :target: https://calad0i.github.io/HGQ/
@@ -9,13 +9,13 @@ High Granularity Quantization (HGQ)
 .. image:: https://img.shields.io/badge/arXiv-2405.00645-b31b1b.svg
    :target: https://arxiv.org/abs/2405.00645
 
-High Granularity Quantization (HGQ) is a gradient-based automatic bitwidth optimization and quantization-aware training algorithm for neural networks to be deployed on FPGAs, By laveraging gradients, it allows for bitwidth optimization at arbitrary granularity, up to per-weight and per-activation level.
+High Granularity Quantization (HGQ) is a package that performs gradient-based automatic bitwidth optimization and quantization-aware training for neural networks to be deployed on FPGAs. By leveraging gradients, it allows for bitwidth optimization at arbitrary granularity, up to the per-weight and per-activation level.
 
 .. image:: https://calad0i.github.io/HGQ/_images/overview.svg
    :alt: Overview of HGQ
    :align: center
 
-Conversion of models from High Granularity Quantization (HGQ) is fully supported. The HGQ models are first converted to proxy model format, which can then be parsed by hls4ml bit-accurately. Below is an example of how to create a HGQ model and converting it to hls4ml.
+Conversion of models made with the HGQ library is fully supported. HGQ models are first converted to the proxy model format, which can then be parsed by hls4ml bit-accurately. Below is an example of how to create a model with HGQ and convert it to an hls4ml model.
 
 .. code-block:: Python