
Fix wording and style
calad0i authored and vloncar committed Nov 13, 2024
1 parent 7f39793 commit 45ef73b
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions docs/advanced/hgq.rst
@@ -1,6 +1,6 @@
-==============
+===================================
 High Granularity Quantization (HGQ)
-==============
+===================================

.. image:: https://github.com/calad0i/HGQ/actions/workflows/sphinx-build.yml/badge.svg
:target: https://calad0i.github.io/HGQ/
@@ -9,13 +9,13 @@ High Granularity Quantization (HGQ)
.. image:: https://img.shields.io/badge/arXiv-2405.00645-b31b1b.svg
:target: https://arxiv.org/abs/2405.00645

-High Granularity Quantization (HGQ) is a gradient-based automatic bitwidth optimization and quantization-aware training algorithm for neural networks to be deployed on FPGAs, By laveraging gradients, it allows for bitwidth optimization at arbitrary granularity, up to per-weight and per-activation level.
+`High Granularity Quantization (HGQ) <https://github.com/calad0i/HGQ/>`_ is a library that performs gradient-based automatic bitwidth optimization and quantization-aware training for neural networks to be deployed on FPGAs. By leveraging gradients, it allows for bitwidth optimization at arbitrary granularity, up to the per-weight and per-activation level.

.. image:: https://calad0i.github.io/HGQ/_images/overview.svg
:alt: Overview of HGQ
:align: center

-Conversion of models from High Granularity Quantization (HGQ) is fully supported. The HGQ models are first converted to proxy model format, which can then be parsed by hls4ml bit-accurately. Below is an example of how to create a HGQ model and converting it to hls4ml.
+Conversion of models made with the HGQ library is fully supported. The HGQ models are first converted to the proxy model format, which can then be parsed by hls4ml bit-accurately. Below is an example of how to create a model with HGQ and convert it to an hls4ml model.

.. code-block:: Python
