
Add doc for HGQ #1117

Merged (2 commits) on Nov 13, 2024
Conversation

@calad0i (Contributor) commented Nov 8, 2024

Description

Added a documentation page for HGQ in the advanced section.

Type of change

  • Documentation update

Tests

Not applicable

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

@vloncar (Contributor) left a comment:

Looks good, though I would improve the text somewhat.

I also see that the main HGQ documentation could be updated in a similar way; let me know if you want feedback on that.

.. image:: https://img.shields.io/badge/arXiv-2405.00645-b31b1b.svg
    :target: https://arxiv.org/abs/2405.00645

High Granularity Quantization (HGQ) is a gradient-based automatic bitwidth optimization and quantization-aware training algorithm for neural networks to be deployed on FPGAs, By leveraging gradients, it allows for bitwidth optimization at arbitrary granularity, up to per-weight and per-activation level.
Contributor:

"...to be deployed on FPGAs, ...". You wanted an end of sentence there, not a comma.

    :alt: Overview of HGQ
    :align: center

Conversion of models from High Granularity Quantization (HGQ) is fully supported. The HGQ models are first converted to proxy model format, which can then be parsed by hls4ml bit-accurately. Below is an example of how to create a HGQ model and converting it to hls4ml.
Contributor:

You already introduced the HGQ abbreviation; no need to write it as High Granularity Quantization (HGQ) again. Also, it would be clearer if you referred to HGQ as a library in the first sentence, e.g. "Conversion of models made with the HGQ library is fully supported".

In the last sentence, pick one style: either "example of how to create ... and convert ..." or "example of creating ... and converting ...".
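As a companion to the quoted paragraph, here is a sketch of the proxy-model conversion path it describes, assuming HGQ's trace_minmax and to_proxy_model helpers together with hls4ml's convert_from_keras_model; the calibration data, backend, output directory, and FPGA part are placeholders.

# Sketch only: converting a trained HGQ model to hls4ml via the
# proxy model format. x_train, backend, output_dir, and part are
# placeholders for the user's own data and target.
from HGQ import trace_minmax, to_proxy_model
from hls4ml.converters import convert_from_keras_model

# Calibrate activation ranges on representative data before conversion.
trace_minmax(model, x_train, cover_factor=1.0)

# Produce the fixed-precision proxy model that hls4ml parses bit-accurately.
proxy = to_proxy_model(model)

hls_model = convert_from_keras_model(
    proxy,
    backend='vivado',
    output_dir='hls4ml_prj',
    part='xcvu9p-flga2104-2-e',
)
hls_model.compile()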

@@ -0,0 +1,49 @@
==============
High Granularity Quantization (HGQ)
==============
Contributor:

GitHub suggests that the overline is too short.

@vloncar merged commit 5616e5a into fastmachinelearning:main on Nov 13, 2024. 5 checks passed.