Add doc for HGQ #1117
Conversation
Looks good, though I would improve the text somewhat.
I also see the main HGQ documentation could be updated in a similar way; let me know if you want feedback on that.
docs/advanced/hgq.rst
Outdated
.. image:: https://img.shields.io/badge/arXiv-2405.00645-b31b1b.svg
   :target: https://arxiv.org/abs/2405.00645

High Granularity Quantization (HGQ) is a gradient-based automatic bitwidth optimization and quantization-aware training algorithm for neural networks to be deployed on FPGAs, By laveraging gradients, it allows for bitwidth optimization at arbitrary granularity, up to per-weight and per-activation level.
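For readers of this thread, a minimal sketch of what such a model definition looks like, assuming the public HGQ layer API (`HQuantize`, `HDense`, and the `beta` regularization weight); the layer names and arguments reflect the HGQ library as published, not necessarily the exact snippet this PR adds:

```python
# Minimal sketch, assuming the HGQ layer API: beta sets the strength of the
# BOPs (bit-operations) regularizer that drives the gradient-based bitwidth
# optimization, down to per-weight and per-activation granularity.
import keras
from HGQ.layers import HQuantize, HDense

model = keras.models.Sequential([
    HQuantize(beta=1e-5),                      # quantizes the model input
    HDense(32, activation='relu', beta=1e-5),  # bitwidths learned during training
    HDense(10, beta=1e-5),
])
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```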
"...to be deployed on FPGAs, ...". You wanted an end of sentence there, not a comma.
docs/advanced/hgq.rst
Outdated
   :alt: Overview of HGQ
   :align: center

Conversion of models from High Granularity Quantization (HGQ) is fully supported. The HGQ models are first converted to proxy model format, which can then be parsed by hls4ml bit-accurately. Below is an example of how to create a HGQ model and converting it to hls4ml.
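To make the referenced create-and-convert flow concrete, here is a hedged sketch assuming HGQ's `trace_minmax`/`to_proxy_model` utilities and hls4ml's `convert_from_keras_model`; `model` is the trained HGQ model and `x_train` stands in for the user's calibration data:

```python
# Sketch of the HGQ -> proxy model -> hls4ml flow. `x_train` is placeholder
# calibration data used to record activation ranges after training.
from HGQ import trace_minmax, to_proxy_model
from hls4ml.converters import convert_from_keras_model

trace_minmax(model, x_train, cover_factor=1.0)  # record activation min/max
proxy = to_proxy_model(model)                   # bit-accurate proxy Keras model

hls_model = convert_from_keras_model(proxy, output_dir='hgq_prj', backend='Vivado')
hls_model.compile()
```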
You already introduced the HGQ abbreviation, so there is no need to write it out as High Granularity Quantization (HGQ) again. Also, it would be clearer if you referred to HGQ as a library in the first sentence, e.g. "Conversion of models made with the HGQ library is fully supported".
In the last sentence, pick one style: either "example of how to create ... and convert ..." or "example of creating ... and converting ...".
docs/advanced/hgq.rst
Outdated
@@ -0,0 +1,49 @@
==============
High Granularity Quantization (HGQ)
==============
GitHub suggests that the overline is too short.
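For reference, reST requires the overline and underline to be at least as long as the title text, so the fixed heading would read:

```rst
===================================
High Granularity Quantization (HGQ)
===================================
```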
Description
Added a documentation page for HGQ in the advanced section.
Type of change
Documentation update
Tests
Not applicable
Checklist
I have run pre-commit on the files I edited or added.