Commit 24c1e2b: 0.2.0b1

calad0i committed Nov 27, 2023 (1 parent: 80aea82)
Showing 3 changed files with 7 additions and 2 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/sphinx-build.yml
```diff
@@ -1,4 +1,4 @@
-name: Sphinx document with GitHub Pages dependencies preinstalled
+name: Documentation

 on:
   # Runs on pushes targeting the default branch
```
5 changes: 5 additions & 0 deletions README.md
```diff
@@ -3,6 +3,11 @@

 # High Granularity Quantization

+[![License Apache 2.0](https://img.shields.io/badge/license-Apache%202.0-green.svg)](LICENSE)
+[![Documentation](https://github.com/calad0i/HGQ/actions/workflows/sphinx-build.yml/badge.svg)](https://calad0i.github.io/HGQ/)
+[![PyPI version](https://badge.fury.io/py/hgq.svg)](https://badge.fury.io/py/hgq)
+
+
 HGQ is a framework for quantization-aware training of neural networks to be deployed on FPGAs, which allows for per-weight and per-activation bitwidth optimization.

 Depending on the specific [application](https://arxiv.org/abs/2006.10159), HGQ can achieve up to 10x resource reduction compared to the traditional `AutoQKeras` approach while maintaining the same accuracy. For more challenging [tasks](https://arxiv.org/abs/2202.04976), where the model is already under-fitted, HGQ can still improve performance at the same on-board resource consumption. For more details, please refer to our paper (link coming not too soon).
```
2 changes: 1 addition & 1 deletion pyproject.toml
```diff
@@ -5,7 +5,7 @@ build-backend = "setuptools.build_meta"

 [project]
 name = "HGQ"
-version = "0.2.0-rc1"
+version = "0.2.0b1"
 authors = [{ name = "Chang Sun", email = "[email protected]" }]
 description = "High Granularity Quantizarion"
 readme = "README.md"
```
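The version change from `0.2.0-rc1` to `0.2.0b1` is more than cosmetic: under PEP 440, a beta (`b1`) sorts *before* a release candidate (`rc1`), so this commit moves the project backwards in pre-release ordering. The sketch below illustrates that ordering with a minimal, self-contained comparator; the `parse` helper is illustrative only and is not a full PEP 440 parser (real tooling would use the `packaging` library's `Version` class).

```python
import re

# Rank of pre-release phases per PEP 440: alpha < beta < release candidate < final.
_PHASE_RANK = {"a": 0, "b": 1, "rc": 2, None: 3}

def parse(version: str) -> tuple:
    """Parse versions like '0.2.0b1' or '0.2.0-rc1' into a sortable tuple.

    Illustrative only: handles X.Y.Z with an optional a/b/rc suffix,
    not the full PEP 440 grammar.
    """
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:-?(a|b|rc)(\d+))?", version)
    if not m:
        raise ValueError(f"unsupported version string: {version}")
    major, minor, patch = int(m[1]), int(m[2]), int(m[3])
    phase, num = m[4], int(m[5]) if m[5] else 0
    return (major, minor, patch, _PHASE_RANK[phase], num)

# Beta precedes release candidate, which precedes the final release:
assert parse("0.2.0b1") < parse("0.2.0-rc1") < parse("0.2.0")
```

So installers that were already resolving `0.2.0-rc1` would treat `0.2.0b1` as an earlier pre-release, which is presumably the intent of renaming the tag here.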
