make auto the default for layer config #1016

Merged · 25 commits · Sep 11, 2024

Commits
a57246e
make auto the default for layers
jmitrevs May 30, 2024
9a6c0ad
Merge remote-tracking branch 'upstream/main' into keras-config-auto
jmitrevs May 31, 2024
38f522f
add max_precision, not currently used
jmitrevs Jun 3, 2024
8510705
add maximum precision in standard precision inference
jmitrevs Jun 14, 2024
c4de63f
Merge branch 'fastmachinelearning:main' into keras-config-auto
jmitrevs Jun 14, 2024
244e9a3
Merge branch 'main' into keras-config-auto
jmitrevs Jul 17, 2024
68796dd
minimal handling of other types in infer_precision (e.g. for binary)
jmitrevs Jul 17, 2024
807fbe5
add more checks for max precision
jmitrevs Jul 17, 2024
92dc478
fix the incorrect setting of reuse factors
jmitrevs Jul 17, 2024
b29705e
update tests to pass backend to config_from_*
jmitrevs Jul 17, 2024
9414960
fix parameters syntax error introduced in pytest commit
jmitrevs Jul 17, 2024
a5a36da
add basic type inference for embedding
jmitrevs Jul 18, 2024
141cb2b
add placeholder precision inference for rnn
jmitrevs Jul 18, 2024
6d2a5f5
fix syntax error in test_qkeras
jmitrevs Jul 18, 2024
bc29e0f
fix up test_trace
jmitrevs Jul 18, 2024
e42d0d8
don't pass auto in test_attributes
jmitrevs Jul 18, 2024
6340655
update documentation
jmitrevs Jul 18, 2024
3a2fa00
update documentation (2)
jmitrevs Jul 18, 2024
c17b181
Merge remote-tracking branch 'upstream/main' into keras-config-auto
jmitrevs Aug 21, 2024
b718580
move some optimizers before infering precision type
jmitrevs Aug 21, 2024
09a4d4e
move up the channnels_last_converter
jmitrevs Aug 24, 2024
55abefc
put missing precision_merge logic in infer_preicion and delete, reord…
jmitrevs Aug 25, 2024
910f81a
add type inference to catapult
jmitrevs Aug 25, 2024
8412f7c
Merge remote-tracking branch 'upstream/main' into keras-config-auto
jmitrevs Sep 3, 2024
fe76722
Merge branch 'main' into keras-config-auto
vloncar Sep 11, 2024
38 changes: 35 additions & 3 deletions docs/api/configuration.rst
@@ -9,15 +9,18 @@ We currently support two ways of setting hls4ml's model configuration. This page

.. contents:: \

The Python API approach is recommended for most users as there are more utilities to help create the configuration dictionaries.

**NOTE:**

* One important part of ``hls4ml`` to remember is that the user is responsible for the format of the inputs. There is no automatic formatting or normalization, so this must be done in the training.

* For developers, you might also want to check out this section: `Detailed configuration in converted hls codes <#detailed-configuration-in-converted-hls-codes>`_.

----

@@ -31,11 +34,26 @@ Using hls4ml, you can quickly generate a simple configuration dictionary from a
import hls4ml
config = hls4ml.utils.config_from_keras_model(model, granularity='model')
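At ``granularity='model'`` the returned dictionary holds a single model-wide block that can be edited like any Python dictionary. The sketch below is illustrative only; the exact keys and defaults depend on the hls4ml version:

```python
# Illustrative shape of a model-granularity configuration; the keys shown
# here are typical, but the exact set depends on the hls4ml version.
config = {
    'Model': {
        'Precision': 'fixed<16,6>',
        'ReuseFactor': 1,
        'Strategy': 'Latency',
    }
}

# Model-wide settings are edited directly on the dictionary:
config['Model']['ReuseFactor'] = 2
print(config['Model']['ReuseFactor'])  # 2
```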

For more advanced and detailed configuration, you can also set them through the created dictionary. For example, to change the reuse factor:
This Python dictionary can be edited as needed. A more advanced configuration can be generated, for example, with:

.. code-block:: python

import hls4ml
config = hls4ml.utils.config_from_keras_model(
model,
granularity='name',
default_precision='fixed<16,6>',
backend='Vitis')

This will include a per-layer configuration based on the model. Including the backend is recommended because some configuration options depend on it. Note that the precisions at the higher granularities usually default to ``'auto'``, which means that ``hls4ml`` will try to set them automatically. Higher-granularity settings take precedence over model-level settings. See :py:class:`~hls4ml.utils.config.config_from_keras_model` for more information on the various options.
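The precedence rule can be sketched in plain Python. The layer and key names below are illustrative, not taken from a real model:

```python
# Sketch of how per-layer ('LayerName') settings shadow model-level
# ('Model') defaults: a lookup falls back to the 'Model' block only when
# the layer has no override of its own.
config = {
    'Model': {'Precision': 'fixed<16,6>', 'ReuseFactor': 1},
    'LayerName': {
        'fc1': {'Precision': {'weight': 'auto'}, 'ReuseFactor': 4},
    },
}

def effective_reuse_factor(cfg, layer):
    # Layer-specific value wins when present; otherwise use the model default.
    layer_cfg = cfg.get('LayerName', {}).get(layer, {})
    return layer_cfg.get('ReuseFactor', cfg['Model']['ReuseFactor'])

print(effective_reuse_factor(config, 'fc1'))  # 4: layer override wins
print(effective_reuse_factor(config, 'fc2'))  # 1: falls back to the model default
```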

One can override specific values before using the configuration:

.. code-block:: python

config['Model']['ReuseFactor'] = 2
config['LayerName']['fc1']['ReuseFactor'] = 2

Or to set the precision of a specific layer's weight:

@@ -45,6 +63,20 @@

To better understand how the configuration hierarchy works, refer to the next section for more details.

Finally, the configuration is used to create an HLS model:

.. code-block:: python

hls_model = hls4ml.converters.convert_from_keras_model(
model,
hls_config=config,
output_dir="my_project_dir",
io_type='io_stream',
backend='Vitis'
)

See :py:class:`~hls4ml.converters.convert_from_keras_model` for more information on the various options.

----

2. YAML Configuration file
2 changes: 1 addition & 1 deletion docs/setup.rst
@@ -57,7 +57,7 @@ To run FPGA synthesis, installation of following tools is required:

* Xilinx Vivado HLS 2018.2 to 2020.1 for synthesis for Xilinx FPGAs

* Vitis HLS 2022.1 or newer is required for synthesis for Xilinx FPGAs using the experimental ``Vitis`` backend.
* Vitis HLS 2022.2 or newer is required for synthesis for Xilinx FPGAs using the ``Vitis`` backend.

* Intel Quartus 20.1 to 21.4 for the synthesis for Intel FPGAs

2 changes: 1 addition & 1 deletion docs/status.rst
@@ -81,7 +81,7 @@ Other feature notes:
* ``hls4ml`` is tested on Linux, and supports
* Vivado HLS versions 2018.2 to 2020.1
* Intel HLS versions 20.1 to 21.4
* Vitis HLS versions 2020.2 to 2022.2 (experimentally)
* Vitis HLS versions 2022.2 to 2024.1
* Windows and macOS are not supported
* BDT support has moved to the `Conifer <https://github.com/thesps/conifer>`__ package

1 change: 1 addition & 0 deletions hls4ml/backends/catapult/catapult_backend.py
@@ -110,6 +110,7 @@ def _register_flows(self):
'catapult:inplace_stream_flatten',
'catapult:skip_softmax',
'catapult:fix_softmax_table_size',
'infer_precision_types',
]
optimization_flow = register_flow('optimize', optimization_passes, requires=[init_flow], backend=self.name)

15 changes: 6 additions & 9 deletions hls4ml/model/optimizer/__init__.py
@@ -33,9 +33,8 @@
register_flow(
'convert',
[
'seperable_to_depthwise_and_conv', # has to be before precision inference
'infer_precision_types',
'channels_last_converter',
'seperable_to_depthwise_and_conv',
'remove_transpose_before_flatten',
'remove_nop_transpose',
'remove_single_channel_transpose',
@@ -45,19 +44,17 @@
'qkeras_factorize_alpha',
'extract_ternary_threshold',
'fuse_consecutive_batch_normalization',
'fuse_batch_normalization',
'replace_multidimensional_dense_with_conv',
'enforce_proxy_model_embedded_config',
'eliminate_linear_activation',
# many of the above optimizers need to be done before this
'infer_precision_types',
],
) # TODO Maybe not all QKeras optimizers belong here?

register_flow(
'optimize',
[
'eliminate_linear_activation',
'fuse_consecutive_batch_normalization',
'fuse_batch_normalization',
'infer_precision_types',
'set_precision_concat',
],
[],
requires=['convert'],
)