
Remove extras flow #1067

Merged 3 commits into fastmachinelearning:main on Sep 26, 2024

Conversation

vloncar
Contributor

@vloncar vloncar commented Sep 24, 2024

Description

The Vivado, Quartus and Catapult backends have an "extras" flow that collects all optimizers not part of any existing flow; in normal operation it should be empty. It was only ever intended as a development/debug feature and was created before the flows became as granular as they are now. It has since become a hindrance and can trip up genuine development/debug efforts. This PR removes the extras flow and replaces it with a warning.

Type of change

  • Other (Specify) - Change in internal behavior, invisible to users

Tests

There are numerous ways to test this; as a developer, one can simply ensure that one optimizer is not registered to a backend flow and observe the warning. This can be done by commenting out any of the existing optimizers in any backend flow, or by creating a new optimizer in the passes directory.

Creating a test case for this is not really useful and would be very hacky, as it requires "reloading" a backend: first removing the registered optimizers and flows, then registering the new dummy optimizer, and finally reloading the backend in the backend_map. I suggest we don't do this.
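If such a test were ever attempted, the warning itself would be easy to assert with pytest.warns. A minimal self-contained sketch (the emit_extras_warning helper is hypothetical, standing in for the backend code that reports unscheduled optimizers; the hard part described above, reloading the backend, is exactly what this sketch omits):

```python
import warnings

import pytest


def emit_extras_warning(extras):
    # Hypothetical stand-in for the backend code that reports
    # optimizers not scheduled in any flow.
    for opt in extras:
        warnings.warn(f'Optimizer "{opt}" is not part of any flow and will not be executed.')


def test_extras_warning():
    # pytest.warns fails the test if no matching warning is emitted.
    with pytest.warns(UserWarning, match='not part of any flow'):
        emit_extras_warning(['dummy_optimizer'])
```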

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

@vloncar vloncar requested a review from jmitrevs September 24, 2024 19:18
else:
    extras_flow = None
for opt in extras:
    print(f'WARNING: Optimizer "{opt}" is not part of any flow and will not be executed.')
Contributor

Is it better to print a warning or to do warnings.warn()? I am fine either way.

Contributor (Author)

There's a nice description of warnings vs. logging in the Python documentation:

  • warnings.warn() in library code if the issue is avoidable and the client application should be modified to eliminate the warning

  • A logger’s warning() method if there is nothing the client application can do about the situation, but the event should still be noted

By this logic we should use warnings.warn() instead of our way of "logging" with print(). I'll update the PR.

On a related note, I was thinking that after we merge the large PRs for 1.0 we do a simple pass and change all print() statements to logging.info() or logging.warning(), and eventually use logging.debug() throughout the code to help with development/debugging. There was never a good time to do this, as it would have affected large unmerged codebases, but it seems that after 1.0 things will finally quiet down enough for this to be done.
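For reference, the distinction the docs draw looks roughly like this in code. This is a minimal sketch; schedule_optimizers, the flow structure, and the logger name are made up for illustration and are not hls4ml's actual API:

```python
import logging
import warnings

logger = logging.getLogger('hls4ml')  # hypothetical logger name


def schedule_optimizers(flows, registered):
    """Warn about registered optimizers that no flow schedules.

    The caller can fix this by adding the optimizer to a flow, so per the
    Python docs warnings.warn() is the right channel here. Events the
    caller cannot act on would instead go through logger.warning(...).
    """
    scheduled = {opt for flow in flows for opt in flow}
    for opt in registered:
        if opt not in scheduled:
            warnings.warn(f'Optimizer "{opt}" is not part of any flow and will not be executed.')


# Example: "fuse_bn" is registered but not scheduled in any flow.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    schedule_optimizers([['convert'], ['optimize']], ['convert', 'optimize', 'fuse_bn'])
```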

@jmitrevs jmitrevs added the please test Trigger testing by creating local PR branch label Sep 24, 2024
@jmitrevs
Copy link
Contributor

I assume the CI failure is unrelated. I decided to run the pytests as a sanity check, but if they seem to be running fine I'll approve.

@jmitrevs jmitrevs added please test Trigger testing by creating local PR branch and removed please test Trigger testing by creating local PR branch labels Sep 25, 2024
@jmitrevs jmitrevs merged commit bd69272 into fastmachinelearning:main Sep 26, 2024
7 of 9 checks passed
jmitrevs added a commit to jmitrevs/hls4ml that referenced this pull request Oct 1, 2024
vloncar pushed a commit that referenced this pull request Oct 25, 2024
* snapshot adding oneapi

* fix reduce constexpr

* further updates

* update the bridge and testbench

* fix issues discovered when compiling

* update bridge writing files

* build library (but not tested)

* fix a bug in testbench

* snapshot after some debugging

* remove forgotten debug printing

* add build

* pre-commit fixes

* fix more pre-commit

* fix more pre-commit errors

* snapshot of work before reworking types

* Use using to decide array type, some preliminary updates

* snapshot unifying types

* fix the testbench and bridge

* snapshot updating nnet_utils (not finished)

* define array in nnet_types for oneAPI

* fix parallel conv2d

* add back the streaming versions of algs, most unconverted

* tentatively complete streaming for dense but not functional

* first version that compiles streaming

* change how the pipe value type is extracted

* fix pre-commit error

* always treat elu as ELU class

* fix batchnorm

* snapshot towards fixing conv

* snapshot fixing test for streaming

* fix conv1d

* fix conv2d

* fix reshape and flatten for oneAPI

* initial oneAPI tests

* remove nnet_dense_compressed from oneAPI

* add merge functionality (untested)

* fix merge for oneAPI

* fix merge for oneAPI (missing commit)

* add zeropadding

* standardize paralellization spelling

* fix pointwise for oneAPI

* remove references to quartus

* more replace quartus with oneapi

* snapshot on the way towards implementing pooling

* fix io_stream pooling for oneAPI

* add fix for Conv2DBatchnorm

* accidentally committed CMakeLists.txt in my debug setup

* reshaping, not fully tested

* fix cloning of streams

* fix pytest library loading

* remove unused template

* fix some activation bugs

* fix the overwriting of directories in the pytest

* update version of test repository

* try to fix docker issue

* bump hls4ml-testing tag to 0.5.2

* try not restricting tensorflow-model-optimizatoin

* Update to 0.5.3 for testing

* bump to docker image 0.5.4, suggested by Ben

* fix pre-commit warning

* dial down N_TESTS_PER_YAML to 4

* revert tensorflow-model-optimization change

* fix issue of saving in "obsolete" h5 format

* fix embedding for oneAPI

* First attempt at adding RNNs to oneAPI

* fix bug in array size

* fix order or indices

* make queues static in bridge

* fix logic error in repack stream

* changing the style, but functionally identical

* update pointwise optimizer for oneAPI

* add oneAPI to test_multi_dense.py

* fix updating weight types

* initial changes of templates, for testing

* fix weight naming, product selection

* make im2col the default; fix winograd size

* fix up streaming dense and convolution

* fix prelu, some batchnorm

* fix weight array of exponential types

* move ACExponentialPrecisionDefinition to oneapi_types

* attempt to fix batchnorm and recurrent

* fixed BatchNormalizationQuantizedTanhConfigTemplate template selection

* fix embedding_stream

* fix lstm and simple rnn

* fix GRU

* fix winograd, and also disable it by default

* fix threshold name

* split bn_quant to be backend-specific

* add type inference to oneAPI

* add oneAPI to pytorch tests

* fix pooling with padding for oneAPI and Quartus

* Compilation for larger models enabled by increasing -fconstexpr-steps

* add oneapi clone tests; remove reduntand multi_clone test

* remove some attributes to avoid overwrite warnings

* make extra handling for oneAPI like others (as in PR #1067)

* remove warnings for extra optimizers that are not scheduled on purpose

* update parametrized activations

* fix reference to alpha that had not been switched to param

* add oneapi documentation

* add parallelization factor to the attributes for oneAPI

---------

Co-authored-by: Lauri Laatu <[email protected]>
Co-authored-by: Jan-Frederik Schulte <[email protected]>
Labels
please test Trigger testing by creating local PR branch
2 participants