This repository has been archived by the owner on Dec 20, 2023. It is now read-only.

4 benchmark suite setup #9

Merged
merged 47 commits into from
Jun 9, 2023

Conversation

tyi1025
Owner

@tyi1025 tyi1025 commented May 15, 2023

@burgholzer Hi, I added some stuff. Feel free to look and point out anything that's not good. Besides, I have a couple of questions:

  1. Is it possible to run the CI pipeline without pushing? It might be ideal to see whether the CI passes before everything is already on GitHub
  2. About the current architecture: should I still implement the VerificationTask.cpp and SimulationTask.cpp or should I just initialize concrete tasks using their respective headers?
  3. I added nlohmann/json, and now I want to add QuantumCircuit from extern/ddsim/extern/qfr/mqt/qfr/qiskit/QuantumCircuit.hpp for the Task classes. Should I then add the whole mqt module?
  4. What exactly do you mean by "adaptor functions"?
  5. I might need some more guidance on the json test setup, but for this part let's wait until I get the source setup right first.

@tyi1025 tyi1025 requested a review from burgholzer May 15, 2023 18:50
@tyi1025 tyi1025 self-assigned this May 15, 2023
@tyi1025 tyi1025 linked an issue May 15, 2023 that may be closed by this pull request
@codecov

codecov bot commented May 15, 2023

Codecov Report

Merging #9 (bf32253) into main (e032131) will not change coverage.
The diff coverage is 100.0%.

@@           Coverage Diff           @@
##             main       #9   +/-   ##
=======================================
  Coverage   100.0%   100.0%           
=======================================
  Files           2       12   +10     
  Lines          12      106   +94     
=======================================
+ Hits           12      106   +94     
Impacted Files Coverage Δ
include/Executor.hpp 100.0% <100.0%> (ø)
include/Task.hpp 100.0% <100.0%> (ø)
...lude/executors/AlternatingVerificationExecutor.hpp 100.0% <100.0%> (ø)
include/executors/CircuitSimulatorExecutor.hpp 100.0% <100.0%> (ø)
include/tasks/SimulationTask.hpp 100.0% <100.0%> (ø)
include/tasks/VerificationTask.hpp 100.0% <100.0%> (ø)
src/executors/AlternatingVerificationExecutor.cpp 100.0% <100.0%> (ø)
src/executors/CircuitSimulatorExecutor.cpp 100.0% <100.0%> (ø)
src/tasks/SimulationTask.cpp 100.0% <100.0%> (ø)
src/tasks/VerificationTask.cpp 100.0% <100.0%> (ø)
... and 2 more

Collaborator

@burgholzer burgholzer left a comment


Just had a quick glance through everything here.
I added quite a few rather general comments regarding coding best practices and such.
The other half of the comments is topic-related. As of now, there is not much here, which makes it kind of hard to judge.
The goal for this PR should be a setup that allows specifying Simulation and Verification Executors and Tasks. In addition, this should also be tested.

Note that this only has to be

  • a single, basic simulator
  • a single verification method
  • only the most basic metrics (some general metrics as described in the comments and one special DD metric)
  • just simple test circuits. One for simulation and two for verification.

That should suffice to find out whether the general framework works. Based on that:

Review comments (now resolved) on include/Executor.hpp, include/ResultBase.hpp, include/Result.hpp, and include/Task.hpp.
@burgholzer
Collaborator

@burgholzer Hi, I added some stuff. Feel free to look and point out anything that's not good. Besides, I have a couple of questions:

  1. Is it possible to run the CI pipeline without pushing? It might be ideal to see whether the CI passes before everything is already on GitHub

Almost everything can be run before pushing:

  • You can build and run tests locally before pushing
  • clang-format and clang-tidy are natively integrated into CLion and should, by default, show warnings and reformat files properly
  • you can simply run pre-commit locally via pre-commit run -a or, even easier, have it run automatically as a git hook by installing the hooks once with pre-commit install
  • coverage can also be collected directly in CLion via the respective run option

That leaves only builds on platforms other than your host and the GitHub CodeQL checks, which are fine to just run in CI.

  2. About the current architecture: should I still implement the VerificationTask.cpp and SimulationTask.cpp or should I just initialize concrete tasks using their respective headers?

I hope this should be answered by the Review comments. Tasks should really only describe ... well... tasks to be passed to a simulator or a verifier.
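To illustrate that separation, a Task could be little more than a data holder. The following is a hypothetical sketch, not the actual code of this repository: the class and member names are made up, and a plain string stands in for the real quantum circuit type so the example stays self-contained.

```cpp
#include <string>
#include <utility>

// Hypothetical sketch: a Task only describes what should be run.
// It performs no execution itself; that is the Executor's job.
class Task {
public:
  explicit Task(std::string identifier) : identifier(std::move(identifier)) {}
  virtual ~Task() = default;
  [[nodiscard]] const std::string& getIdentifier() const { return identifier; }

private:
  std::string identifier;
};

// A simulation task just names the circuit to be simulated.
class SimulationTask : public Task {
public:
  explicit SimulationTask(std::string circuitFile)
      : Task("sim:" + circuitFile), circuit(std::move(circuitFile)) {}
  [[nodiscard]] const std::string& getCircuit() const { return circuit; }

private:
  std::string circuit;
};
```

With this shape, a VerificationTask would analogously carry two circuits and nothing else.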

  3. I added nlohmann/json, and now I want to add QuantumCircuit from extern/ddsim/extern/qfr/mqt/qfr/qiskit/QuantumCircuit.hpp for the Task classes. Should I then add the whole mqt module?

Also see the comments on nlohmann_json, which is available by default. Note that you are looking for the wrong quantum circuit class here. The QFR has the QuantumComputation class, which lives in the qc namespace and can be directly imported via #include "QuantumComputation.hpp".

  4. What exactly do you mean by "adaptor functions"?

In the sense of the Executor classes. They "adapt" the various individual simulators (etc.) to conform to a single, standardised interface and make sure to get the proper results from the execution.
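A minimal sketch of that adapter idea, with a stand-in simulator instead of a real one. All names here are hypothetical and only illustrate the pattern; they are not the actual classes in this repository.

```cpp
#include <string>

// A uniform result type that every executor fills in the same way.
struct Result {
  std::string executorName;
  double runtimeSeconds;
};

// The common interface: every concrete executor "adapts" one tool to it.
class Executor {
public:
  virtual ~Executor() = default;
  virtual Result execute(const std::string& task) = 0;
};

// Adapts a (stand-in) circuit simulator to the common interface.
class CircuitSimulatorExecutor : public Executor {
public:
  Result execute(const std::string& task) override {
    // A real implementation would construct the simulator from the task's
    // circuit, run it, and read the statistics from the simulator object.
    const double runtime = runStandInSimulator(task);
    return {"CircuitSimulatorExecutor", runtime};
  }

private:
  static double runStandInSimulator(const std::string& /*circuit*/) {
    return 0.0; // placeholder; no actual simulation happens here
  }
};
```

A verification executor would look the same from the outside, so benchmark code can iterate over a list of Executor pointers without caring what tool sits behind each one.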

  5. I might need some more guidance on the json test setup, but for this part let's wait until I get the source setup right first.

You should be able to (more or less) copy and paste this from the QMAP setup I linked.
Note that you should probably create two test targets in the long run: one that is intended to run in CI and runs quickly, with the purpose of testing the overall functionality, and one that runs the "real" evaluations that we are interested in and will take quite a while.
That second target should be excluded from CI and things like coverage, as it is never going to be run in the CI environment.
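One way that split could look in test/CMakeLists.txt. This is only a sketch under assumptions: the target and option names are made up, and the actual setup should follow the QMAP template mentioned above.

```cmake
include(GoogleTest)

# Hypothetical option guarding the long-running evaluation target.
option(BUILD_DD_EVAL_FULL_EVAL "Build the long-running evaluation target" OFF)

# Fast functional tests, registered with CTest and run in CI.
add_executable(dd_eval_test test_simexec_simple.cpp test_qcecexec_simple.cpp)
target_link_libraries(dd_eval_test PRIVATE gtest_main)
gtest_discover_tests(dd_eval_test)

if(BUILD_DD_EVAL_FULL_EVAL)
  # The "real" evaluations: built only on demand and intentionally not
  # registered with CTest, so CI and coverage jobs skip it automatically.
  add_executable(dd_eval_full test_simexec.cpp)
  target_link_libraries(dd_eval_full PRIVATE gtest_main)
endif()
```

The long-running target is then only built when configuring with -DBUILD_DD_EVAL_FULL_EVAL=ON.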

Review comments (now resolved) on src/CMakeLists.txt.
@tyi1025
Owner Author

tyi1025 commented May 18, 2023

@burgholzer It seems that CLion still does not recognize the two exemplary .cpp files in my test folder ("this file does not belong to any project target..."), despite them being in the CMake file. After some investigation, I think the problem might be that there is something I didn't configure about building the tests. Should I fix it or is it irrelevant for now?

@burgholzer
Collaborator

@burgholzer It seems that CLion still does not recognize the two exemplary .cpp files in my test folder ("this file does not belong to any project target..."), despite them being in the CMake file. After some investigation, I think the problem might be that there is something I didn't configure about building the tests. Should I fix it or is it irrelevant for now?

Have you configured the current CMake target with -DBUILD_DD_EVAL_TESTS=ON?
And have you actually added the test files to a test target in the test/CMakeLists.txt file?
Last thing: maybe try clearing the CMakeCache.
That should be something you get fixed in your setup.

@tyi1025
Owner Author

tyi1025 commented May 18, 2023

@burgholzer It seems that CLion still does not recognize the two exemplary .cpp files in my test folder ("this file does not belong to any project target..."), despite them being in the CMake file. After some investigation, I think the problem might be that there is something I didn't configure about building the tests. Should I fix it or is it irrelevant for now?

Have you configured the current CMake target with -DBUILD_DD_EVAL_TESTS=ON? And have you actually added the test files to a test target in the test/CMakeLists.txt file? Last thing: maybe try clearing the CMakeCache. That should be something you get fixed in your setup.

Thanks, it's fixed now. The problem was that I wrote ...DDEVAL_TESTS... instead of ...DD_EVAL_TESTS... in my CLion run configuration 🤦‍♂️

@tyi1025
Owner Author

tyi1025 commented May 18, 2023

In the sense of the Executor classes. They "adapt" the various individual simulators (etc.) to conform to a single, standardised interface and make sure to get the proper results from the execution.

This means that the adapter functions should be specified in individual subclasses of Executor instead of being declared in the headers, is that correct?

@burgholzer
Collaborator

In the sense of the Executor classes. They "adapt" the various individual simulators (etc.) to conform to a single, standardised interface and make sure to get the proper results from the execution.

This means that the adapter functions should be specified in individual subclasses of Executor instead of being declared in the headers, is that correct?

Yeah. I suspect some of these subclasses will be rather small or look similar. But this should allow for some more freedom when it comes to implementing various executors.

@tyi1025
Owner Author

tyi1025 commented May 18, 2023

@burgholzer Do you think the code structure as of now makes sense? Please note that the subclasses of Executor and Task are not finished yet, which is why I didn't formally re-request a review.

@burgholzer
Collaborator

@burgholzer Do you think the code structure as of now makes sense? Please note that the subclasses of Executor and Task are not finished yet, which is why I didn't formally re-request a review.

It's going in the right direction. It's still hard to judge because there are no tests and it isn't implemented end-to-end.
Please try to get a complete setup done and iterate on that. You have a lot of freedom in how to realize the overall architecture. It's on you to implement something that works and can be used. You shouldn't need my feedback on every single step along the way.

@tyi1025
Owner Author

tyi1025 commented May 19, 2023

@burgholzer Do you think the code structure as of now makes sense? Please note that the subclasses of Executor and Task are not finished yet, which is why I didn't formally re-request a review.

It's going in the right direction. It's still hard to judge because there are no tests and it isn't implemented end-to-end.
Please try to get a complete setup done and iterate on that. You have a lot of freedom in how to realize the overall architecture. It's on you to implement something that works and can be used. You shouldn't need my feedback on every single step along the way.

Understood, I will keep that in mind.
Although we agreed on a flexible schedule, could you maybe give me a time recommendation, e.g. when I should finish the benchmarking tasks so as not to fall behind schedule (primarily the TUM project deadline in the week of October 16th)? That would give me a better grasp of the big picture. I feel that some of my (stupid) questions might come from the "anxiety" of not knowing the deadline.

@burgholzer
Collaborator

I understand that feeling.
However, there is lots of time until October and lots of flexibility when it comes to the extent of this project. So there is really no need to worry. You'll get something done for sure.

As for more precise deadlines: the earlier this benchmark suite is "ready" or at least set up, the earlier you can work on extracting relevant statistics from the DD Package (which will definitely also require some time).
After that the real improvements on the DD Package happen.
If I had to guess, the benchmark suite setup here shouldn't take much more than a week or so.
Collecting the benchmarks and setting up the various simulators / verifiers might take another 2-3 days.
Gathering/logging all the statistics will probably require 2-3 weeks as you really have to learn about the DD package architecture at that point.
The rest of the time can then be spent on improvements of the package and aggregation/visualization of the benchmark results.

Hope that helps.

@tyi1025 tyi1025 marked this pull request as ready for review June 2, 2023 23:24
@tyi1025 tyi1025 requested a review from burgholzer June 2, 2023 23:25
Collaborator

@burgholzer burgholzer left a comment


Thanks for your work on this. Seems like you are getting more and more familiar with everything. I still believe this needs some major changes before actually being useful. You'll find many suggestions for changes in the comments below. I hope the respective explanations and suggestions are clear enough so that you can work with them.
I have the feeling that this PR should be more or less good to go after all the issues have been addressed.
Most of the feedback is rather straight to the point and should be easy to implement. The most open question probably concerns the tests and how to check the expected behavior. This might involve some creativity from your side.

Review comments (now resolved) on include/Task.hpp, src/VerificationTask.cpp, src/SimulationTask.cpp, include/SimulationTask.hpp, include/VerificationTask.hpp, test/equi_circuits.json, test/circuits.json, test/test_qcecexec_simple.cpp, test/test_simexec_simple.cpp, and test/test_simexec.cpp.
@tyi1025 tyi1025 force-pushed the 4-benchmark-suite-setup branch from 9f66121 to a4e752f Compare June 8, 2023 10:52
@tyi1025 tyi1025 requested a review from burgholzer June 9, 2023 17:20
Collaborator

@burgholzer burgholzer left a comment


Should be good to go 👍🏼

@tyi1025 tyi1025 merged commit 1788d78 into main Jun 9, 2023
@tyi1025 tyi1025 deleted the 4-benchmark-suite-setup branch June 9, 2023 17:25
Successfully merging this pull request may close these issues.

🔧 Benchmark Suite Setup