
Add benchmark script #81

Open · wants to merge 1 commit into base: main
Conversation

@sirmarcel (Contributor) commented Oct 15, 2024

Fixes #17

As discussed a while ago with @PicoCentauri , this adds a benchmark script, designed to emulate a "typical" training workload. The idea is that we agree to run this before major merges, to make sure we don't cause performance regressions.

The script runs Ewald and PME, with and without Metatensor, for CsCl crystals of various sizes. The results are saved as .yaml, with some diagnostic information. An example output can be seen in #80. The script already worked quite nicely to point out that something was wrong in that case...

We could consider making an additional repository for the .yaml files and a nice plot of performance, hopefully increasing over time.


📚 Documentation preview 📚: https://torch-pme--81.org.readthedocs.build/en/81/

@sirmarcel (Contributor, Author)
@kvhuguenin the times are reported, as far as I'm able to determine, per sample. However, the script allows for asynchronous dispatch, so in principle there could even be some parallelisation benefits... Maybe I should turn that off. Let's have a look.
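One way to rule out overlap from asynchronous dispatch is to synchronize before stopping the clock. A minimal sketch (the `per_sample_time` helper is hypothetical, not part of the script):

```python
import time

def per_sample_time(fn, repeats=10, synchronize=None):
    """Average wall-clock time per call of `fn`.

    Pass `synchronize=torch.cuda.synchronize` when timing on a GPU,
    so that all asynchronously dispatched kernels finish before the
    clock stops and no cross-sample overlap leaks into the number.
    """
    fn()  # warm-up call (compilation, caching allocator, ...)
    if synchronize is not None:
        synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    if synchronize is not None:
        synchronize()
    return (time.perf_counter() - start) / repeats

# CPU example; on CUDA one would pass synchronize=torch.cuda.synchronize
elapsed = per_sample_time(lambda: sum(range(10_000)))
```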

@PicoCentauri (Contributor)

Very nice @sirmarcel and thanks. This will be very useful.

Regarding storing: I had the idea of creating an orphan branch, maybe called benchmark, and putting the files there together with a plotting script and the figure.
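Such an orphan branch can be set up with plain git. A sketch, using a throwaway repository for illustration (the branch and file names are assumptions):

```shell
set -e
# demo in a throwaway repository; in practice run inside the torch-pme clone
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "ci@example.com" && git config user.name "ci"
echo placeholder > README && git add README && git commit -qm "init"

# --orphan starts a branch with no shared history, so benchmark
# results never bloat the main development history
git checkout -q --orphan benchmark
git rm -rfq .                           # begin from an empty tree
echo "results: example" > results.yaml  # stand-in for the real output
git add results.yaml && git commit -qm "Add benchmark results"
```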


.. code-block:: bash

pip install .[metatensor]
@PicoCentauri commented Oct 15, 2024

I think vesin, ase, etc. are also needed.

Usage
-----

First, make sure to install ``torch-pme`` with the right dependencies:

maybe even

Suggested change
First, make sure to install ``torch-pme`` with the right dependencies:
First, make sure to install ``torch-pme`` with the right dependencies in a fresh environment:

Comment on lines +23 to +28
devices = []
if torch.cuda.is_available():
devices.append("cuda")

# run CUDA first!
devices.append("cpu")

Suggested change
devices = []
if torch.cuda.is_available():
devices.append("cuda")
# run CUDA first!
devices.append("cpu")
devices = ["cpu"]
# run CUDA first!
if torch.cuda.is_available():
devices.insert(0, "cuda")


Suggested change
devices = []
if torch.cuda.is_available():
devices.append("cuda")
# run CUDA first!
devices.append("cpu")
devices = ["cuda", "cpu"] if torch.cuda.is_available() else ["cpu"]

Comment on lines +96 to +100
version = {
"torch": str(torch.__version__),
"torch-pme-commit": torch_pme_commit,
"torch-pme-status": torch_pme_status,
}

Maybe also add torchpme.__version__. Useless before we release, but useful in the future.
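One way to record package versions without assuming the attribute exists yet is the stdlib `importlib.metadata`; a sketch (the `package_version` helper is an illustration, not the script's code):

```python
from importlib import metadata

def package_version(name):
    """Installed version of `name`, or a placeholder if absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        # e.g. a development checkout installed without metadata,
        # or before the first release
        return "unreleased"

version = {
    "torch": package_version("torch"),
    "torch-pme": package_version("torch-pme"),
}
```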

with open(f"{timestamp}.yaml", "w") as file:
yaml.dump(out, file)

print("Have a nice day.")

Good old fhi-aims vibes. Like it.

@PicoCentauri commented Oct 15, 2024

Maybe we should also run this code somewhere in CI. Not as a benchmark, but just so we know we don't break anything. Maybe just a subprocess call in the test suite...

EDIT: Maybe one can even do a tox -e benchmark. This would create a fresh env automatically, but might be problematic on HPC systems where you can't easily install packages...

@ceriottm (Contributor)

It would be nice to make this a bit more customizable: setting the devices to run on, the sizes to try, etc. We can leave all of this as defaults, but if one is using this during development, for example, it makes sense to only run what's being optimized.
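A stdlib `argparse` front end would cover this; a sketch where the flag names and defaults are assumptions, chosen so that running with no arguments still performs the full benchmark:

```python
import argparse

# hypothetical CLI for the benchmark script
parser = argparse.ArgumentParser(description="torch-pme benchmark")
parser.add_argument("--devices", nargs="+", choices=["cuda", "cpu"],
                    default=["cuda", "cpu"],
                    help="devices to benchmark, in order")
parser.add_argument("--sizes", nargs="+", type=int,
                    default=[2, 4, 8, 16],
                    help="supercell repetitions to benchmark (assumed)")

# during development, restrict the run to what is being optimized:
args = parser.parse_args(["--devices", "cpu", "--sizes", "4"])
```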


import torchpme

primitive = read("geometry.in")

Maybe choose the file based on the location of this script?

Suggested change
primitive = read("geometry.in")
primitive = read(Path(__file__).parent / "geometry.in")

@@ -0,0 +1,344 @@
import datetime

Maybe even make it executable

Suggested change
import datetime
#!/usr/bin/env python
import datetime


Successfully merging this pull request may close these issues.

Perform benchmarks of PMEPotential
4 participants