
pytorch CUDA support within key4hep environment #688

Open
dudarboh opened this issue Dec 16, 2024 · 0 comments
Labels: enhancement (New feature or request)

Comments

@dudarboh

The current key4hep stack builds pytorch without CUDA support.
That means any HEP ML study that needs GPU training has to be carried out in a separate, custom user environment.
I think it would greatly help the reproducibility and integrability of ML studies if GPU training were possible within the same key4hep environment used for everything else, e.g. simulation, reconstruction, analysis, etc.
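For reference, a quick way to check what the stack currently provides (a minimal sketch, assuming a key4hep release has been sourced so that its py-torch is the one picked up):

```python
import torch

# Report which PyTorch build is picked up and whether it was compiled with CUDA.
print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda is not None)       # None for CPU-only builds
print("CUDA available at runtime:", torch.cuda.is_available())  # False in current key4hep builds
```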

As discussed privately with @tmadlener, including CUDA support for all O(10) CUDA architectures would increase the stack size by 4-5 GB (~30% of its current size) and add a significant maintenance burden.
So it probably only makes sense if many people are interested.
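To illustrate why the number of target architectures matters: each architecture adds its own set of compiled kernels to the binaries, which is what drives the size increase mentioned above. One can inspect which architectures a given PyTorch binary was compiled for (illustrative only; the exact list would depend on how a CUDA-enabled key4hep build were configured):

```python
import torch

# List the CUDA architectures this PyTorch binary was compiled for.
# Each entry corresponds to its own set of kernels in the fat binary;
# a CPU-only build returns an empty list.
print(torch.cuda.get_arch_list())  # e.g. ['sm_50', 'sm_60', ..., 'sm_90'] on a CUDA-enabled build
```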

@dudarboh added the enhancement (New feature or request) label Dec 16, 2024