The current key4hep stack builds PyTorch without CUDA support.
That means any HEP ML study that needs GPU training has to be performed in a separate, custom user environment.
I think it would greatly help the reproducibility and integrability of ML studies if GPU training were possible within the same key4hep environment where one runs all the other, non-ML tasks, e.g. simulation, reconstruction, analysis, etc.
As discussed privately with @tmadlener, including CUDA support for all O(10) CUDA architectures would increase the stack size by 4-5 GB (~30% of its current size) and add a lot of maintenance burden.
So it may only make sense if many people are interested.
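For reference, a quick way to check whether the PyTorch shipped in a sourced key4hep environment was built with CUDA, and a sketch of what the opt-in Spack build could look like (the `cuda_arch` values are illustrative, not a proposal for the exact architecture list):

```shell
# Check the CUDA status of the currently available PyTorch build.
# In a CPU-only build, torch.version.cuda is None and
# torch.cuda.is_available() returns False.
python -c 'import torch; print(torch.version.cuda, torch.cuda.is_available())'

# Sketch of a CUDA-enabled build via Spack's py-torch variants.
# Building for several architectures at once is what drives the
# 4-5 GB size increase mentioned above.
spack install py-torch +cuda cuda_arch=70,80,90
```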