Hyperparameter suggestions considering speed in LAMMPS #26
-
To reach a good balance of accuracy and efficiency in MD, is there any advice? My current settings are:

num_layers: 2
l_max: 2
parity: o3_full
env_embed_multiplicity: 16
two_body_latent_mlp_latent_dimensions: [32, 64, 128]
latent_mlp_latent_dimensions: [128]
edge_eng_mlp_latent_dimensions: [128, 64]

This is much smaller than configs/example.yaml and ends up at 0.368 ns/day with 7680 atoms on 4 GPUs, without Kokkos since our server doesn't support it :(
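For context, a minimal sketch of how such a deployed Allegro model is typically driven in LAMMPS through the pair_allegro plugin; deployed.pth, system.data, and the element label Si are placeholder names, not files from this thread:

# Minimal LAMMPS input sketch for a deployed Allegro model (pair_allegro).
# deployed.pth, system.data, and the element label Si are placeholders.
units           metal
atom_style      atomic

read_data       system.data

pair_style      allegro
pair_coeff      * * deployed.pth Si

timestep        0.001
fix             1 all nve
run             1000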
-
Hi @Hongyu-yu, thanks for your interest in our code. You're right, there is always a trade-off between accuracy and computational efficiency. The parameters you laid out are all important; here is what I usually scan:

latent_mlp_latent_dimensions: [128, 128, 128]

and use a silu nonlinearity in that MLP.

Hope this works.
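As a config fragment, that suggestion would look roughly like the sketch below; the latent_mlp_nonlinearity key name is assumed from configs/example.yaml and should be checked against your Allegro version:

# Sketch of the suggested settings; key names assumed from configs/example.yaml
latent_mlp_latent_dimensions: [128, 128, 128]
latent_mlp_nonlinearity: silu    # silu nonlinearity in the latent MLP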
-
Hi @simonbatzner, thanks for your detailed suggestions! How about …
-
(A side point is that we, like pretty much everyone else, see some very large performance differences between NVIDIA GPU generations, so 4 A100 >> 4 V100 >> etc.)