This code (matbench/matbench/task.py, line 214 at commit c3b910e) raises an error if a benchmark was previously written to disk with only some folds recorded.

For the purpose of splitting folds into slurm array jobs, it would be very useful if partial benchmarks could be read and written. I tried commenting out the validation line and everything appears to work fine. The line presumably has a purpose, but perhaps that purpose could be achieved differently while also allowing partial benchmark writing?
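To make the use case concrete, here is a minimal sketch of the intended slurm workflow: each array task picks one fold from `SLURM_ARRAY_TASK_ID` and would record only that fold. The helper names below are hypothetical stand-ins, not matbench's actual API.

```python
import os

# matbench tasks use 5 folds; each slurm array task handles one of them.
FOLDS = [0, 1, 2, 3, 4]


def fold_for_this_job() -> int:
    """Pick the fold for this job from the SLURM_ARRAY_TASK_ID env var."""
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))
    return FOLDS[task_id % len(FOLDS)]


def run_one_fold(fold: int) -> dict:
    # Hypothetical placeholder: train/predict on a single fold, then write
    # a partial benchmark file containing only that fold's results.
    return {"fold": fold, "recorded": True}


if __name__ == "__main__":
    run_one_fold(fold_for_this_job())
```

With `sbatch --array=0-4`, each array task would then write its own partial benchmark, to be merged and validated once all folds exist.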
This is a good point. I never really considered that people would be using Matbench in a parallel fashion, but now that they are, it makes sense to think about a more comprehensive and robust solution. I'll do some thinking on my side, but if you have ideas for how to do this while still allowing validation on loading, I'm open to suggestions.
Something off the top of my head is to introduce a conditional that validates only if all folds are recorded. The purpose of the validation is to check for any possible errors before a benchmark is saved as complete (and used by the doc builder to actually create the docs), to avoid downstream debugging chaos. But I can't immediately foresee any scenario where the benchmark checker would accept an incomplete task without error, so maybe a simple conditional would work.
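The conditional suggested above could look roughly like the sketch below: skip full validation until every fold has been recorded, so partial benchmarks can still be written and reloaded. Class and method names are illustrative, not matbench's real task API.

```python
class PartialTask:
    """Toy model of a task that tolerates partially recorded folds."""

    def __init__(self, n_folds: int = 5):
        self.n_folds = n_folds
        self.results = {}  # fold index -> recorded predictions

    def record(self, fold: int, predictions) -> None:
        self.results[fold] = predictions

    @property
    def all_folds_recorded(self) -> bool:
        return len(self.results) == self.n_folds

    def validate(self) -> None:
        # Only a complete task gets the full check; a partial one can
        # still be serialized and reloaded without raising.
        if not self.all_folds_recorded:
            return
        for fold in range(self.n_folds):
            if not self.results.get(fold):
                raise ValueError(f"Fold {fold} has empty predictions")
```

This keeps the "complete benchmark" guarantee for the doc builder while removing the need to comment out the validation line.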