Predicting on large DF runs into infinite loop #272
Turns out that if I only use an `ElementProperty` featurizer with oxidation state guessing disabled:

```python
import automatminer as amm
import matminer as mm
import matminer.featurizers.composition  # ensure the submodule is loaded so mm.featurizers.composition resolves

featurizers = {
    "composition": [mm.featurizers.composition.ElementProperty.from_preset("magpie")],
    "structure": [],
}

pipe_config = {
    **amm.get_preset_config(),
    "autofeaturizer": amm.AutoFeaturizer(
        featurizers=featurizers,
        guess_oxistates=False,
    ),
}

pipe = amm.MatPipe(**pipe_config)
```
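For context, a minimal sketch of how such a pipe might then be fit and used for prediction; the dataframe names `train_df` and `predict_df` and the target column `"my_property"` are placeholders, not from the issue:

```python
# Hypothetical usage of the custom pipe above. `train_df` and `predict_df` are
# placeholder dataframes containing a "composition" column; "my_property" is a
# placeholder target column present in train_df.
pipe.fit(train_df, "my_property")       # fit the full pipeline on training data
predictions = pipe.predict(predict_df)  # featurize and predict on new data
```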
Hey @janosh thanks for the bug report. I've been aware of this problem for some time and am actually currently running some tests to try and pinpoint it. I think this is a bug with matminer and job parallelization.

**Some tests to try**

- Does running the bare featurizers (without automatminer) still have this problem? My guess is yes.
- If so, does setting n_jobs for an individual featurizer change the halting behavior whatsoever? My guess is that if you set n_jobs=1 the job will go very slowly but eventually finish, and if you turn n_jobs very high you increase the probability it halts indefinitely.
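A minimal sketch of the first suggested test, assuming a dataframe `df` with a pymatgen `Composition` column named `"composition"` (the dataframe and column names are placeholders):

```python
# Sketch of running the bare matminer featurizer, outside automatminer,
# while varying n_jobs to compare halting behavior. `df` is a placeholder
# dataframe with a pymatgen Composition column named "composition".
from matminer.featurizers.composition import ElementProperty

ep = ElementProperty.from_preset("magpie")
ep.set_n_jobs(1)  # try 1 first (slow but should finish), then a high value
df = ep.featurize_dataframe(df, col_id="composition")
```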
I've been trying to work around what might be a bug in (auto)matminer: making predictions on a large dataframe (around 80,000 rows) never finishes. I think the culprit might be guessing oxidation states, as that seems to take a long time and also increases rapidly in run time from one prediction to the next when slicing the dataframe into chunks and predicting on each chunk individually.
@ardunn I couldn't create a minimal example with dummy data that reproduces this issue, but maybe you can try to run this script and see if you experience the same issue.
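For reference, a rough sketch of the chunked prediction described above (the chunk count and variable names are placeholders; `pipe` is assumed to be an already-fitted MatPipe):

```python
# Rough sketch of predicting chunk by chunk on the ~80,000-row dataframe.
# `pipe` is an already-fitted MatPipe; the chunk count of 100 is arbitrary.
import numpy as np
import pandas as pd

chunks = np.array_split(df, 100)
predictions = pd.concat([pipe.predict(chunk) for chunk in chunks])
```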