Using UltraNest with an MPI-parallelized likelihood #153
Comments
It would be better not to do MPI within your likelihood (OpenMP is fine), because UltraNest already runs hundreds of likelihood evaluations in parallel, so an additional layer of MPI parallelisation will just confuse it. The error is strange. Can you paste the log file from the log_dir?
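To illustrate the suggestion above: UltraNest can evaluate many proposed points per call if the likelihood accepts a whole batch (its `vectorized=True` mode). Here is a minimal, hedged sketch of such a vectorized likelihood; the Gaussian form and parameter names are illustrative, not from this issue's actual code:

```python
import numpy as np

# Illustrative vectorized log-likelihood: receives a batch of points with
# shape (n_points, n_params) and returns one value per point, shape (n_points,).
def loglike_vectorized(params):
    return -0.5 * np.sum((params - 0.5) ** 2, axis=1)

def transform(cube):
    # identity prior transform on the unit cube, for illustration only
    return cube

# With ultranest installed, one would then run something like (a sketch,
# assuming the ReactiveNestedSampler vectorized interface):
# import ultranest
# sampler = ultranest.ReactiveNestedSampler(
#     ["a", "b"], loglike_vectorized, transform, vectorized=True)
# result = sampler.run()

batch = np.random.rand(100, 2)   # 100 proposed points, 2 parameters
logl = loglike_vectorized(batch)
print(logl.shape)                # one log-likelihood per point
```

With this shape contract, UltraNest itself can spread the batch over workers, so no MPI calls are needed inside the likelihood.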
Hi, thank you for the reply! To be exact, the script doesn't stop. The .log file content is:
It seems you are resuming, and this line is causing the issue. I don't quite see the benefit of using MPI within your likelihood when you can have UltraNest parallelise the likelihood, which saves communication overhead. Are there memory constraints that make you prefer MPI parallelisation within the likelihood and serial evaluation by UltraNest?
I'm not sure UltraNest is capable of parallelizing the likelihood the way I do in the example code. I have an update on my problem. If I remove these lines from the original code
the assertion error disappears, but then the code gets stuck here
I tried to set the value on that line to zero, and also on line 1516 (with options
Hi!
I'm trying to use UltraNest with a very expensive likelihood whose evaluation at a single point of the parameter space needs to be parallelized using MPI. The likelihood is automatically vectorized.
However, if I'm not mistaken, UltraNest uses MPI to parallelize some internal computation/live-point proposal. This conflict causes a bug in my program. Here's a dummy code that reproduces a heavy likelihood parallelized using MPI:
I was able to use this likelihood with emcee in the following way:
The idea of this script is that each MPI process runs an emcee sampler, so that each of them calls the likelihood function. This is necessary, because otherwise the non-root ranks would never compute the likelihood on their chunk of the data. However, only the root process stores the results and prints the progress. Also, inside the likelihood function, the parameters each rank uses to compute the likelihood are forced to be those of the root, for consistency between the samplers. This example works, and there is effectively a large speed-up in the code.
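The broadcast-and-reduce pattern described above can be sketched roughly as follows. All names here are illustrative, not from the actual script, and the sketch falls back to serial execution when mpi4py is unavailable:

```python
import numpy as np

# Sketch of the pattern: every rank enters the likelihood, the root's
# parameter vector is broadcast so all ranks agree, each rank scores its
# own chunk of the data, and a reduction sums the partial results.
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
except ImportError:
    comm, rank, size = None, 0, 1   # serial fallback for illustration

data = np.linspace(-1.0, 1.0, 1000)       # toy dataset
chunk = np.array_split(data, size)[rank]  # this rank's share of the data

def log_like(theta):
    if comm is not None:
        theta = comm.bcast(theta, root=0)  # force root's parameters everywhere
    mu, sigma = theta
    partial = -0.5 * np.sum(((chunk - mu) / sigma) ** 2)  # this rank's term
    if comm is not None:
        partial = comm.allreduce(partial)  # sum partial log-likelihoods
    return partial

value = log_like((0.0, 1.0))
```

Every rank must call `log_like` together, otherwise the collective `bcast`/`allreduce` calls deadlock, which is exactly why each MPI process has to drive its own sampler in this setup.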
I can't produce anything similar with UltraNest. I tried with this code:
The program runs when started with a single MPI process (`mpirun -np 1 python test.py`), but fails for `np > 1` with the error:
Does anyone know a workaround for this problem?
Thanks!