Defining the number of cores to utilize. #8
On spreg we've adopted as the default:

```python
def __init__(self, y, x, regimes, w, cores=None):
    ...
    pool = mp.Pool(cores)
```

I don't think the `if` statement is needed, and I haven't used it so far. In `mp.Pool(processes)`, `processes` is the number of worker processes to use; if `processes` is `None`, then the number returned by `cpu_count()` is used. So that `if` statement seems redundant.
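A minimal standard-library sketch illustrating the point above: `mp.Pool(processes=None)` sizes the pool from `cpu_count()` on its own, so no explicit guard is needed before constructing it (function names here are illustrative only):

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # processes=None lets Pool size itself from mp.cpu_count(),
    # so no "if cores is None" check is needed beforehand.
    with mp.Pool(processes=None) as pool:
        results = pool.map(square, range(5))
    print(results)  # [0, 1, 4, 9, 16]
```

On a machine with N cores this spawns N workers; passing an integer instead of `None` overrides that.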
That works for instances where

```python
def my_func(arg1, arg2, cores=mp.cpu_count()):
    pass
```

I am hesitant to just default to 1. In cases where we want to support IPython integration, 1 will crash on a dual-core machine. In cases where we access a slice by index (original post), 1 will crash.
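For contrast, a small sketch of the two defaulting styles discussed above (hypothetical function names, standard library only). One subtlety worth noting: a `cores=mp.cpu_count()` default is evaluated once, when the `def` statement runs, while `cores=None` defers the decision to `Pool` at call time:

```python
import multiprocessing as mp

def run_with_none(data, cores=None):
    # None is resolved to cpu_count() by Pool itself, at call time.
    with mp.Pool(processes=cores) as pool:
        return pool.map(abs, data)

def run_with_count(data, cores=mp.cpu_count()):
    # cpu_count() is evaluated once, when this def statement runs.
    with mp.Pool(processes=cores) as pool:
        return pool.map(abs, data)

if __name__ == "__main__":
    print(run_with_none([-1, 2, -3]))   # [1, 2, 3]
    print(run_with_count([-1, 2, -3]))  # [1, 2, 3]
```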
This hits what I see as the root questions: is PySAL going to support multiprocessing in trunk? (It looks like yes.) If so, are we going to make it a black box that just works, or leave the interfacing to the user? The former requires that we perform these checks, etc. The latter assumes that the developer using the library is fluent enough in the multiprocessing library not to break something.
We need to be careful using `ncores - 1` as the number of worker processes: it does not work on a dual-core machine when we use slice notation.
Probably the best bet, across pysal, is something like:
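One possible shape for such a default, sketched as a hypothetical helper (`resolve_cores` is not an actual PySAL function): honor an explicit request where possible, but clamp it to the machine's capacity and never return fewer than one worker:

```python
import multiprocessing as mp

def resolve_cores(cores=None):
    # Hypothetical helper: pick a safe worker count for mp.Pool().
    available = mp.cpu_count()
    if cores is None:
        return available                   # default: use every core
    return max(1, min(cores, available))   # clamp to [1, available]
```

A caller would then construct the pool as `mp.Pool(resolve_cores(cores))`.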