
REFACTOR-#000: Change IsMpiSpawnWorkers to MpiSpawn to make it more concise #382

Merged: 1 commit merged on Nov 10, 2023
2 changes: 1 addition & 1 deletion docs/flow/unidist/config.rst
@@ -48,7 +48,7 @@ Unidist Configuration Settings List
+-------------------------------+-------------------------------------------+--------------------------------------------------------------------------+
| DaskSchedulerAddress | UNIDIST_DASK_SCHEDULER_ADDRESS | Dask Scheduler address to connect to when running in Dask cluster |
+-------------------------------+-------------------------------------------+--------------------------------------------------------------------------+
| IsMpiSpawnWorkers | UNIDIST_IS_MPI_SPAWN_WORKERS | Whether to enable MPI spawn or not |
| MpiSpawn | UNIDIST_MPI_SPAWN | Whether to enable MPI spawn or not |
+-------------------------------+-------------------------------------------+--------------------------------------------------------------------------+
| MpiHosts | UNIDIST_MPI_HOSTS | MPI hosts to run unidist on |
+-------------------------------+-------------------------------------------+--------------------------------------------------------------------------+
20 changes: 10 additions & 10 deletions docs/using_unidist/unidist_on_mpi.rst
@@ -72,25 +72,25 @@ SPMD model
----------

First of all, to run unidist on MPI in a single node using `SPMD model`_,
you should set the ``UNIDIST_IS_MPI_SPAWN_WORKERS`` environment variable to ``False``:
you should set the ``UNIDIST_MPI_SPAWN`` environment variable to ``False``:

.. code-block:: bash

$ export UNIDIST_IS_MPI_SPAWN_WORKERS=False
$ export UNIDIST_MPI_SPAWN=False

.. code-block:: python

import os

os.environ["UNIDIST_IS_MPI_SPAWN_WORKERS"] = "False"
os.environ["UNIDIST_MPI_SPAWN"] = "False"

or set the associated configuration value:

.. code-block:: python

from unidist.config import IsMpiSpawnWorkers
from unidist.config import MpiSpawn

IsMpiSpawnWorkers.put(False)
MpiSpawn.put(False)

This prevents unidist from spawning MPI processes dynamically, since in the SPMD model the user spawns the processes themselves (e.g., via ``mpiexec``).
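For illustration, a complete SPMD launch might then look like the following; the script name and the process count are placeholders, not values prescribed by unidist:

.. code-block:: bash

   $ export UNIDIST_MPI_SPAWN=False
   $ mpiexec -n 4 python script.py

Here ``mpiexec`` starts all ranks up front, so unidist only attaches to the already running processes instead of spawning workers itself.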

@@ -169,25 +169,25 @@ SPMD model
""""""""""

First of all, to run unidist on MPI in a cluster using `SPMD model`_,
you should set the ``UNIDIST_IS_MPI_SPAWN_WORKERS`` environment variable to ``False``:
you should set the ``UNIDIST_MPI_SPAWN`` environment variable to ``False``:

.. code-block:: bash

$ export UNIDIST_IS_MPI_SPAWN_WORKERS=False
$ export UNIDIST_MPI_SPAWN=False

.. code-block:: python

import os

os.environ["UNIDIST_IS_MPI_SPAWN_WORKERS"] = "False"
os.environ["UNIDIST_MPI_SPAWN"] = "False"

or set the associated configuration value:

.. code-block:: python

from unidist.config import IsMpiSpawnWorkers
from unidist.config import MpiSpawn

IsMpiSpawnWorkers.put(False)
MpiSpawn.put(False)

This prevents unidist from spawning MPI processes dynamically, since in the SPMD model the user spawns the processes themselves (e.g., via ``mpiexec``).

4 changes: 2 additions & 2 deletions unidist/config/__init__.py
@@ -14,7 +14,7 @@
)
from .backends.dask import DaskMemoryLimit, IsDaskCluster, DaskSchedulerAddress
from .backends.mpi import (
IsMpiSpawnWorkers,
MpiSpawn,
MpiHosts,
MpiPickleThreshold,
MpiBackoff,
@@ -38,7 +38,7 @@
"DaskMemoryLimit",
"IsDaskCluster",
"DaskSchedulerAddress",
"IsMpiSpawnWorkers",
"MpiSpawn",
"MpiHosts",
"ValueSource",
"MpiPickleThreshold",
4 changes: 2 additions & 2 deletions unidist/config/backends/mpi/__init__.py
@@ -5,7 +5,7 @@
"""Config entities specific for MPI backend which can be used for unidist behavior tuning."""

from .envvars import (
IsMpiSpawnWorkers,
MpiSpawn,
MpiHosts,
MpiPickleThreshold,
MpiBackoff,
@@ -18,7 +18,7 @@
)

__all__ = [
"IsMpiSpawnWorkers",
"MpiSpawn",
"MpiHosts",
"MpiPickleThreshold",
"MpiBackoff",
4 changes: 2 additions & 2 deletions unidist/config/backends/mpi/envvars.py
@@ -7,11 +7,11 @@
from unidist.config.parameter import EnvironmentVariable, ExactStr


class IsMpiSpawnWorkers(EnvironmentVariable, type=bool):
class MpiSpawn(EnvironmentVariable, type=bool):
"""Whether to enable MPI spawn or not."""

default = True
varname = "UNIDIST_IS_MPI_SPAWN_WORKERS"
varname = "UNIDIST_MPI_SPAWN"


class MpiHosts(EnvironmentVariable, type=ExactStr):
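Since `MpiSpawn` is declared with `type=bool`, the environment variable's string value has to be interpreted as a boolean. The helper below is a rough, hypothetical sketch of how such a `type=bool` parameter could be read; the function name and the set of accepted literals are assumptions, not unidist's actual implementation:

```python
import os

def get_bool_env(varname, default=True):
    # Hypothetical helper: interpret an environment variable as a boolean,
    # the way a `type=bool` config parameter might. The accepted literals
    # ("true"/"1"/"yes") are an assumption for illustration.
    raw = os.environ.get(varname)
    if raw is None:
        return default
    return raw.strip().lower() in ("true", "1", "yes")

os.environ["UNIDIST_MPI_SPAWN"] = "False"
print(get_bool_env("UNIDIST_MPI_SPAWN"))
```

With `UNIDIST_MPI_SPAWN=False` set, the helper returns `False`; when the variable is unset, it falls back to the declared default (`True` for `MpiSpawn`).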
4 changes: 2 additions & 2 deletions unidist/core/backends/mpi/core/common.py
@@ -8,7 +8,7 @@
import inspect
import weakref

from unidist.config.backends.mpi.envvars import IsMpiSpawnWorkers
from unidist.config.backends.mpi.envvars import MpiSpawn
from unidist.core.backends.mpi.utils import ImmutableDict

try:
@@ -485,7 +485,7 @@ def is_shared_memory_supported():
# Mpich shared memory does not work with spawned processes prior to version 4.2.0.
if (
"MPICH" in MPI.Get_library_version()
and IsMpiSpawnWorkers.get()
and MpiSpawn.get()
and not check_mpich_version("4.2.0")
):
return False
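The guard above disables shared memory only when all three conditions hold: the library is MPICH, workers were spawned (`MpiSpawn.get()`), and the MPICH version is older than 4.2.0. The version comparison can be sketched as below; this is a hypothetical stand-in for `check_mpich_version`, whose real implementation may differ:

```python
def version_at_least(version, minimum):
    # Hypothetical sketch of a check like `check_mpich_version("4.2.0")`:
    # compare dotted version strings numerically, component by component,
    # so that e.g. "4.10.0" correctly compares greater than "4.2.0".
    def parse(v):
        return tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

print(version_at_least("4.2.0", "4.2.0"))
print(version_at_least("3.4.1", "4.2.0"))
```

Comparing parsed tuples rather than raw strings avoids the classic lexicographic trap where `"4.10.0" < "4.2.0"`.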
4 changes: 2 additions & 2 deletions unidist/core/backends/mpi/core/communication.py
@@ -23,7 +23,7 @@
deserialize_complex_data,
)
import unidist.core.backends.mpi.core.common as common
from unidist.config.backends.mpi.envvars import IsMpiSpawnWorkers, MpiHosts
from unidist.config.backends.mpi.envvars import MpiSpawn, MpiHosts

# TODO: Find a way to move this after all imports
mpi4py.rc(recv_mprobe=False, initialize=False)
@@ -132,7 +132,7 @@ def __init__(self, comm):
self.host_by_rank[global_rank] = host

mpi_hosts = MpiHosts.get()
if mpi_hosts is not None and IsMpiSpawnWorkers.get():
if mpi_hosts is not None and MpiSpawn.get():
host_list = mpi_hosts.split(",")
host_count = len(host_list)

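When `MpiHosts` is set and workers are spawned, the comma-separated host list is split and ranks are distributed across the hosts. A minimal sketch of one plausible distribution scheme (round-robin) is shown below; unidist's actual rank-to-host assignment may differ, and the function name is an assumption:

```python
def map_ranks_to_hosts(mpi_hosts, nranks):
    # Hypothetical sketch: split the comma-separated UNIDIST_MPI_HOSTS value
    # and assign ranks to hosts round-robin. This illustrates the shape of
    # the mapping, not unidist's exact placement policy.
    host_list = mpi_hosts.split(",")
    return {rank: host_list[rank % len(host_list)] for rank in range(nranks)}

print(map_ranks_to_hosts("node1,node2", 4))
```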
12 changes: 6 additions & 6 deletions unidist/core/backends/mpi/core/controller/api.py
@@ -34,7 +34,7 @@
from unidist.core.backends.mpi.core.async_operations import AsyncOperations
from unidist.config import (
CpuCount,
IsMpiSpawnWorkers,
MpiSpawn,
MpiHosts,
ValueSource,
MpiPickleThreshold,
@@ -145,16 +145,16 @@ def init():

# Path to dynamically spawn MPI processes.
# If a requirement is not met, processes have been started with mpiexec -n <N>, where N > 1.
if rank == 0 and parent_comm == MPI.COMM_NULL and IsMpiSpawnWorkers.get():
if rank == 0 and parent_comm == MPI.COMM_NULL and MpiSpawn.get():
args = _get_py_flags()
args += ["-c"]
py_str = [
"import unidist",
"import unidist.config as cfg",
"cfg.Backend.put('mpi')",
]
if IsMpiSpawnWorkers.get_value_source() != ValueSource.DEFAULT:
py_str += [f"cfg.IsMpiSpawnWorkers.put({IsMpiSpawnWorkers.get()})"]
if MpiSpawn.get_value_source() != ValueSource.DEFAULT:
py_str += [f"cfg.MpiSpawn.put({MpiSpawn.get()})"]
if MpiHosts.get_value_source() != ValueSource.DEFAULT:
py_str += [f"cfg.MpiHosts.put('{MpiHosts.get()}')"]
if CpuCount.get_value_source() != ValueSource.DEFAULT:
@@ -272,7 +272,7 @@ def init():
# If the user executes a program in SPMD mode,
# we do not want workers to continue the flow after `unidist.init()`
# so just killing them.
if not IsMpiSpawnWorkers.get():
if not MpiSpawn.get():
sys.exit()
return
else:
@@ -282,7 +282,7 @@ def init():
# If the user executes a program in SPMD mode,
# we do not want workers to continue the flow after `unidist.init()`
# so just killing them.
if not IsMpiSpawnWorkers.get():
if not MpiSpawn.get():
sys.exit()
return

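In the spawn path above, rank 0 builds a small Python program (`py_str`) that re-applies every non-default config value inside the spawned workers, since spawned processes do not inherit the parent's in-process configuration. The idea can be sketched as follows; the function name is hypothetical and the snippet only mirrors the string-building pattern, not unidist's full spawn logic:

```python
def build_worker_program(overrides):
    # Hypothetical sketch: assemble the `python -c` program that spawned
    # workers would run, forwarding only the config values the user
    # explicitly changed (ValueSource != DEFAULT in the real code).
    py_str = [
        "import unidist",
        "import unidist.config as cfg",
        "cfg.Backend.put('mpi')",
    ]
    for name, value in overrides.items():
        py_str.append(f"cfg.{name}.put({value!r})")
    py_str.append("unidist.init()")
    return "; ".join(py_str)

print(build_worker_program({"MpiSpawn": False, "CpuCount": 8}))
```

Forwarding only non-default values keeps the command line short and lets the workers fall back to the same defaults as the parent for everything else.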