Commit
REFACTOR-#000: Change IsMpiSpawnWorkers to MpiSpawn to make it more concise

Signed-off-by: Igoshev, Iaroslav <[email protected]>
YarShev committed Nov 7, 2023
1 parent f50fc44 commit 04892b3
Showing 7 changed files with 25 additions and 25 deletions.
2 changes: 1 addition & 1 deletion docs/flow/unidist/config.rst
@@ -48,7 +48,7 @@ Unidist Configuration Settings List
+-------------------------------+-------------------------------------------+--------------------------------------------------------------------------+
| DaskSchedulerAddress          | UNIDIST_DASK_SCHEDULER_ADDRESS            | Dask Scheduler address to connect to when running in Dask cluster        |
+-------------------------------+-------------------------------------------+--------------------------------------------------------------------------+
-| IsMpiSpawnWorkers             | UNIDIST_IS_MPI_SPAWN_WORKERS              | Whether to enable MPI spawn or not                                       |
+| MpiSpawn                      | UNIDIST_MPI_SPAWN                         | Whether to enable MPI spawn or not                                       |
+-------------------------------+-------------------------------------------+--------------------------------------------------------------------------+
| MpiHosts                      | UNIDIST_MPI_HOSTS                         | MPI hosts to run unidist on                                              |
+-------------------------------+-------------------------------------------+--------------------------------------------------------------------------+
20 changes: 10 additions & 10 deletions docs/using_unidist/unidist_on_mpi.rst
@@ -71,25 +71,25 @@ SPMD model
----------

First of all, to run unidist on MPI in a single node using `SPMD model <https://en.wikipedia.org/wiki/Single_program,_multiple_data>`_,
-you should set the ``UNIDIST_IS_MPI_SPAWN_WORKERS`` environment variable to ``False``:
+you should set the ``UNIDIST_MPI_SPAWN`` environment variable to ``False``:

.. code-block:: bash
-$ export UNIDIST_IS_MPI_SPAWN_WORKERS=False
+$ export UNIDIST_MPI_SPAWN=False
.. code-block:: python
import os
-os.environ["UNIDIST_IS_MPI_SPAWN_WORKERS"] = "False"
+os.environ["UNIDIST_MPI_SPAWN"] = "False"
or set the associated configuration value:

.. code-block:: python
-from unidist.config import IsMpiSpawnWorkers
+from unidist.config import MpiSpawn
-IsMpiSpawnWorkers.put(False)
+MpiSpawn.put(False)
This prevents unidist from spawning MPI processes dynamically, since the user starts the processes themselves.
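The effect of this setting can be sketched with a small, self-contained helper. ``env_to_bool`` below is hypothetical, not unidist API; the real parsing happens inside unidist's ``EnvironmentVariable`` machinery:

```python
import os

# Hypothetical helper (not unidist API) showing how a boolean environment
# variable such as UNIDIST_MPI_SPAWN might be interpreted.
def env_to_bool(varname, default=True):
    raw = os.environ.get(varname)
    if raw is None:
        return default
    return raw.strip().lower() in ("true", "1", "yes")

# SPMD mode: the user starts the processes, so unidist must not spawn workers.
os.environ["UNIDIST_MPI_SPAWN"] = "False"
print(env_to_bool("UNIDIST_MPI_SPAWN"))  # prints False
```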

@@ -146,25 +146,25 @@ SPMD model
""""""""""

First of all, to run unidist on MPI in a cluster using `SPMD model <https://en.wikipedia.org/wiki/Single_program,_multiple_data>`_,
-you should set the ``UNIDIST_IS_MPI_SPAWN_WORKERS`` environment variable to ``False``:
+you should set the ``UNIDIST_MPI_SPAWN`` environment variable to ``False``:

.. code-block:: bash
-$ export UNIDIST_IS_MPI_SPAWN_WORKERS=False
+$ export UNIDIST_MPI_SPAWN=False
.. code-block:: python
import os
-os.environ["UNIDIST_IS_MPI_SPAWN_WORKERS"] = "False"
+os.environ["UNIDIST_MPI_SPAWN"] = "False"
or set the associated configuration value:

.. code-block:: python
-from unidist.config import IsMpiSpawnWorkers
+from unidist.config import MpiSpawn
-IsMpiSpawnWorkers.put(False)
+MpiSpawn.put(False)
This prevents unidist from spawning MPI processes dynamically, since the user starts the processes themselves.

4 changes: 2 additions & 2 deletions unidist/config/__init__.py
@@ -14,7 +14,7 @@
)
from .backends.dask import DaskMemoryLimit, IsDaskCluster, DaskSchedulerAddress
from .backends.mpi import (
-IsMpiSpawnWorkers,
+MpiSpawn,
MpiHosts,
MpiPickleThreshold,
MpiBackoff,
@@ -38,7 +38,7 @@
"DaskMemoryLimit",
"IsDaskCluster",
"DaskSchedulerAddress",
-"IsMpiSpawnWorkers",
+"MpiSpawn",
"MpiHosts",
"ValueSource",
"MpiPickleThreshold",
4 changes: 2 additions & 2 deletions unidist/config/backends/mpi/__init__.py
@@ -5,7 +5,7 @@
"""Config entities specific for MPI backend which can be used for unidist behavior tuning."""

from .envvars import (
-IsMpiSpawnWorkers,
+MpiSpawn,
MpiHosts,
MpiPickleThreshold,
MpiBackoff,
@@ -18,7 +18,7 @@
)

__all__ = [
-"IsMpiSpawnWorkers",
+"MpiSpawn",
"MpiHosts",
"MpiPickleThreshold",
"MpiBackoff",
4 changes: 2 additions & 2 deletions unidist/config/backends/mpi/envvars.py
@@ -7,11 +7,11 @@
from unidist.config.parameter import EnvironmentVariable, ExactStr


-class IsMpiSpawnWorkers(EnvironmentVariable, type=bool):
+class MpiSpawn(EnvironmentVariable, type=bool):
"""Whether to enable MPI spawn or not."""

default = True
-varname = "UNIDIST_IS_MPI_SPAWN_WORKERS"
+varname = "UNIDIST_MPI_SPAWN"


class MpiHosts(EnvironmentVariable, type=ExactStr):
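The boolean ``EnvironmentVariable`` pattern in ``envvars.py`` above can be sketched as a minimal, self-contained class. The names below are illustrative only; unidist's real machinery lives in ``unidist.config.parameter`` and also tracks where each value came from:

```python
import os

# Minimal sketch of an EnvironmentVariable-style boolean config class.
# Class names are illustrative; this is not unidist's implementation.
class BoolEnvVariable:
    varname = None   # environment variable backing this setting
    default = True
    _value = None    # cached value once read from the env or put()

    @classmethod
    def get(cls):
        # Read the env var lazily on first access, then serve the cache.
        if cls._value is None:
            raw = os.environ.get(cls.varname)
            cls._value = cls.default if raw is None else raw.lower() == "true"
        return cls._value

    @classmethod
    def put(cls, value):
        # Programmatic override, analogous to MpiSpawn.put(False).
        cls._value = bool(value)


class MpiSpawnSketch(BoolEnvVariable):
    """Sketch of the renamed MpiSpawn config entity."""
    default = True
    varname = "UNIDIST_MPI_SPAWN"
```

With this sketch, ``MpiSpawnSketch.put(False)`` followed by ``MpiSpawnSketch.get()`` returns ``False``, mirroring how ``MpiSpawn.put(False)`` disables dynamic spawning.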
4 changes: 2 additions & 2 deletions unidist/core/backends/mpi/core/common.py
@@ -8,7 +8,7 @@
import inspect
import weakref

-from unidist.config.backends.mpi.envvars import IsMpiSpawnWorkers
+from unidist.config.backends.mpi.envvars import MpiSpawn
from unidist.core.backends.mpi.utils import ImmutableDict

try:
@@ -485,7 +485,7 @@ def is_shared_memory_supported():
# Mpich shared memory does not work with spawned processes prior to version 4.2.0.
if (
"MPICH" in MPI.Get_library_version()
-and IsMpiSpawnWorkers.get()
+and MpiSpawn.get()
and not check_mpich_version("4.2.0")
):
return False
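The MPICH version gate inside ``is_shared_memory_supported`` above can be sketched as standalone functions. The names here are illustrative stand-ins; the real code reads ``MPI.Get_library_version()`` and uses ``check_mpich_version``:

```python
def version_at_least(version, minimum):
    """Compare dotted version strings numerically, so '4.10.0' >= '4.2.0'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)


def shared_memory_supported(library, version, spawn):
    """Sketch of the gate: library/version/spawn stand in for the real inputs."""
    # MPICH shared memory does not work with spawned processes prior to 4.2.0.
    if "MPICH" in library and spawn and not version_at_least(version, "4.2.0"):
        return False
    return True
```

Comparing versions as integer tuples rather than strings matters here: a plain string comparison would wrongly rank ``"4.10.0"`` below ``"4.2.0"``.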
12 changes: 6 additions & 6 deletions unidist/core/backends/mpi/core/controller/api.py
@@ -34,7 +34,7 @@
from unidist.core.backends.mpi.core.async_operations import AsyncOperations
from unidist.config import (
CpuCount,
-IsMpiSpawnWorkers,
+MpiSpawn,
MpiHosts,
ValueSource,
MpiPickleThreshold,
@@ -145,16 +145,16 @@ def init():

# Path to dynamically spawn MPI processes.
# If any requirement is not met, the processes were already started with mpiexec -n <N>, where N > 1.
-if rank == 0 and parent_comm == MPI.COMM_NULL and IsMpiSpawnWorkers.get():
+if rank == 0 and parent_comm == MPI.COMM_NULL and MpiSpawn.get():
args = _get_py_flags()
args += ["-c"]
py_str = [
"import unidist",
"import unidist.config as cfg",
"cfg.Backend.put('mpi')",
]
-if IsMpiSpawnWorkers.get_value_source() != ValueSource.DEFAULT:
-py_str += [f"cfg.IsMpiSpawnWorkers.put({IsMpiSpawnWorkers.get()})"]
+if MpiSpawn.get_value_source() != ValueSource.DEFAULT:
+py_str += [f"cfg.MpiSpawn.put({MpiSpawn.get()})"]
if MpiHosts.get_value_source() != ValueSource.DEFAULT:
py_str += [f"cfg.MpiHosts.put('{MpiHosts.get()}')"]
if CpuCount.get_value_source() != ValueSource.DEFAULT:
@@ -276,7 +276,7 @@ def init():
# If the user executes a program in SPMD mode,
# we do not want workers to continue the flow after `unidist.init()`
# so we just kill them.
-if not IsMpiSpawnWorkers.get():
+if not MpiSpawn.get():
sys.exit()
return
else:
@@ -286,7 +286,7 @@
# If the user executes a program in SPMD mode,
# we do not want workers to continue the flow after `unidist.init()`
# so we just kill them.
-if not IsMpiSpawnWorkers.get():
+if not MpiSpawn.get():
sys.exit()
return

