Merge branch 'master' into benc-python3.13

benclifford authored Dec 14, 2024
2 parents 2e2483e + 1821120 commit a020aea
Showing 148 changed files with 1,039 additions and 2,735 deletions.
1 change: 1 addition & 0 deletions .github/workflows/ci.yaml
@@ -5,6 +5,7 @@ on:
     types:
       - opened
       - synchronize
+  merge_group:

 jobs:
   main-test-suite:
1 change: 0 additions & 1 deletion .wci.yml
@@ -33,7 +33,6 @@ execution_environment:
   - Slurm
   - LSF
   - PBS
-  - Cobalt
   - Flux
   - GridEngine
   - HTCondor
5 changes: 4 additions & 1 deletion README.rst
@@ -1,6 +1,6 @@
 Parsl - Parallel Scripting Library
 ==================================
-|licence| |docs| |NSF-1550588| |NSF-1550476| |NSF-1550562| |NSF-1550528| |CZI-EOSS|
+|licence| |docs| |NSF-1550588| |NSF-1550476| |NSF-1550562| |NSF-1550528| |NumFOCUS| |CZI-EOSS|

 Parsl extends parallelism in Python beyond a single computer.

@@ -64,6 +64,9 @@ then explore the `parallel computing patterns <https://parsl.readthedocs.io/en/s
 .. |CZI-EOSS| image:: https://chanzuckerberg.github.io/open-science/badges/CZI-EOSS.svg
    :target: https://czi.co/EOSS
    :alt: CZI's Essential Open Source Software for Science
+.. |NumFOCUS| image:: https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A
+   :target: https://numfocus.org
+   :alt: Powered by NumFOCUS


 Quickstart
6 changes: 3 additions & 3 deletions docs/faq.rst
@@ -257,15 +257,15 @@ There are a few common situations in which a Parsl script might hang:

 .. code-block:: python

-    from libsubmit.providers import Cobalt
     from parsl.config import Config
+    from parsl.providers import SlurmProvider
     from parsl.executors import HighThroughputExecutor

     config = Config(
         executors=[
             HighThroughputExecutor(
-                label='ALCF_theta_local',
-                provider=Cobalt(),
+                label='htex',
+                provider=SlurmProvider(),
                 worker_port_range=(50000, 55000),
                 interchange_port_range=(50000, 55000)
             )
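For reference, the FAQ snippet above corresponds to a self-contained configuration along these lines. This is a sketch, not the repo's verbatim example: the `htex` label is illustrative and the 50000-55000 range stands in for whatever ports your firewall actually allows.

```python
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import SlurmProvider

# Pin both the worker and interchange ports to an open range so that
# workers on compute nodes can reach the interchange on the submit host.
config = Config(
    executors=[
        HighThroughputExecutor(
            label='htex',                           # illustrative label
            provider=SlurmProvider(),
            worker_port_range=(50000, 55000),       # ports workers may bind
            interchange_port_range=(50000, 55000),  # ports the interchange may bind
        )
    ]
)
```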
1 change: 1 addition & 0 deletions docs/index.rst
@@ -107,6 +107,7 @@ Table of Contents

   quickstart
   1-parsl-introduction.ipynb
   userguide/index
+  userguide/glossary
   faq
   reference
   devguide/index
13 changes: 9 additions & 4 deletions docs/quickstart.rst
@@ -176,12 +176,17 @@ This script runs on a system that must stay on-line until all of your tasks comp
 much computing power, such as the login node for a supercomputer.

 The :class:`~parsl.config.Config` object holds definitions of Executors and the Providers and Launchers they rely on.
-An example which launches 512 workers on 128 nodes of the Polaris supercomputer looks like
+An example which launches 4 workers on 1 node of the Polaris supercomputer looks like

 .. code-block:: python

+    from parsl import Config
+    from parsl.executors import HighThroughputExecutor
+    from parsl.providers import PBSProProvider
+    from parsl.launchers import MpiExecLauncher

     config = Config(
-        retires=1,  # Restart task if they fail once
+        retries=1,  # Restart tasks if they fail once
         executors=[
             HighThroughputExecutor(
                 available_accelerators=4,  # Maps one worker per GPU
@@ -191,13 +196,13 @@
                 account="example",
                 worker_init="module load conda; conda activate parsl",
                 walltime="1:00:00",
-                queue="prod",
+                queue="debug",
                 scheduler_options="#PBS -l filesystems=home:eagle",  # Change if data on other filesystem
                 launcher=MpiExecLauncher(
                     bind_cmd="--cpu-bind", overrides="--depth=64 --ppn 1"
                 ),  # Ensures 1 manager per node and allows it to divide work to all 64 cores
                 select_options="ngpus=4",
-                nodes_per_block=128,
+                nodes_per_block=1,
                 cpus_per_node=64,
             ),
         ),
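The worker counts in these two quickstart variants follow directly from the block size: with `available_accelerators=4`, Parsl places one worker per GPU on each node, so one node yields 4 workers and the earlier 128-node example yielded 512. A quick sketch of that arithmetic (`total_workers` is a hypothetical helper for illustration, not a Parsl API):

```python
def total_workers(nodes_per_block: int, accelerators_per_node: int) -> int:
    """One worker per accelerator on every node in the block."""
    return nodes_per_block * accelerators_per_node

print(total_workers(1, 4))    # 4 workers: the new quickstart example
print(total_workers(128, 4))  # 512 workers: the previous example
```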
9 changes: 0 additions & 9 deletions docs/reference.rst
@@ -115,7 +115,6 @@ Providers
   :nosignatures:

   parsl.providers.AWSProvider
-  parsl.providers.CobaltProvider
   parsl.providers.CondorProvider
   parsl.providers.GoogleCloudProvider
   parsl.providers.GridEngineProvider

@@ -170,14 +169,6 @@ Exceptions
   parsl.providers.errors.ScaleOutFailed
   parsl.providers.errors.SchedulerMissingArgs
   parsl.providers.errors.ScriptPathError
-  parsl.channels.errors.ChannelError
-  parsl.channels.errors.BadHostKeyException
-  parsl.channels.errors.BadScriptPath
-  parsl.channels.errors.BadPermsScriptPath
-  parsl.channels.errors.FileExists
-  parsl.channels.errors.AuthException
-  parsl.channels.errors.SSHException
-  parsl.channels.errors.FileCopyException
   parsl.executors.high_throughput.errors.WorkerLost
   parsl.executors.high_throughput.interchange.ManagerLost
   parsl.serialize.errors.DeserializationError
9 changes: 2 additions & 7 deletions docs/userguide/configuring.rst
@@ -9,7 +9,7 @@ environment. Configuration is described by a Python object (:class:`~parsl.confi
 so that developers can
 introspect permissible options, validate settings, and retrieve/edit
 configurations dynamically during execution. A configuration object specifies
-details of the provider, executors, connection channel, allocation size,
+details of the provider, executors, allocation size,
 queues, durations, and data management options.

 The following example shows a basic configuration object (:class:`~parsl.config.Config`) for the Frontera

@@ -123,9 +123,6 @@ Stepping through the following question should help formulate a suitable configu
 | Torque/PBS based    | * `parsl.executors.HighThroughputExecutor`   | `parsl.providers.TorqueProvider`       |
 | system              | * `parsl.executors.WorkQueueExecutor`        |                                        |
 +---------------------+-----------------------------------------------+----------------------------------------+
-| Cobalt based system | * `parsl.executors.HighThroughputExecutor`   | `parsl.providers.CobaltProvider`       |
-|                     | * `parsl.executors.WorkQueueExecutor`        |                                        |
-+---------------------+-----------------------------------------------+----------------------------------------+
 | GridEngine based    | * `parsl.executors.HighThroughputExecutor`   | `parsl.providers.GridEngineProvider`   |
 | system              | * `parsl.executors.WorkQueueExecutor`        |                                        |
 +---------------------+-----------------------------------------------+----------------------------------------+

@@ -185,8 +182,6 @@ Stepping through the following question should help formulate a suitable configu
 | `parsl.providers.TorqueProvider`    | Any                      | * `parsl.launchers.AprunLauncher`                  |
 |                                     |                          | * `parsl.launchers.MpiExecLauncher`                |
 +-------------------------------------+--------------------------+----------------------------------------------------+
-| `parsl.providers.CobaltProvider`    | Any                      | * `parsl.launchers.AprunLauncher`                  |
-+-------------------------------------+--------------------------+----------------------------------------------------+
 | `parsl.providers.SlurmProvider`     | Any                      | * `parsl.launchers.SrunLauncher` if native slurm   |
 |                                     |                          | * `parsl.launchers.AprunLauncher`, otherwise       |
 +-------------------------------------+--------------------------+----------------------------------------------------+

@@ -492,7 +487,7 @@ CC-IN2P3

 .. image:: https://cc.in2p3.fr/wp-content/uploads/2017/03/bandeau_accueil.jpg

 The snippet below shows an example configuration for executing from a login node on IN2P3's Computing Centre.
-The configuration uses the `parsl.providers.LocalProvider` to run on a login node primarily to avoid GSISSH, which Parsl does not support yet.
+The configuration uses the `parsl.providers.LocalProvider` to run on a login node primarily to avoid GSISSH, which Parsl does not support.
 This system uses Grid Engine which Parsl interfaces with using the `parsl.providers.GridEngineProvider`.

 .. literalinclude:: ../../parsl/configs/cc_in2p3.py
20 changes: 9 additions & 11 deletions docs/userguide/execution.rst
@@ -27,8 +27,7 @@ retrieve the status of an allocation (e.g., squeue), and cancel a running
 job (e.g., scancel). Parsl implements providers for local execution
 (fork), for various cloud platforms using cloud-specific APIs, and
 for clusters and supercomputers that use a Local Resource Manager
-(LRM) to manage access to resources, such as Slurm, HTCondor,
-and Cobalt.
+(LRM) to manage access to resources, such as Slurm and HTCondor.

 Each provider implementation may allow users to specify additional parameters for further configuration. Parameters are generally mapped to LRM submission script or cloud API options.
 Examples of LRM-specific options are partition, wall clock time,

@@ -39,15 +38,14 @@ parameters include access keys, instance type, and spot bid price

 Parsl currently supports the following providers:

 1. `parsl.providers.LocalProvider`: The provider allows you to run locally on your laptop or workstation.
-2. `parsl.providers.CobaltProvider`: This provider allows you to schedule resources via the Cobalt scheduler. **This provider is deprecated and will be removed by 2024.04**.
-3. `parsl.providers.SlurmProvider`: This provider allows you to schedule resources via the Slurm scheduler.
-4. `parsl.providers.CondorProvider`: This provider allows you to schedule resources via the Condor scheduler.
-5. `parsl.providers.GridEngineProvider`: This provider allows you to schedule resources via the GridEngine scheduler.
-6. `parsl.providers.TorqueProvider`: This provider allows you to schedule resources via the Torque scheduler.
-7. `parsl.providers.AWSProvider`: This provider allows you to provision and manage cloud nodes from Amazon Web Services.
-8. `parsl.providers.GoogleCloudProvider`: This provider allows you to provision and manage cloud nodes from Google Cloud.
-9. `parsl.providers.KubernetesProvider`: This provider allows you to provision and manage containers on a Kubernetes cluster.
-10. `parsl.providers.LSFProvider`: This provider allows you to schedule resources via IBM's LSF scheduler.
+2. `parsl.providers.SlurmProvider`: This provider allows you to schedule resources via the Slurm scheduler.
+3. `parsl.providers.CondorProvider`: This provider allows you to schedule resources via the Condor scheduler.
+4. `parsl.providers.GridEngineProvider`: This provider allows you to schedule resources via the GridEngine scheduler.
+5. `parsl.providers.TorqueProvider`: This provider allows you to schedule resources via the Torque scheduler.
+6. `parsl.providers.AWSProvider`: This provider allows you to provision and manage cloud nodes from Amazon Web Services.
+7. `parsl.providers.GoogleCloudProvider`: This provider allows you to provision and manage cloud nodes from Google Cloud.
+8. `parsl.providers.KubernetesProvider`: This provider allows you to provision and manage containers on a Kubernetes cluster.
+9. `parsl.providers.LSFProvider`: This provider allows you to schedule resources via IBM's LSF scheduler.
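To make the provider abstraction described above concrete, here is a minimal configuration sketch using `parsl.providers.LocalProvider`, the simplest provider in the list. The label and block counts are illustrative; swapping in `SlurmProvider`, `CondorProvider`, etc. is how the same executor targets an LRM instead of the local machine.

```python
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import LocalProvider

# LocalProvider forks workers on the current machine (no scheduler needed),
# which makes it suitable for laptops, workstations, and login-node testing.
config = Config(
    executors=[
        HighThroughputExecutor(
            label='local_htex',      # illustrative label
            provider=LocalProvider(
                init_blocks=1,       # start one block of workers immediately
                max_blocks=1,        # never scale beyond a single block
            ),
        )
    ]
)
```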