Remove issue363 test marker (#3329)

This marker originally referred to issue #363 and marked tests which should
not be expected to pass when running in an environment where stderr and
stdout are not staged back to the submitting system by the executor.

Since then, its use expanded to refer more generally to any test that had a
problem in environments which did not make output files or stdout/stderr
properly available on the submitting system, even when the core of Parsl was
behaving properly. An example would be running in a multi-site environment
without working file staging.

However, the consensus in the Parsl community is now that running with a
shared filesystem is the core supported usage mode of Parsl. File staging has
also become much stronger since then, both in Parsl core and in individual
executors. That makes it a much more reasonable test requirement that a file
created by a test can be expected to be available for the test to inspect,
and that an environment which does not support this should not expect the
test suite (or Parsl) to work.

Because of all of that, this PR removes the issue363 test marker entirely.
The marker was already removed from the Work Queue and Task Vine test
environments in PR #3327, leaving only the Radical-PILOT test environment
using it.

It turns out that only two tests fail in the Radical-PILOT environment when
the issue363 deselection is removed:

One requires support for the slightly awkward "stdout/err opening modes,
specified by tuples" feature; this PR labels and disables that test using a
new 'executor_supports_std_stream_tuples' label. (In general, an executor
which stages stdout/err streams is unlikely to support the features offered
by std stream tuples, such as appending to stderr files from multiple tasks,
so this label is not Radical-PILOT specific.)
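
As background, here is a hedged sketch, not code from this commit, of what
the tuple form of stdout/stderr looks like from user code; the
(filename, open-mode) reading of the tuple follows the Parsl documentation
for bash apps, and the local-threads config import is the standard example
config, so treat the details as assumptions:

import parsl
from parsl import bash_app
from parsl.configs.local_threads import config

parsl.load(config)


@bash_app
def echo(msg, stdout=None, stderr=None):
    return f"echo {msg}"


# Two tasks append to the same stdout file via (filename, mode) tuples.
# An executor that stages stdout/stderr back as whole files, rather than
# writing them in place, generally cannot honour the append mode, which is
# why the new executor_supports_std_stream_tuples label exists.
echo("first", stdout=("std.out", "a")).result()
echo("second", stdout=("std.out", "a")).result()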

Another test appears to expose broken exception handling in the
Radical-PILOT executor, so it is labelled with a new issue-specific marker,
issue3328, named after the bug report for that problem, issue #3328.
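
To make the second case concrete, here is a rough sketch of that labelling;
the real change is in parsl/tests/test_bash_apps/test_stdout.py further down,
and the test body is elided here:

import pytest


# Tests broken by a known bug carry an issue-specific marker, so an affected
# environment can deselect them with -k "not issue3328" until issue #3328 is
# fixed.
@pytest.mark.issue3328
def test_bad_stderr_file():
    ...
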
benclifford authored Apr 9, 2024
1 parent 026e014 commit fe9145b
Showing 13 changed files with 11 additions and 30 deletions.
4 changes: 0 additions & 4 deletions CONTRIBUTING.rst
@@ -72,10 +72,6 @@ making FTP transfers). When the test environment (github actions) does not
provide a sufficiently clean network, run all tests with ``-k "not cleannet"`` to
disable those tests.

A pytest marker of ``issue363`` can be used to select or deselect tests
that will fail because of issue 363 when running without a shared file
system.

Some other markers are available but unused in testing;
see ``pytest --markers parsl/tests/`` for more details.

2 changes: 1 addition & 1 deletion Makefile
@@ -76,7 +76,7 @@ wqex_local_test: $(CCTOOLS_INSTALL) ## run all tests with workqueue_ex config
radical_local_test:
pip3 install ".[radical-pilot]"
mkdir -p ~/.radical/pilot/configs && echo '{"localhost": {"virtenv_mode": "local"}}' > ~/.radical/pilot/configs/resource_local.json
pytest parsl/tests/ -k "not cleannet and not issue363" --config parsl/tests/configs/local_radical.py --random-order --durations 10
pytest parsl/tests/ -k "not cleannet and not issue3328 and not executor_supports_std_stream_tuples" --config parsl/tests/configs/local_radical.py --random-order --durations 10

.PHONY: config_local_test
config_local_test:
12 changes: 8 additions & 4 deletions parsl/tests/conftest.py
@@ -137,10 +137,6 @@ def pytest_configure(config):
'markers',
'cleannet: Enable tests that require a clean network connection (such as for testing FTP)'
)
config.addinivalue_line(
'markers',
'issue363: Marks tests that require a shared filesystem for stdout/stderr - see issue #363'
)
config.addinivalue_line(
'markers',
'staging_required: Marks tests that require a staging provider, when there is no sharedFS)'
@@ -153,6 +149,14 @@ def pytest_configure(config):
'markers',
'multiple_cores_required: Marks tests that require multiple cores, such as htex affinity'
)
config.addinivalue_line(
'markers',
'issue3328: Marks tests broken by issue #3328'
)
config.addinivalue_line(
'markers',
'executor_supports_std_stream_tuples: Marks tests that require tuple support for stdout/stderr'
)


@pytest.fixture(autouse=True, scope='session')
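
The marker registrations in the conftest.py change above use pytest's
standard mechanism; the following minimal sketch, with hypothetical file
names and not part of this diff, shows why markers are declared in
pytest_configure and how the Makefile-style -k deselection then works:

# conftest.py (sketch): registering a marker stops pytest emitting
# PytestUnknownMarkWarning for it, and --strict-markers can then reject
# unregistered or misspelled marks outright.
def pytest_configure(config):
    config.addinivalue_line(
        'markers',
        'executor_supports_std_stream_tuples: Marks tests that require tuple support for stdout/stderr'
    )


# test_example.py (sketch): the marker name also acts as a pytest keyword,
# so `pytest -k "not executor_supports_std_stream_tuples"` deselects the test.
import pytest


@pytest.mark.executor_supports_std_stream_tuples
def test_stream_tuple_behaviour():
    ...
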
4 changes: 0 additions & 4 deletions parsl/tests/site_tests/README.rst
@@ -67,10 +67,6 @@ Shared filesystem option

There is a new env variable "SHARED_FS_OPTIONS" to pass markers to pytest to skip certain tests.

Tests that rely on stdout/stderr side-effects between apps that work on with a shared-FS can be deselected with `-k "not issue363"`

When there's a shared-FS, the default NoOpStaging works. However, when there's no shared-FS some tests
that uses File objects require a staging provider (eg. rsync). These tests can be turned off with
`-k "not staging_required"`

These can also be combined as `-k "not issue363 and not staging_required"`
3 changes: 0 additions & 3 deletions parsl/tests/test_bash_apps/test_basic.py
@@ -23,7 +23,6 @@ def foo(x, y, z=10, stdout=None, label=None):
return f"echo {x} {y} {z}"


@pytest.mark.issue363
def test_command_format_1(tmpd_cwd):
"""Testing command format for BashApps"""

@@ -38,7 +37,6 @@ def test_command_format_1(tmpd_cwd):
assert so_content == "1 4 10"


@pytest.mark.issue363
def test_auto_log_filename_format():
"""Testing auto log filename format for BashApps
"""
@@ -62,7 +60,6 @@ def test_auto_log_filename_format():
'Output does not match expected string "1 {0} 10", Got: "{1}"'.format(rand_int, contents)


@pytest.mark.issue363
def test_parallel_for(tmpd_cwd, n=3):
"""Testing a simple parallel for loop"""
outdir = tmpd_cwd / "outputs/test_parallel"
3 changes: 0 additions & 3 deletions parsl/tests/test_bash_apps/test_error_codes.py
@@ -76,7 +76,6 @@ def test_div_0(test_fn=div_0):
os.remove('std.out')


@pytest.mark.issue363
def test_bash_misuse(test_fn=bash_misuse):
err_code = test_matrix[test_fn]['exit_code']
f = test_fn()
@@ -91,7 +90,6 @@ def test_bash_misuse(test_fn=bash_misuse):
os.remove('std.out')


@pytest.mark.issue363
def test_command_not_found(test_fn=command_not_found):
err_code = test_matrix[test_fn]['exit_code']
f = test_fn()
@@ -108,7 +106,6 @@ def test_command_not_found(test_fn=command_not_found):
return True


@pytest.mark.issue363
def test_not_executable(test_fn=not_executable):
err_code = test_matrix[test_fn]['exit_code']
f = test_fn()
1 change: 0 additions & 1 deletion parsl/tests/test_bash_apps/test_kwarg_storage.py
@@ -8,7 +8,6 @@ def foo(z=2, stdout=None):
return f"echo {z}"


@pytest.mark.issue363
def test_command_format_1(tmpd_cwd):
"""Testing command format for BashApps
"""
2 changes: 0 additions & 2 deletions parsl/tests/test_bash_apps/test_memoize.py
@@ -12,7 +12,6 @@ def fail_on_presence(outputs=()):
# This test is an oddity that requires a shared-FS and simply
# won't work if there's a staging provider.
# @pytest.mark.sharedFS_required
@pytest.mark.issue363
def test_bash_memoization(tmpd_cwd, n=2):
"""Testing bash memoization
"""
@@ -33,7 +32,6 @@ def fail_on_presence_kw(outputs=(), foo=None):
# This test is an oddity that requires a shared-FS and simply
# won't work if there's a staging provider.
# @pytest.mark.sharedFS_required
@pytest.mark.issue363
def test_bash_memoization_keywords(tmpd_cwd, n=2):
"""Testing bash memoization
"""
1 change: 0 additions & 1 deletion parsl/tests/test_bash_apps/test_memoize_ignore_args.py
@@ -22,7 +22,6 @@ def no_checkpoint_stdout_app_ignore_args(stdout=None):
return "echo X"


@pytest.mark.issue363
def test_memo_stdout():

# this should run and create a file named after path_x
1 change: 0 additions & 1 deletion (file name not shown)
@@ -30,7 +30,6 @@ def no_checkpoint_stdout_app(stdout=None):
return "echo X"


@pytest.mark.issue363
def test_memo_stdout():

assert const_list_x == const_list_x_arg
1 change: 0 additions & 1 deletion parsl/tests/test_bash_apps/test_multiline.py
@@ -14,7 +14,6 @@ def multiline(inputs=(), outputs=(), stderr=None, stdout=None):
""".format(inputs=inputs, outputs=outputs)


@pytest.mark.issue363
def test_multiline(tmpd_cwd):
so, se = tmpd_cwd / "std.out", tmpd_cwd / "std.err"
f = multiline(
6 changes: 2 additions & 4 deletions parsl/tests/test_bash_apps/test_stdout.py
@@ -35,7 +35,6 @@ def echo_to_streams(msg, stderr=None, stdout=None):
]


@pytest.mark.issue363
@pytest.mark.parametrize('spec', speclist, ids=testids)
def test_bad_stdout_specs(spec):
"""Testing bad stdout spec cases"""
@@ -54,7 +53,7 @@ def test_bad_stdout_specs(spec):
assert False, "Did not raise expected exception"


@pytest.mark.issue363
@pytest.mark.issue3328
def test_bad_stderr_file():
"""Testing bad stderr file"""

@@ -72,7 +71,7 @@ def test_bad_stderr_file():
return


@pytest.mark.issue363
@pytest.mark.executor_supports_std_stream_tuples
def test_stdout_truncate(tmpd_cwd):
"""Testing truncation of prior content of stdout"""

@@ -89,7 +88,6 @@ def test_stdout_truncate(tmpd_cwd):
assert len1 == len2


@pytest.mark.issue363
def test_stdout_append(tmpd_cwd):
"""Testing appending to prior content of stdout (default open() mode)"""

1 change: 0 additions & 1 deletion parsl/tests/test_python_apps/test_outputs.py
@@ -16,7 +16,6 @@ def double(x, outputs=[]):
whitelist = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'configs', '*threads*')


@pytest.mark.issue363
def test_launch_apps(tmpd_cwd, n=2):
outdir = tmpd_cwd / "outputs"
outdir.mkdir()
