Move execute_wait from vestigial LocalChannel into parsl.utils #3705
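For callers, the move the title describes amounts to an import change. A minimal sketch; the old channel-based call shape is an assumption inferred from the title, not something shown on this page:

    # Before (hypothetical shape): execute_wait was a method on the
    # vestigial LocalChannel:
    #
    #     from parsl.channels import LocalChannel
    #     rc, stdout, stderr = LocalChannel().execute_wait("env", 1)

    # After this PR, execute_wait is a plain function in parsl.utils:
    from parsl.utils import execute_wait

    rc, stdout, stderr = execute_wait("env", 1)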
Changes from all commits: two files were deleted (their contents are not shown here), and one new test file was added:
@@ -0,0 +1,35 @@

import pytest

from parsl.utils import execute_wait


@pytest.mark.local
def test_env():
    ''' Regression testing for issue #27
    '''

    rc, stdout, stderr = execute_wait("env", 1)

    stdout = stdout.split('\n')
    x = [s for s in stdout if s.startswith("PATH=")]
    assert x, "PATH not found"

    x = [s for s in stdout if s.startswith("HOME=")]
    assert x, "HOME not found"


@pytest.mark.local
def test_large_output_2210():
    """Regression test for #2210.
    execute_wait was hanging if the specified command gave too
    much output, due to a race condition between process exiting and
    pipes filling up.
    """

    # this will output 128KB of stdout
    execute_wait("yes | dd count=128 bs=1024", walltime=60)

    # if this test fails, execute_wait should raise a timeout
    # exception.

    # The contents of the output are not verified by this test
Comment on lines +22 to +35:
Reviewer comment: Similar question (and the same understanding that this is simply a move) as to the provenance of this test, though its utility is clearer to me. Meanwhile, I'm unsure of the value of these particular numbers. Why stop at "only" 128KB? If it's going to hang, perhaps we should make sure it does, so we catch it, nominally with a much higher value; 1GB would be overkill enough. Similarly, if we're going to fail, this should fail very quickly, so why wait for a full minute? 5s should be beyond plenty of time, no? A sketch of that variant might be:
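The snippet that accompanied this comment did not survive the page capture; a minimal sketch of the proposed variant, with the 1GB size and 5s walltime taken from the comment above (the exact dd invocation is my assumption):

    import pytest

    from parsl.utils import execute_wait


    @pytest.mark.local
    def test_large_output_2210():
        # ~1GB of stdout via 1KB blocks; 1KB reads from the pipe are
        # full-size, so dd's count stays meaningful. If the pipe-filling
        # hang regresses, the 5s walltime turns it into a quick timeout
        # rather than a stuck test run.
        execute_wait("yes | dd count=1048576 bs=1024", walltime=5)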
Another reviewer comment: I get that this is just moving the test, but I also don't understand what it's testing. What does this have to do with the referenced issue #27, which looks related to documentation?

Meanwhile, "PATH/HOME not found" is restating the logic. Is the goal just to ensure we're executing something successfully? If so, something like "Did not execute: canary value missing!" might be a better assertion message. Or is the environment actually important to verify? I don't see any reference to a special environment in execute_wait...?

And of course, a simplification of the test itself might be:
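The snippet that followed was not captured on this page; a minimal sketch of one plausible simplification, replacing the line-by-line parsing with a substring check and using the assertion message suggested above:

    import pytest

    from parsl.utils import execute_wait


    @pytest.mark.local
    def test_env():
        # A single canary variable is enough to show the command ran.
        rc, stdout, stderr = execute_wait("env", 1)
        assert "PATH=" in stdout, "Did not execute: canary value missing!"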
Author reply: The #27 is a red herring: around 2017 there was a somewhat misguided activity to separate out the provider layer into its own GitHub repo called libsubmit, and this refers to libsubmit issue 27: Parsl/libsubmit#27. Go back far enough in history and the commit that introduces this test does its work in the libsubmit/ subdirectory, from the subsequently re-merged libsubmit history.

I'm not going to tidy these tests any time soon; this PR sequence really is meant to be moving, deleting, and then moving on.