
test/e2e: improve nginx deployment test #2044

Open · wants to merge 2 commits into main

Conversation

wainersm
Member

The nginx deployment test isn't completely stable. Sometimes it fails and we don't know yet whether it's a legitimate bug in peer pods or not. This PR doesn't solve that problem either, as I still don't know the root cause, but it improves the test so that the Teardown() function runs even when WithSetup() fails. We suspect that the leftover nginx deployment might be interfering with the next tests, so this change will at least remove that variable (if it really is a problem). Regardless, tests should always delete their resources at the end of their execution.

There is one thing I'd still like to implement: do not run Teardown() when not running on CI. In other words, if I'm running the test locally and it fails, I want the deployment left running so I can inspect it. I may send that in another PR, or maybe even in this one, depending on the reviews.
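For the record, a minimal sketch of that check, assuming a plain environment-variable test; the `CI` variable and the `skipTeardown` name are illustrative, not something this PR adds:

```go
package e2e

import "os"

// skipTeardown reports whether the test resources should be left on the
// cluster for manual inspection, i.e. when the test is not running on CI.
func skipTeardown() bool {
	// Most CI systems (including GitHub Actions) export CI=true.
	return os.Getenv("CI") == ""
}
```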

Fixed the printing of the pod name. Also print a user-friendly message
when there are no pod logs to show.

Signed-off-by: Wainer dos Santos Moschetta <[email protected]>
The teardown function doesn't run if WithSetup fails, which might
leave the deployment on the cluster, where it might interfere with the
next tests.

This replaces some t.Fatal() calls with t.Error() so that the current
goroutine doesn't exit; execution continues after the t.Error() call
but the test is marked as failed. Because of that, I had to return
explicitly after calling t.Error().

When WithSetup() fails, it doesn't make sense to run Assess(), and the
only way to pass that status down to Assess() is through the `ctx`
variable (due to a limitation of the k8s e2e framework, the `t` object
is not shared between the steps); see the sketch below.

Signed-off-by: Wainer dos Santos Moschetta <[email protected]>
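A minimal sketch of that pattern, assuming the sigs.k8s.io/e2e-framework API; the `setupFailedKey` type and the `createNginxDeployment`/`deleteNginxDeployment` helpers are illustrative placeholders rather than the actual code in this PR:

```go
package e2e

import (
	"context"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/features"
)

// testEnv is assumed to be initialized in TestMain (illustrative).
var testEnv env.Environment

// setupFailedKey marks in the context that WithSetup() failed, since the
// framework doesn't share the *testing.T object between the steps.
type setupFailedKey struct{}

func TestNginxDeployment(t *testing.T) {
	feature := features.New("nginx deployment").
		WithSetup("create nginx deployment", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			if err := createNginxDeployment(ctx, cfg); err != nil {
				// t.Error marks the test as failed but, unlike t.Fatal, lets
				// execution continue so that Teardown still runs.
				t.Error(err)
				return context.WithValue(ctx, setupFailedKey{}, true)
			}
			return ctx
		}).
		Assess("deployment becomes available", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			if failed, _ := ctx.Value(setupFailedKey{}).(bool); failed {
				// Setup already reported the error; skip the assertion.
				return ctx
			}
			// ... wait for the pods and check their logs ...
			return ctx
		}).
		Teardown(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			// Always delete the deployment, even if setup failed, so a
			// leftover deployment can't interfere with the next tests.
			if err := deleteNginxDeployment(ctx, cfg); err != nil {
				t.Error(err)
			}
			return ctx
		}).
		Feature()

	testEnv.Test(t, feature)
}

// Hypothetical helpers standing in for the real create/delete logic.
func createNginxDeployment(ctx context.Context, cfg *envconf.Config) error { return nil }
func deleteNginxDeployment(ctx context.Context, cfg *envconf.Config) error { return nil }
```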
@stevenhorsman added the test_e2e_libvirt (Run Libvirt e2e tests) label on Sep 18, 2024
@stevenhorsman
Member

stevenhorsman left a comment

Makes sense to me. Thanks @wainersm!

@mkulke
Collaborator

mkulke commented Sep 18, 2024

I don't have a lot of context, but in my (manual) tests I'm always using deployments. Essentially, though, deployments are higher-order Kubernetes constructs; as a primitive they should not be relevant for CAA. So if we have a Deployment test that spawns pods with a Pod spec matching 1:1 the spec of a (reliable) Pod test, I'd conclude that there are problems in our testing suite (probably race conditions). Maybe it makes sense to remodel the deployment tests to mirror the pod test closely.

@mkulke
Collaborator

mkulke commented Sep 18, 2024

I also noticed that the tests cause side effects that make it impossible to run them in parallel. Essentially, the asserts of one test find leftovers of services or other k8s objects from another, because they use the same names. So it might make sense to add some logic that makes each test independent, for example by including a random segment in the names.

@stevenhorsman
Member

I also noticed that the tests cause side effects that make it impossible to run them in parallel. Essentially, the asserts of one test find leftovers of services or other k8s objects from another, because they use the same names. So it might make sense to add some logic that makes each test independent, for example by including a random segment in the names.

We currently have a different namespace for each test run. We picked that to make it easier to clean everything up together if something went wrong, but if you think it would help to have a random namespace for every test, then we could consider that?

@mkulke
Collaborator

mkulke commented Sep 19, 2024

We currently have a different namespace for each test run. We picked that to make it easier to clean everything up together if something went wrong, but if you think it would help to have a random namespace for every test, then we could consider that?

Yeah, ideally the tests within a test run should also be independent, so running them in their own namespaces would indeed be a good measure; it will also help with cleanup. We can use a label (e.g. test-run: 123) on the namespaces, so we can bulk-delete them in a teardown section.
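A minimal sketch of that label-based cleanup, assuming plain client-go; the `test-run` label key and the helper names are illustrative. It lists the namespaces by label and deletes them one by one:

```go
package e2e

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace creates a namespace labeled with the test-run id so
// that everything belonging to one run can be found (and removed) later.
func createTestNamespace(ctx context.Context, client kubernetes.Interface, name, runID string) error {
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: map[string]string{"test-run": runID},
		},
	}
	_, err := client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	return err
}

// deleteTestNamespaces bulk-deletes all namespaces created for a test run
// by listing them via the test-run label and deleting them individually.
func deleteTestNamespaces(ctx context.Context, client kubernetes.Interface, runID string) error {
	list, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{
		LabelSelector: fmt.Sprintf("test-run=%s", runID),
	})
	if err != nil {
		return err
	}
	for _, ns := range list.Items {
		if err := client.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```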

@wainersm
Member Author

wainersm commented Oct 7, 2024

Hi @mkulke,

I don't have a lot of context, but in my (manual) tests I'm always using deployments. Essentially, though, deployments are higher-order Kubernetes constructs; as a primitive they should not be relevant for CAA. So if we have a Deployment test that spawns pods with a Pod spec matching 1:1 the spec of a (reliable) Pod test, I'd conclude that there are problems in our testing suite (probably race conditions). Maybe it makes sense to remodel the deployment tests to mirror the pod test closely.

I don't remember the context in which the nginx deployment test was added, and the original commit doesn't explain it either, so I don't know for sure. It does exercise ReadinessProbe and LivenessProbe, though, so maybe probes are relevant for the peer pods implementation? Supposing they are, should the test really be a Deployment rather than a regular Pod?
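For illustration, a plain Pod exercising the same probes might look like the sketch below; the pod name, image, and kata-remote runtime class are assumptions, not taken from the existing test:

```go
package e2e

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// nginxPodWithProbes builds a plain Pod that exercises the same readiness
// and liveness probes as the deployment test, without the Deployment layer.
func nginxPodWithProbes(namespace string) *corev1.Pod {
	runtimeClass := "kata-remote" // assumed peer-pods runtime class; adjust as needed
	httpRoot := corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-probe", Namespace: namespace},
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClass,
			Containers: []corev1.Container{{
				Name:  "nginx",
				Image: "nginx:latest",
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
				ReadinessProbe: &corev1.Probe{
					ProbeHandler:        httpRoot,
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
				LivenessProbe: &corev1.Probe{
					ProbeHandler:        httpRoot,
					InitialDelaySeconds: 10,
					PeriodSeconds:       10,
				},
			}},
		},
	}
}
```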

In the end, I'm trying to understand whether I should address the static checker errors to get this merged, drop the nginx deployment test, or take another action :)

@wainersm
Member Author

wainersm commented Oct 7, 2024

We currently have a different namespace for each test run. We picked that to make it easier to clean everything up together if something went wrong, but if you think it would help to have a random namespace for every test, then we could consider that?

Yeah, ideally the tests within a test run should also be independent, so running them in their own namespaces would indeed be a good measure; it will also help with cleanup. We can use a label (e.g. test-run: 123) on the namespaces, so we can bulk-delete them in a teardown section.

Makes sense to me. Created an issue to discuss that and other topics related to running the tests in parallel: #2092

Labels: e2e-test, test_e2e_libvirt (Run Libvirt e2e tests)