Feature Request: Support or document methods of "black box" integration testing #137
Communicating with modules over MQTT directly has other advantages, as well. In particular, it enables third parties to perform "black box" integration with EVerest, where EVerest is stood up in conjunction with other software (and hardware) to confirm expected behavior across a more connected set of services (e.g., SIL or HIL tests involving an EV, EVerest, and a CSMS). I believe @shankari wanted to provide some real world examples from recent PKI testing efforts where this would come in handy.
Hi @drmrd @shankari,
I agree we need more documentation regarding testing, including:
This would address your point 1. I agree this knowledge is necessary to implement tests at the integration level (with the probe module). I would also guess it addresses your point 3, because I would say the probe module is not defined for specific testing and is general purpose, so I don't agree on point 3. On point 4: if you are not familiar with pytest it looks quite complex, but the testing framework is "just" a collection of handy/useful pytest fixtures that are controlled by markers. Both can be reused. On point 2: So, in summary:
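The fixtures-controlled-by-markers pattern described above can be sketched in plain pytest. The marker and fixture names below are hypothetical illustrations, not the actual `everest.testing` API:

```python
import pytest


def config_from_marker(marker, default="default"):
    """Pull a configuration name out of a pytest marker, with a fallback.

    Factored into a plain function so the logic is testable without a
    full pytest run.
    """
    if marker is not None and marker.args:
        return marker.args[0]
    return default


@pytest.fixture
def everest_config(request):
    # A test opts into a configuration via a marker, e.g.
    #   @pytest.mark.everest_core_config("sil")
    # The fixture reads the closest such marker on the requesting test.
    return config_from_marker(
        request.node.get_closest_marker("everest_core_config")
    )


@pytest.mark.everest_core_config("sil")
def test_uses_sil_config(everest_config):
    assert everest_config == "sil"
```

The key point is that the fixture is generic; each test only declares *which* configuration it wants via the marker, so the same fixture is reusable across suites.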
Open questions:
When wishing to run test scenarios against the full EVerest stack that require monitoring of EVerest modules, the approach that appears publicly in the `everest-core` tests is to dynamically define a new type of EVerest module at runtime using `everestpy` and run it as a standalone module alongside the rest of the stack. This custom "probe module" relies on `everestpy` to connect to the MQTT message broker and publish/subscribe to topics associated with other modules.

This approach has a number of limitations, some of which are as follows:
1. Required Knowledge of EVerest Tooling: The test in `startup_tests.py` looks readable on the surface, but in order to reuse it in new tests, a developer almost immediately requires knowledge of each of the following (at a minimum):
   - `everestpy` from `everest-framework` (and `pybind11` syntax, accordingly)
   - the fact that the `everest.testing` submodule is really a copy of the `everest-testing` submodule from `everest-utils`, which to my knowledge isn't explicitly documented anywhere
2. Not Testing a Production EVerest Stack: While unlikely, it's possible that introducing a misconfigured standalone module into EVerest results in a change to the behavior of other modules within EVerest.
3. Lack of Convenient Testing Abstractions: In its current implementation, the probe module is not general purpose, meaning that a bespoke probe module must be defined to test each specific behavior. Without a higher-level abstraction here, this pattern will likely result in quite a bit of boilerplate code in test suites and brittle tests.
4. Complexity: This seems like quite a few layers of abstraction simply to trigger a module command or obtain a variable value.
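As a thought experiment on the abstraction gap in point 3, a general-purpose probe could be a thin wrapper over a publish/subscribe transport, so that individual tests parameterize topics instead of defining new modules. Everything below is a hypothetical sketch, not an existing EVerest API; the in-memory transport stands in for a real MQTT client:

```python
from collections import defaultdict


class FakeTransport:
    """In-memory stand-in for an MQTT client, for illustration and testing."""

    def __init__(self):
        self._subscriptions = defaultdict(list)

    def publish(self, topic, payload):
        # Deliver synchronously to every callback subscribed to this topic.
        for callback in self._subscriptions[topic]:
            callback(topic, payload)

    def subscribe(self, topic, callback):
        self._subscriptions[topic].append(callback)


class Probe:
    """General-purpose probe: records everything seen on watched topics."""

    def __init__(self, transport):
        self._transport = transport
        self.received = []

    def watch(self, topic):
        # Record (topic, payload) pairs as they arrive.
        self._transport.subscribe(
            topic, lambda t, p: self.received.append((t, p))
        )

    def send(self, topic, payload):
        self._transport.publish(topic, payload)
```

A test would then instantiate one `Probe`, call `watch()` on the topics of interest, and assert on `probe.received`, with no per-behavior module definitions required.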
All of these issues can be addressed through a mix of improvements to the main EVerest documentation: providing documentation for `everestpy`, extending the documentation of `everest-testing`, extending `everest-testing` with some kind of general-purpose probe module, etc. That said, I'm curious why the community has adopted this approach over building out a testing framework that connects to MQTT directly, using a client like Paho.

Either way, from my own experience and conversations with other EVerest adopters, it seems like commentary on existing testing approaches would be a welcome contribution to the documentation.
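A direct-MQTT harness along these lines might look like the sketch below. The topic layout is an assumption made for illustration (EVerest's actual topic scheme should be checked against its documentation), and the client calls target the paho-mqtt 1.x callback API:

```python
def everest_var_topic(module_id: str, var_name: str) -> str:
    """Build the topic for a module's published variable (assumed layout)."""
    return f"everest/{module_id}/var/{var_name}"


def main():
    # Third-party dependency: pip install "paho-mqtt<2"
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, message):
        # Print every observed variable update; a real harness would
        # record these for assertions instead.
        print(f"{message.topic}: {message.payload.decode()}")

    client = mqtt.Client()  # paho-mqtt 1.x; 2.x also requires a CallbackAPIVersion
    client.on_message = on_message
    client.connect("localhost", 1883)  # broker used by the EVerest deployment
    client.subscribe(everest_var_topic("connector_1", "session_event"))
    client.loop_forever()


if __name__ == "__main__":
    main()
```

Because the harness only speaks MQTT, the same test could run against a SIL setup or real hardware without importing `everestpy` at all, which is part of the appeal of the direct approach.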