diff --git a/docs/optimizing-test-execution/subsetting-tests.md b/docs/optimizing-test-execution/subsetting-tests.md
index 1e0f8d223..331c1229d 100644
--- a/docs/optimizing-test-execution/subsetting-tests.md
+++ b/docs/optimizing-test-execution/subsetting-tests.md
@@ -16,6 +16,7 @@ launchable subset \
 See the following sections for how to fill the `...(test runner specific part)...` in the above example:
 
 * [Bazel](subsetting-tests.md#bazel)
+* [Behave](subsetting-tests.md#behave)
 * [CTest](subsetting-tests.md#ctest)
 * [Cypress](subsetting-tests.md#cypress)
 * [GoogleTest](subsetting-tests.md#googletest)
@@ -56,6 +57,23 @@ You can now invoke Bazel with it:
 bazel test $(cat launchable-subset.txt)
 ```
 
+### Behave
+
+To select a meaningful subset of tests, first pipe a list of all test files to the Launchable CLI:
+
+```bash
+find ./features/ | launchable subset \
+  --build <BUILD NAME> \
+  --target <TARGET> \
+  behave > launchable-subset.txt
+```
+
+The file will contain the subset of tests that should be run. You can now invoke Behave to run exactly those tests:
+
+```bash
+behave -i "$(cat launchable-subset.txt)"
+```
+
 ### CTest
 
 To select a meaningful subset of tests, have CTest list your test cases to a JSON file \([documentation](https://cmake.org/cmake/help/latest/manual/ctest.1.html)\), then feed that JSON into the Launchable CLI:
diff --git a/docs/resources/integrations.md b/docs/resources/integrations.md
index 724fc8db6..2fcb7ca78 100644
--- a/docs/resources/integrations.md
+++ b/docs/resources/integrations.md
@@ -5,6 +5,7 @@
 The Launchable CLI includes pre-built integrations with the following test runners/build tools:
 
 * [Bazel](https://bazel.build/)
+* [Behave](https://pypi.org/project/behave/)
 * [CTest](https://cmake.org/cmake/help/latest/manual/ctest.1.html#id13)
 * [Cypress](https://www.cypress.io/)
 * [GoogleTest](https://github.com/google/googletest)
diff --git a/docs/training-a-model/recording-test-results.md b/docs/training-a-model/recording-test-results.md
index ec57ce884..19f2e54cc 100644
--- a/docs/training-a-model/recording-test-results.md
+++ b/docs/training-a-model/recording-test-results.md
@@ -11,6 +11,7 @@ You'll need to disable this feature so that Launchable has enough test results t
 {% endhint %}
 
 * [Bazel](recording-test-results.md#bazel)
+* [Behave](recording-test-results.md#behave)
 * [Cypress](recording-test-results.md#cypress)
 * [CTest](recording-test-results.md#ctest)
 * [GoogleTest](recording-test-results.md#googletest)
@@ -41,6 +42,25 @@ To make sure that `launchable record tests` always runs even if the build fails,
 For more information and advanced options, run `launchable record tests bazel --help`
 
+### Behave
+
+Behave provides a JUnit report option: see [Using behave](https://behave.readthedocs.io/en/stable/behave.html?highlight=junit#cmdoption-junit).
+
+After running tests, point the CLI to the generated test report XML files:
+
+```bash
+# run the tests however you normally do
+behave --junit
+
+launchable record tests --build <BUILD NAME> behave ./reports/*.xml
+```
+
+{% hint style="warning" %}
+To make sure that `launchable record tests` always runs even if the build fails, see [Always record tests](recording-test-results.md#always-record-tests).
+{% endhint %}
+
+For more information and advanced options, run `launchable record tests behave --help`
+
 ### CTest
 
 Have CTest run tests and produce XML reports in its native format. Launchable CLI supports the CTest format; you don't need to convert to JUnit. By default, this location is `Testing/{date}/Test.xml`.
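
Putting the pieces together: a minimal end-to-end sketch of the Behave workflow documented by this change. The `BUILD_NAME` variable, the `20%` target value, and the `./reports/` output directory are illustrative assumptions, not values taken from the docs above.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical build name; substitute whatever your pipeline records as <BUILD NAME>.
BUILD_NAME="${BUILD_NAME:-example-build}"

# 1. Ask Launchable for a subset of Behave feature files.
#    The 20% target is only an illustrative value.
find ./features/ | launchable subset \
  --build "$BUILD_NAME" \
  --target 20% \
  behave > launchable-subset.txt

# 2. Run only the selected features; --junit writes XML reports under ./reports/ by default.
behave --junit -i "$(cat launchable-subset.txt)"

# 3. Record the results so Launchable can keep training its model.
launchable record tests --build "$BUILD_NAME" behave ./reports/*.xml
```

The subset file is passed to `behave -i` and the JUnit XML files to `launchable record tests`, exactly as in the two sections added above; only the glue around them is assumed here.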