Configuration

Suppose that dataConverter.feature has 2 Scenario Outlines with 2 examples each, plus 2 more Scenarios. So there are a total of 6 tests.

Scenario Outline:
    Examples:
    |1|
    |2|
Scenario Outline:
    Examples:
    |1|
    |2|
@skip
Scenario:
Scenario:

We have to define a strategy expecting that all the tests, excluding the skipped one, are passing.

module.exports = {
    threshold: [{
            file: "dataConverter.feature",
            pass: [1,2.1, 2.2, 4]
        }]
}

In the above threshold strategy, 1 means all examples of the 1st Scenario Outline, which could also be mentioned as 1.1, 1.2.
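
For instance, the same expectation could also be written with the examples of the 1st Scenario Outline listed individually:

module.exports = {
    threshold: [{
            file: "dataConverter.feature",
            // 1.1 and 1.2 together are equivalent to the 1 in the strategy above
            pass: [1.1, 1.2, 2.1, 2.2, 4]
        }]
}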

Remember that if the feature file is not part of the test set, that particular threshold strategy will not be applicable. E.g. the command below runs only google*.feature, so the dataConverter.feature strategy above would not apply.

$ DEBUG=cytorus npx cytorus run --spec "cypress/integration/features/google*.feature"

Structure

A strategy has 3 parts: Selector, Expectation, and When. E.g.

[{
    tagExpression: "@all but not @failing", // Selector
    pass: [1,2]                             // Expectation
}]

Selectors: You can't have multiple selectors in a single strategy.

  • tagExpression: It can be used to select the tests for the strategy using tag expression.
  • file: To apply the strategy on tests from particular feature file.
[{
    file: "dataConverter.feature",
    pass: [1,2.1, 2.2, 4]
},{
    tagExpression: "@all but not @failing", 
    min: "80%"
}]

Expectations: Expectations are a must; a strategy without an expectation gets skipped. The following expectations can be applied to the selected tests:

  • max: specify the maximum count or percentage of passing tests.
  • min: specify the minimum count or percentage of passing tests.
[{
    file: "features/nested/dynamicArguments.feature",
    min: "100%"
},{
    tagExpression: "@all but not @failing"
    max: 17
},{
    min: "30%"
}]

When you don't specify a selector, the expectation is applied to all tests.

  • pass: accepts an array of scenario positions in the selected feature files which are expected to pass. Other tests may also pass. This can't be used with the tagExpression selector.
  • fail: accepts an array of scenario positions in the selected feature files which are expected to fail. Other tests may also fail. This can't be used with the tagExpression selector.
[{
    file: "integration/features/failing.feature",
    pass: [2],
    fail: [1,3,4]
},{
    file: "integration/features/dataConverter.feature",
    pass: [2.1, 2.3],
}]

Suppose you specify pass: [1, 2.1]. If the test at the first position is a Scenario Outline, then all of its examples should pass. 2.1 denotes the 1st example of the 2nd test in the selected feature file.

When: You may have to skip a particular strategy on some condition, e.g. when the tests are running with particular CLI parameters, or on a specific day/time/environment.

[{
    tagExpression: "@timebound but not @wip" ,
    min: "100%",
    when: () => (new Date()).getHours() < 17
}]

Note: You may want to skip the threshold strategies when running particular tests, by passing --skip-threshold as a CLI parameter.

Use cases

A threshold strategy can help keep your CI/CD builds green even when some tests are failing.

Scenario 1

Suppose you're working on a big project where many tests are still failing and a new feature has to be developed. When your test build fails, it is difficult to tell whether the failure is because of the old tests or the new ones. You want to keep the build green for the new feature and fix the old tests gradually.

What strategy will you choose? You could create 2 builds: one for the new feature, another for the old tests. Put the first build in the pipeline so that if any test for the new feature fails, it breaks the pipeline. However, this approach has an issue: how will you ensure that the success percentage of the failing build keeps increasing?

With a threshold strategy, you can define the percentage of failures that you reduce on a daily/weekly basis. Your test result will pass as long as the threshold is met. You need not create 2 separate jobs/builds for this.
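
A minimal sketch of such a strategy, assuming the new feature's tests are tagged @new and the old tests are tagged @legacy (both tags are hypothetical; use whatever your project defines):

module.exports = {
    threshold: [{
        tagExpression: "@new",      // hypothetical tag on the new feature's tests
        min: "100%"                 // the new feature must always stay green
    },{
        tagExpression: "@legacy",   // hypothetical tag on the old, still-failing tests
        min: "60%"                  // raise this percentage on a daily/weekly basis
    }]
}

As the old tests get fixed, you bump the @legacy percentage, so a single build tracks the progress.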

Scenario 2

Suppose you have some tests that place an order with a shop, which is only possible between 9am and 6pm. If you run these tests after 6pm, the tests will fail, and hence so will your build/job.

There can be other reasons too for a test to fail because a certain condition isn't met. E.g. when you select express delivery, your order should be delivered by the next day. But if the next day is a bank holiday or a weekend, then this test might fail.

With the threshold strategy you can set a rule that if the tests are failing because they ran outside the defined time range, the whole build is not failed.
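
A sketch of such a rule, assuming the order tests carry a hypothetical @order tag, can use when so that the expectation is only enforced during shop hours:

[{
    tagExpression: "@order",    // hypothetical tag on the order-placement tests
    min: "100%",
    // apply this strategy only between 9am and 6pm
    when: () => {
        const hour = (new Date()).getHours();
        return hour >= 9 && hour < 18;
    }
}]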

Check the skip strategy to make your build lighter by skipping tests that should run only in a particular time range.

Scenario 3

  • Sometimes we have crucial tests which must never fail and should be addressed with priority. You can set their expected success to 100%, as in the sketch after this list.
  • Or there can be
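
For the first case, a sketch assuming the crucial tests carry a hypothetical @crucial tag:

[{
    tagExpression: "@crucial",  // hypothetical tag on the must-never-fail tests
    min: "100%"
}]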

--skip-threshold

By default, Cytorus applies the mentioned thresholds to all the tests.

If you're running tests for a.feature only, but a threshold expects that all the tests from b.feature should also be successful, then the overall test status would be failed.

You can pass --skip-threshold in the command to tell cytorus not to apply the thresholds.

E.g.

npx cytorus run --skip-threshold  --spec 'cypress/integration/features/a.feature'

> Next : Run tests in parallel (experimental)