diff --git a/docs/docs/articles/test-workflows-examples-expressions.md b/docs/docs/articles/test-workflows-examples-expressions.md
deleted file mode 100644
index 4744003f836..00000000000
--- a/docs/docs/articles/test-workflows-examples-expressions.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# Test Workflows Examples - Expressions
-
-## Expressions Language
-
-We have designed a simple expressions language, that allows dynamic evaluation of different values.
-
-## JSON-Native
-
-It is built on JSON, so every JSON syntax is a valid expression value as well, like `[ "a", "b", "c" ]`.
-
-## Math
-
-You can do basic math easily, like **config.workers * 5**.
-
-![Expressions](../img/expressions.png)
-
-## Built-in Variables
-
-### General Variables
-
-There are some built-in variables available in most of the places;
-
-- **env** - Object has a reference to the environment variables.
-- **config** - Object has a reference to defined configuration variables.
-- **execution** - Object has some execution data.
-
-### Contextual Variables
-
-In some contexts, there are additional variables available.
-
-As an example, while writing the condition, you can use variables like passed (bool), failed (bool), always (true), never (false), status (string) that refer to current status of the TestWorkflow.
-
-![Built-in Variables](../img/built-in-variables.png)
-
-## Built-in Functions
-
-### Casting Functions
-
-There are some functions that help to cast or serialize values, such as **int**, **json**, **tojson**, **yaml**, and **toyaml**.
-
-### General Functions
-
-There are some functions that aid in working with data, i.e. **join**, **split**, **floor**, **round**, **trim**, **len**, **map**, **filter**, **jq**, **shellparse** or **shellquote**.
-
-### File System Functions
-
-You can as well read the file system in the Test Workflow to determine values based on that. You can read files with **file** function, or list files with **glob**.
-
-![Built-in Functions](../img/built-in-functions.png)
-
-
-
-
diff --git a/docs/docs/articles/test-workflows-expressions.md b/docs/docs/articles/test-workflows-expressions.md
new file mode 100644
index 00000000000..8a1e9c8c317
--- /dev/null
+++ b/docs/docs/articles/test-workflows-expressions.md
@@ -0,0 +1,171 @@
+# Test Workflows - Expressions
+
+## Expressions Language
+
+We have designed a simple expression language that allows dynamic evaluation of different values.
+
+## JSON-Native
+
+It is built on JSON, so any valid JSON is a valid expression value as well, like `[ "a", "b", "c" ]`.
+
+## Math
+
+You can do basic math easily, like `config.workers * 5`.
+
+![Expressions](../img/expressions.png)
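+
+As a hypothetical sketch of how this is used in practice, expressions are embedded in workflow values with the `{{ }}` template syntax (the workflow name and the `workers` config key below are illustrative):
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+  name: example-math
+spec:
+  config:
+    workers:
+      type: integer
+      default: 2
+  steps:
+    # config.workers * 5 is evaluated before the command runs
+    - shell: 'echo "Spawning {{ config.workers * 5 }} virtual users"'
+```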
+
+### Operators
+
+#### Arithmetic
+
+Operators have defined precedence, so the evaluation order follows standard math rules. Examples:
+
+* `1 + 2 * 3` will result in `7`
+* `(1 + 2) * 3` will result in `9`
+* `2 * 3 ** 2` will result in `18`
+
+| Operator | Returns | Description | Example |
+|---------------------------|--------------------------------------|------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `==` (or `=`) | `bool` | Is equal? | `3 == 5` is `false` |
+| `!=` (or `<>`) | `bool` | Is not equal? | `3 != 5` is `true` |
+| `>` | `bool` | Is greater than? | `3 > 5` is `false` |
+| `<` | `bool` | Is lower than? | `3 < 5` is `true` |
+| `>=` | `bool` | Is greater than or equal? | `3 >= 5` is `false` |
+| `<=` | `bool` | Is lower than or equal? | `3 <= 5` is `true` |
+| `&&`                      | the last value or the falsy one      | Are both truthy?                               | `true && false` is `false`<br/>`5 && 0 && 3` is `0`<br/>`5 && 3 && 2` is `2`                                                                                                                                   |
+| `\|\|`                    | first truthy value or the last value | Is any truthy?                                 | `true \|\| false` is `true`<br/>`5 \|\| 3 \|\| 0` is `5`<br/>`0 \|\| 5` is `5`<br/>`"" \|\| "foo"` is `"foo"`                                                                                                   |
+| `!` | `bool` | Is the value falsy? | `!0` is `true` |
+| `?` and `:` | any of the values inside | Ternary operator - if/else | `true ? 5 : 3` is `5` |
+| `+`                       | `string` or `float`                  | Add numbers together or concatenate text       | `1 + 3` is `4`<br/>`"foo" + "bar"` is `"foobar"`<br/>`"foo" + 5` is `"foo5"`                                                                                                                                   |
+| `-` | `float` | Subtract one number from another | `5 - 3` is `2` |
+| `%` | `float` | Divides numbers and returns the remainder | `5 % 3` is `2` |
+| `/`                       | `float`                              | Divides two numbers                            | `6 / 3` is `2`<br/>`10 / 4` is `2.5`<br/>**Edge case:** `10 / 0` is `0` (for simplicity)                                                                                                                       |
+| `*` | `float` | Multiplies one number by the other | `4 * 2` is `8` |
+| `**` | `float` | Exponentiation - power one number to the other | `2 ** 5` is `32` |
+| `(` and `)` | the inner type | Compute the expression altogether | `(2 + 3) * 5` is `20` |
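+
+For instance, the comparison and ternary operators compose naturally inside a template (a hypothetical fragment; the `workers` config key is illustrative):
+
+```yaml
+spec:
+  steps:
+    # pick a label based on the configured number of workers
+    - shell: 'echo "mode: {{ config.workers > 1 ? "parallel" : "serial" }}"'
+```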
+
+#### Access
+
+| Operator | Description | Example |
+|----------|---------------------------|-------------------------------------------------------------------------------------|
+| `.`      | Access inner value        | `{"id": 10}.id` is `10`<br/>`["a", "b"].1` is `"b"`                                 |
+| `.*.` | Wildcard mapping | `[{"id": 5}, {"id": 3}].*.id` is `[5, 3]` |
+| `...` | Spread arguments operator | `shellquote(["foo", "bar baz"]...)` is equivalent of `shellquote("foo", "bar baz")` |
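+
+A hypothetical fragment combining the wildcard and spread operators with the `services` variable described below:
+
+```yaml
+spec:
+  steps:
+    # .*. collects the IP of every 'db' service instance into a list,
+    # and ... spreads that list into separate shellquote() arguments
+    - shell: 'echo {{ shellquote(services.db.*.ip...) }}'
+```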
+
+## Built-in Variables
+
+### General Variables
+
+There are some built-in variables available. Some of them may be resolved before execution (and can therefore be used for Pod settings),
+while others are accessible only dynamically in the container.
+
+#### Selected Variables
+
+| Name | Resolved immediately | Description |
+|------------------------------------------------------------|----------------------|-----------------------------------------------------------------------------------|
+| `always` | ✅ | Alias for `true` |
+| `never` | ✅ | Alias for `false` |
+| `config` variables (like `config.abc`) | ✅ | Values provided for the configuration |
+| `execution.id` | ✅ | TestWorkflow Execution's ID |
+| `resource.id` | ✅ | Either execution ID, or unique ID for parallel steps and services |
+| `resource.root` | ✅ | Either execution ID, or nested resource ID, of the resource that has scheduled it |
+| `namespace` | ✅ | Namespace where the execution will be scheduled |
+| `workflow.name` | ✅ | Name of the executed TestWorkflow |
+| `env` variables (like `env.SOME_VARIABLE`) | ❌ | Environment variable value |
+| `failed` | ❌ | Is the TestWorkflow Execution failed already at this point? |
+| `passed` | ❌ | Is the TestWorkflow Execution still not failed at this point? |
+| `services` (like `services.db.0.ip` or `services.db.*.ip`) | ❌ | Get the IPs of initialized services |
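+
+A sketch using a few of these variables; the immediately-resolved ones can also appear in Pod/Job settings (the `job.labels` placement is an assumption for illustration):
+
+```yaml
+spec:
+  job:
+    labels:
+      # execution.id is resolved before the Pod is scheduled
+      execution-id: '{{ execution.id }}'
+  steps:
+    # env variables are only resolved dynamically, inside the container
+    - shell: 'echo "{{ workflow.name }} on host {{ env.HOSTNAME }}"'
+```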
+
+### Contextual Variables
+
+In some contexts, there are additional variables available.
+
+#### Retry Conditions
+
+When using a custom `retry` condition, you can use `self.passed` and `self.failed` to determine the status based on the step's own status.
+
+```yaml
+spec:
+ steps:
+ - shell: exit 0
+ # ensure that the step won't fail for 5 executions
+ retry:
+ count: 5
+ until: 'self.failed'
+```
+
+#### Matrix and Shard
+
+When using `services` (service pods), `parallel` (parallel workers), or `execute` (test suite) steps:
+
+* You can use `matrix.` and `shard.` to access parameters for each copy.
+* You can access `index` and `count` that will differ for each copy.
+* Also, you may use `matrixIndex`, `matrixCount`, `shardIndex` and `shardCount` to get specific indexes/numbers for combinations and shards.
+
+```yaml
+spec:
+ services:
+ # Start two workers and label them with index information
+ db:
+ count: 2
+ description: "Instance {{ index + 1 }} of {{ count }}" # "Instance 1 of 2" and "Instance 2 of 2"
+ image: mongo:latest
+ # Run 2 servers with different node versions
+ api:
+ matrix:
+ node: [20, 21]
+ description: "Node v{{ matrix.node }}" # "Node v20" and "Node v21"
+ image: "node:{{ matrix.node }}"
+```
+
+## Built-in Functions
+
+### Casting
+
+There are some functions that help to cast values to a different type. Additionally, when a value of the wrong type is used somewhere, the engine tries to cast it automatically.
+
+| Name | Returns | Description | Example |
+|----------|-------------------------|--------------------------|---------------------------------------------------------------------------------------------------------------------------|
+| `string` | `string`                | Cast value to a string   | `string(5)` is `"5"`<br/>`string([10, 15, 20])` is `"10,15,20"`<br/>`string({ "foo": "bar" })` is `"{\"foo\":\"bar\"}"` |
+| `list`   | list of provided values | Build a list of values   | `list(10, 20)` is `[ 10, 20 ]`                                                                                             |
+| `int`    | `int`                   | Maps to integer          | `int(10.5)` is `10`<br/>`int("300.50")` is `300`                                                                           |
+| `bool`   | `bool`                  | Maps value to boolean    | `bool("")` is `false`<br/>`bool("1239")` is `true`                                                                         |
+| `float` | `float` | Maps value to decimal | `float("300.50")` is `300.5` |
+| `eval` | anything | Evaluates the expression | `eval("4 * 5")` is `20` |
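+
+For example, casting helps when a value needs a specific type (a hypothetical fragment; the `workers` config key is illustrative):
+
+```yaml
+spec:
+  steps:
+    # int() truncates the fractional part of the computed value
+    - shell: 'echo "Using {{ int(config.workers * 1.5) }} workers"'
+```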
+
+### General
+
+| Name | Returns | Description | Example |
+|--------------|-----------------|-----------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
+| `join`       | `string`        | Join list elements                                                                       | `join(["a", "b"])` is `"a,b"`<br/>`join(["a", "b"], " - ")` is `"a - b"`                                  |
+| `split`      | `list`          | Split string to list                                                                     | `split("a,b,c")` is `["a", "b", "c"]`<br/>`split("a - b - c", " - ")` is `["a", "b", "c"]`                |
+| `trim`       | `string`        | Trim whitespace from the string                                                          | `trim(" \nabc d ")` is `"abc d"`                                                                          |
+| `len`        | `int`           | Length of a list, map or string                                                          | `len([ "a", "b" ])` is `2`<br/>`len("foobar")` is `6`<br/>`len({ "foo": "bar" })` is `1`                  |
+| `floor`      | `int`           | Round value down                                                                         | `floor(10.5)` is `10`                                                                                     |
+| `ceil`       | `int`           | Round value up                                                                           | `ceil(10.5)` is `11`                                                                                     |
+| `round`      | `int`           | Round value to nearest integer                                                           | `round(10.5)` is `11`                                                                                     |
+| `at`         | anything        | Get value of the element                                                                 | `at([10, 2], 1)` is `2`<br/>`at({"foo": "bar"}, "foo")` is `"bar"`                                        |
+| `tojson`     | `string`        | Serialize value to JSON                                                                  | `tojson({ "foo": "bar" })` is `"{\"foo\":\"bar\"}"`                                                       |
+| `json`       | anything        | Parse the JSON                                                                           | `json("{\"foo\":\"bar\"}")` is `{ "foo": "bar" }`                                                         |
+| `toyaml`     | `string`        | Serialize value to YAML                                                                  | `toyaml({ "foo": "bar" })` is `"foo: bar\n"`                                                              |
+| `yaml`       | anything        | Parse the YAML                                                                           | `yaml("foo: bar")` is `{ "foo": "bar" }`                                                                  |
+| `shellquote` | `string`        | Sanitize arguments for shell                                                             | `shellquote("foo bar")` is `"\"foo bar\""`<br/>`shellquote("foo", "bar baz")` is `"foo \"bar baz\""`      |
+| `shellparse` | `[]string`      | Parse shell arguments                                                                    | `shellparse("foo bar")` is `["foo", "bar"]`<br/>`shellparse("foo \"bar baz\"")` is `["foo", "bar baz"]`   |
+| `map`        | `list` or `map` | Map list or map values with an expression; `_.value` and `_.index`/`_.key` are available | `map([1,2,3,4,5], "_.value * 2")` is `[2,4,6,8,10]`                                                       |
+| `filter`     | `list`          | Filter list values with an expression; `_.value` and `_.index` are available             | `filter([1,2,3,4,5], "_.value > 2")` is `[3,4,5]`                                                         |
+| `jq`         | anything        | Execute [**jq**](https://en.wikipedia.org/wiki/Jq_(programming_language)) against value  | `jq([1,2,3,4,5], ". \| max")` is `[5]`                                                                    |
+| `range`      | `[]int`         | Build a range of numbers                                                                 | `range(5, 10)` is `[5, 6, 7, 8, 9]`<br/>`range(5)` is `[0, 1, 2, 3, 4]`                                   |
+| `relpath`    | `string`        | Build relative path                                                                      | `relpath("/a/b/c")` may be `"./b/c"`<br/>`relpath("/a/b/c", "/a/b")` is `"./c"`                           |
+| `abspath`    | `string`        | Build absolute path                                                                      | `abspath("/a/b/c")` is `"/a/b/c"`<br/>`abspath("b/c")` may be `"/some/working/dir/b/c"`                   |
+| `chunk`      | `[]list`        | Split list into chunks of specified maximum size                                         | `chunk([1,2,3,4,5], 2)` is `[[1,2], [3,4], [5]]`                                                          |
+
+### File System
+
+These functions are only evaluated dynamically during the execution.
+
+| Name | Returns | Description | Example |
+|--------|------------|-----------------------|------------------------------------------------------------------------------------------------------------------|
+| `file` | `string` | File contents | `file("/etc/some/path")` may be `"some\ncontent"` |
+| `glob` | `[]string` | Find files by pattern | `glob("/etc/**/*", "./x/**/*.js")` may be `["/etc/some/file", "/etc/other/file", "/some/working/dir/x/file.js"]` |
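+
+A sketch combining the file-system functions with `len` (the paths are illustrative):
+
+```yaml
+spec:
+  steps:
+    # glob() lists matching files; len() counts them
+    - shell: 'echo "Found {{ len(glob("/data/repo/**/*.test.js")) }} test files"'
+```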
+
+![Built-in Functions](../img/built-in-functions.png)
diff --git a/docs/docs/articles/test-workflows-matrix-and-sharding.md b/docs/docs/articles/test-workflows-matrix-and-sharding.md
new file mode 100644
index 00000000000..264966e8dba
--- /dev/null
+++ b/docs/docs/articles/test-workflows-matrix-and-sharding.md
@@ -0,0 +1,238 @@
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+# Test Workflows - Matrix and Sharding
+
+Often you want to run a test with multiple scenarios or environments,
+either to distribute the load or to verify it on different setups.
+
+Test Workflows have a built-in mechanism for all these cases - both static and dynamic.
+
+## Usage
+
+Matrix and sharding features are supported in [**Services (`services`)**](./test-workflows-services.md), and both [**Test Suite (`execute`)**](./test-workflows-test-suites.md) and [**Parallel Steps (`parallel`)**](./test-workflows-parallel.md) operations.
+
+
+**Services (`services`)**
+
+```yaml
+kind: TestWorkflow
+apiVersion: testworkflows.testkube.io/v1
+metadata:
+ name: example-matrix-services
+spec:
+ services:
+ remote:
+ matrix:
+ browser:
+ - driver: chrome
+ image: selenium/standalone-chrome:4.21.0-20240517
+ - driver: edge
+ image: selenium/standalone-edge:4.21.0-20240517
+ - driver: firefox
+ image: selenium/standalone-firefox:4.21.0-20240517
+ image: "{{ matrix.browser.image }}"
+ description: "{{ matrix.browser.driver }}"
+ readinessProbe:
+ httpGet:
+ path: /wd/hub/status
+ port: 4444
+ periodSeconds: 1
+ steps:
+ - shell: 'echo {{ shellquote(join(map(services.remote, "tojson(_.value)"), "\n")) }}'
+```
+
+
+**Test Suite (`execute`)**
+
+```yaml
+kind: TestWorkflow
+apiVersion: testworkflows.testkube.io/v1
+metadata:
+ name: example-matrix-test-suite
+spec:
+ steps:
+ - execute:
+ workflows:
+ - name: k6-workflow-smoke
+ matrix:
+ target:
+ - https://testkube.io
+ - https://docs.testkube.io
+ config:
+ target: "{{ matrix.target }}"
+```
+
+
+**Parallel Steps (`parallel`)**
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-sharded-playwright
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ paths:
+ - test/playwright/executor-tests/playwright-project
+ container:
+ image: mcr.microsoft.com/playwright:v1.32.3-focal
+ workingDir: /data/repo/test/playwright/executor-tests/playwright-project
+
+ steps:
+ - name: Install dependencies
+ shell: 'npm ci'
+
+ - name: Run tests
+ parallel:
+ count: 2
+ transfer:
+ - from: /data/repo
+ shell: 'npx playwright test --shard {{ index + 1 }}/{{ count }}'
+```
+
+
+
+
+## Syntax
+
+This feature allows you to provide a few properties:
+
+* `matrix` to run the operation for different combinations
+* `count`/`maxCount` to replicate or distribute the operation
+* `shards` to provide the dataset to distribute among replicas
+
+Both `matrix` and `shards` can be used together - all the sharding (`shards` + `count`/`maxCount`) will be replicated for each `matrix` combination.
+
+### Matrix
+
+Matrix allows you to run the operation for multiple combinations. The values for each instance are accessible via `matrix.` (e.g. `matrix.image`).
+
+For example:
+
+```yaml
+parallel:
+ matrix:
+ image: ['node:20', 'node:21', 'node:22']
+ memory: ['1Gi', '2Gi']
+ container:
+ resources:
+ requests:
+ memory: '{{ matrix.memory }}'
+ run:
+ image: '{{ matrix.image }}'
+```
+
+Will instantiate 6 copies:
+
+| `index` | `matrixIndex` | `matrix.image` | `matrix.memory` | `shardIndex` |
+|---------|---------------|----------------|-----------------|--------------|
+| `0` | `0` | `"node:20"` | `"1Gi"` | `0` |
+| `1` | `1` | `"node:20"` | `"2Gi"` | `0` |
+| `2` | `2` | `"node:21"` | `"1Gi"` | `0` |
+| `3` | `3` | `"node:21"` | `"2Gi"` | `0` |
+| `4` | `4` | `"node:22"` | `"1Gi"` | `0` |
+| `5` | `5` | `"node:22"` | `"2Gi"` | `0` |
+
+The matrix properties can be a static list of values, like:
+
+```yaml
+matrix:
+ browser: [ 'chrome', 'firefox', '{{ config.another }}' ]
+```
+
+or it can be a dynamic one, using [**Test Workflow's expressions**](test-workflows-expressions.md):
+
+```yaml
+matrix:
+ files: 'glob("/data/repo/**/*.test.js")'
+```
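+
+A fuller hypothetical fragment showing a dynamic matrix driving one instance per test file (the repository path and test runner are illustrative):
+
+```yaml
+parallel:
+  matrix:
+    # one combination - and so one instance - per matching file
+    testFile: 'glob("/data/repo/**/*.test.js")'
+  description: '{{ relpath(matrix.testFile, "/data/repo") }}'
+  shell: 'npx jest {{ shellquote(matrix.testFile) }}'
+```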
+
+### Sharding
+
+Often you may want to distribute the load to speed up the execution. To do so, you can use the `shards` and `count`/`maxCount` properties.
+
+* `shards` is a map of data to split across different instances
+* `count`/`maxCount` describe the number of instances to start
+  * `count` defines a static number of instances (always)
+  * `maxCount` defines the maximum number of instances (fewer will start if there is not enough data in `shards` to split)
+
+
+**Replicas (`count` only)**
+
+```yaml
+parallel:
+ count: 5
+ description: "{{ index + 1 }} instance of {{ count }}"
+ run:
+ image: grafana/k6:latest
+```
+
+**Static sharding (`count` + `shards`)**
+
+```yaml
+parallel:
+ count: 2
+ description: "{{ index + 1 }} instance of {{ count }}"
+ shards:
+ url: ["https://testkube.io", "https://docs.testkube.io", "https://app.testkube.io"]
+ run:
+ # shard.url for 1st instance == ["https://testkube.io", "https://docs.testkube.io"]
+ # shard.url for 2nd instance == ["https://app.testkube.io"]
+ shell: 'echo {{ shellquote(join(shard.url, "\n")) }}'
+```
+
+
+**Dynamic sharding (`maxCount` + `shards`)**
+
+```yaml
+parallel:
+ maxCount: 5
+ shards:
+ # when there will be less than 5 tests found - it will be 1 instance per 1 test
+ # when there will be more than 5 tests found - they will be distributed similarly to static sharding
+ testFiles: 'glob("cypress/e2e/**/*.js")'
+ description: '{{ join(map(shard.testFiles, "relpath(_.value, \"cypress/e2e\")"), ", ") }}'
+```
+
+
+
+
+Similarly to `matrix`, the `shards` may contain a static list, or [**Test Workflow's expression**](test-workflows-expressions.md).
+
+### Counters
+
+Besides the `matrix.` and `shard.` values, there are some counter variables available in Test Workflow's expressions:
+
+* `index` and `count` - counters for total instances
+* `matrixIndex` and `matrixCount` - counters for the combinations
+* `shardIndex` and `shardCount` - counters for the shards
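+
+A sketch putting the counters together (a hypothetical fragment):
+
+```yaml
+parallel:
+  matrix:
+    browser: ["chrome", "firefox"]
+  count: 2
+  # e.g. "chrome, shard 1/2 (instance 1 of 4)" for the first instance
+  description: '{{ matrix.browser }}, shard {{ shardIndex + 1 }}/{{ shardCount }} (instance {{ index + 1 }} of {{ count }})'
+```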
+
+### Matrix and sharding together
+
+Sharding can be combined with matrix. In that case, the configured replicas/shards run for every matrix combination. For example:
+
+```yaml
+matrix:
+ browser: ["chrome", "firefox"]
+ memory: ["1Gi", "2Gi"]
+count: 2
+shards:
+ url: ["https://testkube.io", "https://docs.testkube.io", "https://app.testkube.io"]
+```
+
+Will start 8 instances:
+
+| `index` | `matrixIndex` | `matrix.browser` | `matrix.memory` | `shardIndex` | `shard.url` |
+|---------|---------------|------------------|-----------------|--------------|-------------------------------------------------------|
+| `0` | `0` | `"chrome"` | `"1Gi"` | `0` | `["https://testkube.io", "https://docs.testkube.io"]` |
+| `1` | `0` | `"chrome"` | `"1Gi"` | `1` | `["https://app.testkube.io"]` |
+| `2` | `1` | `"chrome"` | `"2Gi"` | `0` | `["https://testkube.io", "https://docs.testkube.io"]` |
+| `3` | `1` | `"chrome"` | `"2Gi"` | `1` | `["https://app.testkube.io"]` |
+| `4` | `2` | `"firefox"` | `"1Gi"` | `0` | `["https://testkube.io", "https://docs.testkube.io"]` |
+| `5` | `2` | `"firefox"` | `"1Gi"` | `1` | `["https://app.testkube.io"]` |
+| `6` | `3` | `"firefox"` | `"2Gi"` | `0` | `["https://testkube.io", "https://docs.testkube.io"]` |
+| `7` | `3` | `"firefox"` | `"2Gi"` | `1` | `["https://app.testkube.io"]` |
diff --git a/docs/docs/articles/test-workflows-parallel.md b/docs/docs/articles/test-workflows-parallel.md
new file mode 100644
index 00000000000..5fdf1dc1ae5
--- /dev/null
+++ b/docs/docs/articles/test-workflows-parallel.md
@@ -0,0 +1,576 @@
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+# Test Workflows - Parallel Steps
+
+Often you would like to speed up the test execution by distributing the load across multiple runs.
+
+Test Workflows have `parallel` steps that allow you to distribute your tests, even dynamically, across multiple cluster nodes.
+
+## Syntax
+
+To declare a parallel step, specify the step with the `parallel` clause.
+
+### Basic configuration
+
+It allows you to provide:
+
+* similar properties as any other kind of step, i.e. `container`, `run`, `shell` or `steps`
+* general Test Workflow properties, like `job`, `pod` or `content`
+* [**matrix and sharding**](./test-workflows-matrix-and-sharding.md) properties
+* `parallelism` to define the maximum number of instances to run at once
+* `description` that may provide human-readable information for each instance separately
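+
+A minimal sketch combining these properties (the image and command are illustrative):
+
+```yaml
+steps:
+  - name: Load tests
+    parallel:
+      count: 4
+      # run at most 2 of the 4 instances at the same time
+      parallelism: 2
+      description: "Instance {{ index + 1 }} of {{ count }}"
+      run:
+        image: grafana/k6:latest
+        shell: 'k6 run /data/test.js'
+```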
+
+### Fetching logs
+
+By default, the logs for the parallel steps are saved. To disable them or make them conditional, you can use the `logs` property.
+It takes an expression condition, so you can dynamically choose whether the logs should be saved. Often you will use:
+
+* `logs: never` to never store the logs
+* `logs: failed` to store logs only if the step has failed
+
+![example-parallel-log.png](../img/example-parallel-log.png)
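+
+For instance, a sketch that stores logs only for failed instances (the command is illustrative):
+
+```yaml
+parallel:
+  count: 2
+  # keep the logs only when the instance fails
+  logs: failed
+  shell: 'npm test'
+```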
+
+### Pod and Job configuration
+
+The parallel steps are started as separate jobs/pods, so you can configure `pod` and `job` similarly to the general Test Workflow.
+
+### Lifecycle
+
+Similarly to regular steps, you can configure things like `timeout` (`timeout: 30m`), `optional: true`, or `negative: true` for expecting failure.
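+
+A hypothetical fragment using these lifecycle properties:
+
+```yaml
+parallel:
+  count: 2
+  # cancel each instance after 30 minutes
+  timeout: 30m
+  # don't fail the whole workflow when this step fails
+  optional: true
+  shell: 'npm run long-suite'
+```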
+
+### Matrix and sharding
+
+The parallel steps are meant to support matrix and sharding, to run multiple replicas and/or distribute the load across multiple instances.
+This is configured with the regular matrix/sharding properties (`matrix`, `shards`, `count` and `maxCount`).
+
+You can read more about it in the general [**Matrix and Sharding**](./test-workflows-matrix-and-sharding.md) documentation.
+
+## Providing content
+
+There are multiple ways to provide the files for the parallel steps.
+
+:::info
+
+As the parallel steps are started in separate pods, they don't share the file system with the Test Workflow execution.
+
+:::
+
+### Copying content inside
+
+It is possible to copy files from the original Test Workflow into the parallel steps.
+As an example, you may want to fetch the repository and install the dependencies in the original Test Workflow,
+and then distribute them across the parallel steps.
+
+To do so, you can use the `transfer` property. It takes a list of files to transfer:
+
+* `{ from: "/data/repo/build" }` will copy the `/data/repo/build` directory from execution's Pod into `/data/repo/build` in the instance's Pod
+* `{ from: "/data/repo/build", to: "/out" }` will copy the `/data/repo/build` directory from execution's Pod into `/out` in the instance's Pod
+* `{ from: "/data/repo/build", to: "/out", "files": ["**/*.json"] }` will copy only JSON files from the `/data/repo/build` directory from execution's Pod into `/out` in the instance's Pod
+
+#### Example
+
+The example below will:
+
+* Clone the Git repository (`content`)
+* Install the Node.js dependencies (`steps[0].shell`)
+* Run Playwright tests (`steps[1].parallel`)
+ * Specify 2 instances of that step (`steps[1].parallel.count`)
+ * Copy the `/data/repo` along with already installed `node_modules` (`steps[1].parallel.transfer`)
+ * Run the Playwright test with customized `--shard` parameter for each instance (`1/2` and `2/2` respectively, via `steps[1].parallel.shell`)
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-sharded-playwright-test
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ paths:
+ - test/playwright/executor-tests/playwright-project
+ container:
+ image: mcr.microsoft.com/playwright:v1.32.3-focal
+ workingDir: /data/repo/test/playwright/executor-tests/playwright-project
+
+ steps:
+ - name: Install dependencies
+ shell: 'npm ci'
+
+ - name: Run tests
+ parallel:
+ count: 2
+ transfer:
+ - from: /data/repo
+ shell: 'npx playwright test --shard {{ index + 1 }}/{{ count }}'
+```
+
+
+
+
+![example-sharded-playwright-test.png](../img/example-sharded-playwright-test.png)
+
+
+
+
+### Static content or a Git repository
+
+:::tip
+
+For distributed testing, it's better to avoid cloning the repository in each parallel instance.
+Instead, it can be cloned in a sequential step and then transferred to the parallel steps with [`transfer`](#copying-content-inside).
+
+This way you save resources, as the computation and the transfer over the internet happen only once.
+
+:::
+
+Parallel steps allow you to provide the `content` property, similar to the one directly in the Test Workflow. As an example, you may provide static files:
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-parallel-with-static-files
+spec:
+ steps:
+ - parallel:
+ count: 2
+ content:
+ files:
+ - path: /k6.js
+ content: |
+ import http from 'k6/http';
+ export const options = {
+ thresholds: {
+ http_req_failed: ['rate<0.01'],
+ }
+ };
+ export default function () {
+ http.get('https://testkube.io/');
+ }
+ run:
+ image: grafana/k6:latest
+ shell: "k6 run /k6.js --iterations 100"
+```
+
+## Synchronising the parallel steps execution
+
+By default, each parallel instance is executed as soon as it is possible. There is an option to override this though, so the tests won't start until all the instances are ready.
+The pods may start at different times, especially with [node auto-provisioning](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning).
+
+It's especially useful for load testing, like K6, as you want to have the distributed load test executed at the same time.
+
+To achieve that with parallel steps, simply add the `paused: true` clause directly under `parallel`, or to the specific step it should pause at.
+This way, the tests won't start until all instances have reached that point.
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-parallel-with-static-files
+spec:
+ steps:
+ - parallel:
+ count: 2
+ paused: true
+ content:
+ files:
+ - path: /k6.js
+ content: |
+ import http from 'k6/http';
+ export const options = {
+ thresholds: {
+ http_req_failed: ['rate<0.01'],
+ }
+ };
+ export default function () {
+ http.get('https://testkube.io/');
+ }
+ run:
+ image: grafana/k6:latest
+ shell: "k6 run /k6.js --iterations 100"
+```
+
+
+
+
+![example-workflow-sync-lock.png](../img/example-workflow-sync-lock.png)
+
+
+
+
+## Reading files from parallel steps
+
+As opposed to copying files into the parallel step's Pod, you may also want to read reports or other data **from** it.
+There are two basic methods to achieve that.
+
+### Artifacts
+
+The parallel steps may expose data as artifacts, just the same way as a sequential step. The artifacts from different instances will be isolated.
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-sharded-playwright-test-with-artifacts
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ paths:
+ - test/playwright/executor-tests/playwright-project
+ container:
+ image: mcr.microsoft.com/playwright:v1.32.3-focal
+ workingDir: /data/repo/test/playwright/executor-tests/playwright-project
+
+ steps:
+ - name: Install dependencies
+ shell: 'npm ci'
+
+ - name: Run tests
+ parallel:
+ count: 2
+ transfer:
+ - from: /data/repo
+ container:
+ env:
+ - name: PLAYWRIGHT_HTML_REPORT
+ value: /data/out/playwright-report
+ shell: 'npx playwright test --output /data/out --shard {{ index + 1 }}/{{ count }}'
+ artifacts:
+ workingDir: /data/out
+ paths:
+ - '**/*'
+```
+
+
+
+
+![example-sharded-playwright-test-with-artifacts-logs.png](../img/example-sharded-playwright-test-with-artifacts-logs.png)
+
+
+
+
+![example-sharded-playwright-test-with-artifacts-artifacts.png](../img/example-sharded-playwright-test-with-artifacts-artifacts.png)
+
+
+
+
+### Fetching files back to execution's Pod
+
+Alternatively, you can use the `fetch` instruction. Its syntax is similar to `transfer`, but instead of copying data from the execution's Pod into the parallel instance's Pod,
+it copies the other way: from the parallel instance's Pod back to the execution's Pod.
+
+Afterward, you can process these files, e.g. to build non-isolated artifacts.
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-sharded-playwright-test-with-artifacts-fetch
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ paths:
+ - test/playwright/executor-tests/playwright-project
+ container:
+ image: mcr.microsoft.com/playwright:v1.32.3-focal
+ workingDir: /data/repo/test/playwright/executor-tests/playwright-project
+
+ steps:
+ - name: Install dependencies
+ shell: 'npm ci'
+
+ - name: Run tests
+ parallel:
+ count: 2
+ transfer:
+ - from: /data/repo
+ fetch:
+ - from: /data/out
+ to: /data/artifacts/instance-{{ index }}
+ container:
+ env:
+ - name: PLAYWRIGHT_HTML_REPORT
+ value: /data/out/playwright-report
+ shell: 'npx playwright test --output /data/out --shard {{ index + 1 }}/{{ count }}'
+
+ - condition: always
+ artifacts:
+ workingDir: /data/artifacts
+ paths:
+ - '**/*'
+```
+
+
+
+
+![example-sharded-playwright-test-with-artifacts-fetch-logs.png](../img/example-sharded-playwright-test-with-artifacts-fetch-logs.png)
+
+
+
+
+![example-sharded-playwright-test-with-artifacts-fetch-artifacts.png](../img/example-sharded-playwright-test-with-artifacts-fetch-artifacts.png)
+
+
+
+
+## Examples
+
+### Sharded Playwright with single report
+
+:::info
+
+The blob reporter and report merging landed in Playwright 1.37.0, so they are not available in earlier versions.
+
+:::
+
+Playwright provides a nice toolset for sharding, which can be used easily with Test Workflows.
+
+The example below will:
+
+* Load the Git repository with the Playwright tests (`content`)
+* Install the project dependencies (`steps[0].shell`)
+* Run the Playwright tests split to 2 shards (`steps[1].parallel`)
+ * Reserve 1 CPU and 1GB RAM for each shard (`steps[1].parallel.container.resources`)
+ * Copy the repository and `node_modules` inside (`steps[1].parallel.transfer`)
+ * Run Playwright test - with `blob` reporter, and with specific shard segment (`steps[1].parallel.shell`)
+ * Fetch the Blob reporter's data to corresponding directory on Execution's pod (`steps[1].parallel.fetch`)
+* Merge the reports using Playwright's tooling (`steps[2].shell`)
+* Save the merged report as an artifact (`steps[2].artifacts`)
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-sharded-playwright-with-merged-report
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ paths:
+ - test/playwright/executor-tests/playwright-project
+ container:
+ image: mcr.microsoft.com/playwright:v1.38.0-focal
+ workingDir: /data/repo/test/playwright/executor-tests/playwright-project
+
+ steps:
+ - name: Install dependencies
+ shell: 'npm install --save-dev @playwright/test@1.38.0 && npm ci'
+
+ - name: Run tests
+ parallel:
+ count: 2
+ transfer:
+ - from: /data/repo
+ fetch:
+ - from: /data/repo/test/playwright/executor-tests/playwright-project/blob-report
+ to: /data/reports
+ container:
+ resources:
+ requests:
+ cpu: 1
+ memory: 1Gi
+ shell: |
+ npx playwright test --reporter blob --shard {{ index + 1 }}/{{ count }}
+
+ - name: Merge reports
+ condition: always
+ shell: 'npx playwright merge-reports --reporter=html /data/reports'
+ artifacts:
+ paths:
+ - 'playwright-report/**'
+```
+
+
+
+
+![example-sharded-playwright-with-merged-report-logs.png](../img/example-sharded-playwright-with-merged-report-logs.png)
+
+
+
+
+![example-sharded-playwright-with-merged-report-artifacts.png](../img/example-sharded-playwright-with-merged-report-artifacts.png)
+
+
+
+
+### Automatically sharded Cypress tests
+
+Cypress doesn't have a built-in way to shard tests, but Test Workflows' [**matrix and sharding**](./test-workflows-matrix-and-sharding.md)
+work well with all kinds of tests.
+
+While the example here is not a perfect solution, it shards the Cypress tests based on the available test files.
+
+The example below:
+
+* Loads the Cypress tests from the Git repository (`content`)
+* Sets the working directory to the tests' directory (`container.workingDir`)
+* Installs the project dependencies (`steps[0].shell`)
+* Runs Cypress tests with dynamic sharding (`steps[1].parallel`)
+  * The shards will be built from the test files in the `cypress/e2e` directory (`steps[1].parallel.shards.testFiles`)
+  * There will be a maximum of 5 shards (`steps[1].parallel.maxCount`)
+    * When there are 5 or fewer test files, it will run 1 shard per test file
+    * When there are more than 5 test files, it will distribute them across 5 shards
+  * Each shard will run only its selected test files, using Cypress' `--spec` argument (`steps[1].parallel.run.args`)
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-sharded-cypress
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ paths:
+ - test/cypress/executor-tests/cypress-13
+ container:
+ image: cypress/included:13.6.4
+ workingDir: /data/repo/test/cypress/executor-tests/cypress-13
+
+ steps:
+ - name: Install dependencies
+ shell: 'npm ci'
+
+ - name: Run tests
+ parallel:
+ maxCount: 5
+ shards:
+ testFiles: 'glob("cypress/e2e/**/*.js")'
+ description: '{{ join(map(shard.testFiles, "relpath(_.value, \"cypress/e2e\")"), ", ") }}'
+ transfer:
+ - from: /data/repo
+ container:
+ resources:
+ requests:
+ cpu: 1
+ memory: 1Gi
+ env:
+ - name: CYPRESS_CUSTOM_ENV
+ value: CYPRESS_CUSTOM_ENV_value
+ run:
+ args:
+ - --env
+ - NON_CYPRESS_ENV=NON_CYPRESS_ENV_value
+ - --spec
+ - '{{ join(shard.testFiles, ",") }}'
+```
+
+
+
+
+![example-sharded-cypress.png](../img/example-sharded-cypress.png)
+
+
+
+
+### Distributed K6 load testing
+
+:::tip
+
+If you have multiple suites, you may consider exposing such an executor as a Test Workflow Template,
+and declaring its contents via `config` parameters. Alternatively, you can use `config` directly in the Test Workflow.
+
+:::
+
+You can simply run K6 load tests distributed across all your nodes. The mechanism is similar to what [**k6-operator**](https://grafana.com/docs/k6/latest/testing-guides/running-distributed-tests/) uses under the hood,
+but it's much more powerful and flexible.
+
+The example below:
+
+* Takes optional run configuration parameters (`config`)
+  * `vus` to declare the number of Virtual Users to distribute
+  * `duration` to declare the load test duration
+  * `workers` to declare the number of K6 instances to create
+* Loads the K6 script from the Git repository (`content`)
+* Runs distributed K6 tests (`steps[0].parallel`)
+  * It uses the built-in `distribute/evenly` Test Workflow Template, which sets [`pod.topologySpreadConstraints`](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) to distribute pods evenly across nodes (`steps[0].parallel.use`)
+  * It creates as many K6 workers as declared in the `workers` config (`steps[0].parallel.count`)
+  * It copies the test case from the Git repository into the workers (`steps[0].parallel.transfer`)
+  * It reserves 1/8 CPU and 128MB memory for each worker (`steps[0].parallel.container.resources`)
+  * It ensures that all workers start the load tests at the same time, once all are ready (`steps[0].parallel.paused`)
+  * It runs the K6 executable against that test case (`steps[0].parallel.run.shell`)
+    * It passes the number of Virtual Users and the test duration via K6 parameters
+    * It uses the K6 [**--execution-segment**](https://grafana.com/docs/k6/latest/using-k6/k6-options/reference/#execution-segment) argument to select the fraction of tests to run
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-distributed-k6
+ labels:
+ core-tests: workflows
+spec:
+ config:
+ vus: {type: integer, default: 100}
+ duration: {type: string, default: '5s'}
+ workers: {type: integer, default: 10}
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ paths:
+ - test/k6/executor-tests/k6-smoke-test.js
+
+ steps:
+ - name: Run test
+ parallel:
+ count: 'config.workers'
+ transfer:
+ - from: /data/repo
+ use:
+ - name: distribute/evenly
+ container:
+ workingDir: /data/repo/test/k6/executor-tests
+ resources:
+ requests:
+ cpu: 128m
+ memory: 128Mi
+ env:
+ - name: K6_SYSTEM_ENV
+ value: K6_SYSTEM_ENV_value
+ paused: true
+ run:
+ image: grafana/k6:0.49.0
+ shell: |
+ k6 run k6-smoke-test.js \
+ -e K6_ENV_FROM_PARAM=K6_ENV_FROM_PARAM_value \
+ --vus {{ config.vus }} \
+ --duration {{ shellquote(config.duration) }} \
+ --execution-segment {{ index }}/{{ count }}:{{ index + 1 }}/{{ count }}
+```
+
+
+
+
+![example-distributed-k6-run.png](../img/example-distributed-k6-run.png)
+
+
+
+
+![example-distributed-k6-logs.png](../img/example-distributed-k6-logs.png)
+
+
+
\ No newline at end of file
diff --git a/docs/docs/articles/test-workflows-services.md b/docs/docs/articles/test-workflows-services.md
new file mode 100644
index 00000000000..a1c6ae5a71e
--- /dev/null
+++ b/docs/docs/articles/test-workflows-services.md
@@ -0,0 +1,429 @@
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+# Test Workflows - Services
+
+If your use case is more complex, you may need additional services for the Tests you are running. Common use cases are:
+* A database, e.g. [**MongoDB**](https://hub.docker.com/_/mongo) or [**PostgreSQL**](https://hub.docker.com/r/bitnami/postgresql)
+* Workers, e.g. [**remote JMeter workers**](https://hub.docker.com/r/justb4/jmeter) or [**Selenium Grid's remote browsers**](https://hub.docker.com/r/selenium/standalone-firefox)
+* The service under test, e.g. your API, to run E2E tests against it
+
+Testkube allows you to run such services for the Test Workflow, communicate with them, and debug them smoothly.
+
+## How it works
+
+When you define a service, the Test Workflow creates a new pod and any other required resources for each instance,
+reads its status and logs, and provides its information (like the IP address) for use in further steps. After the service is no longer needed, it is cleaned up.
+
+:::info
+
+As the services are started in a separate pod, they don't share the file system with the Test Workflow execution.
+There are multiple ways to share data with them - either using [**one of the techniques described below**](#providing-content), or advanced Kubernetes-native mechanisms like [**ReadWriteMany volumes**](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
+
+:::
+
+## Syntax
+
+To add services, you need to specify the `services` clause.
+It can be placed either directly at the `spec` level (to be available for the whole execution), or on a specific step (to isolate it).
+
+:::tip
+
+You may want to use services in [Test Workflow Template](./test-workflows-examples-templates.md), to reuse them for multiple tests.
+
+:::
+
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-workflow-with-mongo-service
+spec:
+ services:
+ db:
+ timeout: 5m
+ image: mongo:latest
+ env:
+ - name: MONGO_INITDB_ROOT_USERNAME
+ value: root
+ - name: MONGO_INITDB_ROOT_PASSWORD
+ value: p4ssw0rd
+ readinessProbe:
+ tcpSocket:
+ port: 27017
+ periodSeconds: 1
+ steps:
+ - name: Check if it is running
+ run:
+ image: mongo:latest
+ shell: |
+ echo Connecting to MongoDB at {{ services.db.0.ip }}
+ mongosh -u root -p p4ssw0rd {{ services.db.0.ip }} --eval 'db.serverStatus().localTime'
+```
+
+
+
+
+![example-workflow-with-mongo-service workflow](../img/example-workflow-with-mongo-service.png)
+
+
+
+
+### Connecting to the services
+
+To connect to the created services, you can simply use the `services.<name>.<index>.ip` expression wherever you need the address (e.g. in an environment variable, or a shell command).
+
+* `services.db.0.ip` will return a `string` - the IP of the 1st instance of the `db` service
+* `services.db.*.ip` will return `[]string` - a list of IPs of all the `db` service instances
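+
+Both forms can be used, for instance, to feed environment variables of a later step. A minimal sketch, assuming a `db` service like in the example above (the variable names are illustrative):
+
+```yaml
+steps:
+  - name: Use the service addresses
+    container:
+      env:
+        - name: DB_HOST  # IP of the first instance
+          value: '{{ services.db.0.ip }}'
+        - name: DB_HOSTS # all instance IPs, joined with commas when cast to text
+          value: '{{ services.db.*.ip }}'
+    shell: 'echo "$DB_HOST" "$DB_HOSTS"'
+```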
+
+### Basic configuration
+
+The service accepts fields similar to the `run` command, e.g.:
+* `image`, `env`, `volumeMounts`, `resources` - to configure the container
+* `command`, `args` - to specify the command to run
+* `shell` - to specify a script to run (instead of `command`/`args`)
+* `description` - to provide human-readable information for each instance separately
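+
+A minimal sketch combining these fields (the service name, image and values are illustrative):
+
+```yaml
+services:
+  worker:
+    image: busybox:1.36
+    description: 'Worker {{ index + 1 }} of {{ count }}'
+    env:
+      - name: WORKER_MODE
+        value: server
+    shell: |
+      echo "Starting worker in $WORKER_MODE mode"
+      sleep infinity
+```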
+
+### Fetching logs
+
+By default, the logs of the services are not saved. If you would like to fetch them, you can use the `logs` property.
+It takes an expression condition, so you can dynamically choose whether the logs should be saved. Most often you will use:
+
+* `logs: always` to always store the logs
+* `logs: failed` to store the logs only if the Test Workflow has failed
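+
+For example, to keep the MongoDB logs only when the Test Workflow fails (a sketch based on the service from the earlier example):
+
+```yaml
+services:
+  db:
+    image: mongo:latest
+    logs: failed
+```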
+
+### Pod configuration
+
+The service is started as a separate job/pod, so you can configure `pod` and `job` similarly to the general Test Workflow configuration.
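+
+A sketch of a service with extra pod settings (the label and selector values are illustrative):
+
+```yaml
+services:
+  db:
+    image: mongo:latest
+    pod:
+      labels:
+        app: test-db
+      nodeSelector:
+        kubernetes.io/arch: amd64
+```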
+
+### Lifecycle
+
+You can apply a [**readinessProbe**](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe) to ensure that the service is available before the next step starts.
+
+The Test Workflow won't continue until the container is ready. To ensure that the execution won't get stuck, you can add a `timeout` property (like `timeout: 1h30m20s`),
+so the workflow fails if the service is not ready after that time.
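+
+Combining both, a sketch of a service that must respond over HTTP within 5 minutes (the image and port are illustrative):
+
+```yaml
+services:
+  api:
+    timeout: 5m          # fail if the service is not ready within 5 minutes
+    image: nginx:1.25.4
+    readinessProbe:
+      httpGet:
+        path: /
+        port: 80
+      periodSeconds: 1
+```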
+
+### Matrix and sharding
+
+Services support matrix and sharding, to run multiple replicas and/or distribute the load across multiple instances.
+This is configured with the regular matrix/sharding properties (`matrix`, `shards`, `count` and `maxCount`).
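+
+As a sketch, `count` starts identical replicas, while `matrix` starts one instance per combination (the service names and values are illustrative):
+
+```yaml
+services:
+  cache:
+    count: 3                    # 3 identical instances
+    image: redis:7
+  db:
+    matrix:
+      version: ['6.0', '7.0']   # one instance per listed version
+    image: 'mongo:{{ matrix.version }}'
+```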
+
+You can read more about it in the general [**Matrix and Sharding**](./test-workflows-matrix-and-sharding.md) documentation.
+
+## Providing content
+
+There are multiple ways to provide the files inside the services.
+
+:::info
+
+As the services are started in a separate pod, they don't share the file system with the Test Workflow execution.
+
+:::
+
+### Copying content inside
+
+It is possible to copy files from the original Test Workflow into the services.
+As an example, you may want to fetch the repository and install the dependencies in the original Test Workflow,
+and then distribute them to the services.
+
+To do so, you can use the `transfer` property. It takes a list of files to transfer:
+
+* `{ from: "/data/repo/build" }` will copy the `/data/repo/build` directory from the execution's Pod into `/data/repo/build` in the service's Pod
+* `{ from: "/data/repo/build", to: "/out" }` will copy the `/data/repo/build` directory from the execution's Pod into `/out` in the service's Pod
+* `{ from: "/data/repo/build", to: "/out", "files": ["**/*.json"] }` will copy only the JSON files from the `/data/repo/build` directory on the execution's Pod into `/out` in the service's Pod
+
+#### Example
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-workflow-with-building-app-and-files-transfer
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube-docs.git
+ revision: main
+ container:
+ workingDir: /data/repo
+ resources:
+ requests:
+ cpu: 1
+ memory: 2Gi
+
+ steps:
+ - name: Build the application
+ run:
+ image: node:21
+ shell: npm i && npm run build
+
+ - name: Test the application
+ services:
+ server:
+ timeout: 1m
+ transfer:
+ - from: /data/repo/build
+ to: /usr/share/nginx/html
+ image: nginx:1.25.4
+ logs: always
+ readinessProbe:
+ httpGet:
+ path: /
+ port: 80
+ periodSeconds: 1
+ steps:
+ - shell: wget -q -O - {{ services.server.0.ip }}
+```
+
+
+
+
+![example-workflow-with-building-app-and-files-transfer.png](../img/example-workflow-with-building-app-and-files-transfer.png)
+
+
+
+
+### Static content or a Git repository
+
+Services allow you to provide the `content` property, similar to the one directly in the Test Workflow. As an example, you may provide static configuration files to the service:
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-workflow-with-nginx
+spec:
+ services:
+ http:
+ timeout: 5m
+ content:
+ files:
+ - path: /etc/nginx/nginx.conf
+ content: |
+ events { worker_connections 1024; }
+ http {
+ server {
+ listen 8888;
+ location / { root /www; }
+ }
+ }
+ - path: /www/index.html
+ content: "foo-bar"
+ image: nginx:1.25.4
+ readinessProbe:
+ httpGet:
+ path: /
+ port: 8888
+ periodSeconds: 1
+ steps:
+ - shell: wget -q -O - {{ services.http.0.ip }}:8888
+```
+
+
+
+
+![example-workflow-with-nginx.png](../img/example-workflow-with-nginx.png)
+
+
+
+
+## Examples
+
+### JMeter with distributed Remote Workers
+
+You can easily run JMeter with distributed remote workers, which can even be spread evenly across all the Kubernetes nodes.
+
+The example below:
+
+* Reads the JMX configuration from the Git repository (`spec.content.git`)
+* Starts 5 remote workers (`spec.services.slave.count`)
+  * Distributes them evenly across nodes (`spec.services.slave.use[0]` - the `distribute/evenly` template sets common [`pod.topologySpreadConstraints`](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/))
+  * Reserves 1/8 CPU and 128MB memory for each instance (`spec.services.slave.container.resources`)
+  * Waits until they accept connections on port 1099 (`spec.services.slave.readinessProbe`)
+* Runs the JMeter controller against all the remote workers (`spec.steps[0].run`)
+  * It uses `{{ services.slave.*.ip }}` as an argument - `services.slave.*.ip` returns a list of IPs, which are joined with commas (`,`) when converted to text
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: distributed-jmeter-example
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ revision: main
+ paths:
+ - test/jmeter/executor-tests/jmeter-executor-smoke.jmx
+ container:
+ workingDir: /data/repo/test/jmeter/executor-tests
+ services:
+ slave:
+ use:
+ - name: distribute/evenly
+ count: 5
+ timeout: 30s
+ image: justb4/jmeter:5.5
+ command:
+ - jmeter-server
+ - -Dserver.rmi.localport=60000
+ - -Dserver_port=1099
+ - -Jserver.rmi.ssl.disable=true
+ container:
+ resources:
+ requests:
+ cpu: 128m
+ memory: 128Mi
+ readinessProbe:
+ tcpSocket:
+ port: 1099
+ periodSeconds: 1
+ steps:
+ - name: Run tests
+ run:
+ image: justb4/jmeter:5.5
+ shell: |
+ jmeter -n \
+ -X -Jserver.rmi.ssl.disable=true -Jclient.rmi.localport=7000 \
+ -R {{ services.slave.*.ip }} \
+ -t jmeter-executor-smoke.jmx
+```
+
+### Selenium tests with multiple remote browsers
+
+You can initialize multiple remote browsers, and then [**run tests against them in parallel**](./test-workflows-parallel.md).
+
+The example below:
+
+* Clones the test code (`content`)
+* Starts 3 instances of the `remote` service (`services.remote`)
+  * Each instance uses a different browser (the `image` of `services.remote.matrix.browser` is passed to `services.remote.image`)
+  * Each instance exposes the driver name in its description (the `driver` of `services.remote.matrix.browser` is passed to `services.remote.description`)
+  * Waits until the browser is ready to accept connections (`services.remote.readinessProbe`)
+  * Always saves the browser logs (`services.remote.logs`)
+* Runs tests in parallel for each of the browsers (`steps[0].parallel`)
+  * Runs one worker for each `remote` service instance (`steps[0].parallel.matrix.browser`)
+  * Transfers the code from the repository to the parallel step (`steps[0].parallel.transfer`)
+  * Sets the environment variables based on the service instance's description and IP (`steps[0].parallel.container.env`)
+  * Runs the tests (`steps[0].parallel.shell`)
+
+
+
+
+```yaml
+kind: TestWorkflow
+apiVersion: testworkflows.testkube.io/v1
+metadata:
+ name: selenium-remote-browsers-example
+spec:
+ content:
+ git:
+ uri: https://github.com/cerebro1/selenium-testkube.git
+ paths:
+ - selenium-java
+ services:
+ remote:
+ matrix:
+ browser:
+ - driver: chrome
+ image: selenium/standalone-chrome:4.21.0-20240517
+ - driver: edge
+ image: selenium/standalone-edge:4.21.0-20240517
+ - driver: firefox
+ image: selenium/standalone-firefox:4.21.0-20240517
+ logs: always
+ image: "{{ matrix.browser.image }}"
+ description: "{{ matrix.browser.driver }}"
+ readinessProbe:
+ httpGet:
+ path: /wd/hub/status
+ port: 4444
+ periodSeconds: 1
+ steps:
+ - name: Run cross-browser tests
+ parallel:
+ matrix:
+ browser: 'services.remote'
+ transfer:
+ - from: /data/repo/selenium-java
+ container:
+ workingDir: /data/repo/selenium-java
+ image: maven:3.9.6-eclipse-temurin-22-alpine
+ env:
+ - name: SELENIUM_BROWSER
+ value: '{{ matrix.browser.description }}'
+ - name: SELENIUM_HOST
+ value: '{{ matrix.browser.ip }}:4444'
+ shell: mvn test
+```
+
+
+
+
+![selenium-remote-browsers-example.png](../img/selenium-remote-browsers-example.png)
+
+
+
+
+### Run database for integration tests
+
+To test an application, you often want to check that it works well with the external components too.
+As an example, unit tests won't catch a syntax error in an SQL query, or deadlocks in the process, unless you run them against an actual database.
+
+The example below:
+
+* Starts a single MongoDB instance as the `db` service (`services.db`)
+  * Configures the initial credentials to `root`/`p4ssw0rd` (`services.db.env`)
+  * Waits until MongoDB accepts connections (`services.db.readinessProbe`)
+* Runs the integration tests (`steps[0].run`)
+  * Configures the `API_MONGO_DSN` environment variable to point to MongoDB (`steps[0].run.env[0]`)
+  * Installs local dependencies and runs the tests (`steps[0].run.shell`)
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: database-service-example
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube.git
+ revision: develop
+ services:
+ db:
+ image: mongo:latest
+ env:
+ - name: MONGO_INITDB_ROOT_USERNAME
+ value: root
+ - name: MONGO_INITDB_ROOT_PASSWORD
+ value: p4ssw0rd
+ readinessProbe:
+ tcpSocket:
+ port: 27017
+ periodSeconds: 1
+ container:
+ workingDir: /data/repo
+ steps:
+ - name: Run integration tests
+ run:
+ image: golang:1.22.3-bookworm
+ env:
+ - name: API_MONGO_DSN
+ value: mongodb://root:p4ssw0rd@{{services.db.0.ip}}:27017
+ shell: |
+ apt-get update
+ apt-get install -y ca-certificates libssl3 git skopeo
+ go install gotest.tools/gotestsum@v1.9.0
+
+ INTEGRATION=y gotestsum --format short-verbose -- -count 1 -run _Integration -cover ./pkg/repository/...
+```
\ No newline at end of file
diff --git a/docs/docs/articles/test-workflows-test-suites.md b/docs/docs/articles/test-workflows-test-suites.md
new file mode 100644
index 00000000000..222eafffebc
--- /dev/null
+++ b/docs/docs/articles/test-workflows-test-suites.md
@@ -0,0 +1,211 @@
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+# Test Workflows - Test Suites
+
+With Test Workflows it is possible to run downstream Test Workflows and Tests with the `execute` operation,
+similar to what you can do in Test Suites.
+
+## Advantages over the original Test Suites
+
+:::tip
+
+We consider Test Workflows as a long-term solution, so keep in mind that the original Test Suites will [**become deprecated**](https://testkube.io/blog/the-future-of-testkube-with-test-workflows).
+
+:::
+
+As it is a regular Test Workflow, where a single step dispatches the downstream Test Workflows and Tests,
+the execution is very flexible. You can:
+
+* Fetch input data beforehand (e.g. by using `curl`/`wget` to download data, or by fetching a Git repository)
+* Run setup operations (e.g. start a shared database instance, or generate an API key)
+* Process the results (e.g. by sending a status notification)
+* Run other tests based on the previous results
+
+## Syntax
+
+You have to use the `execute` operation in a step, and provide the definition of the Test Workflows and Tests to run.
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-test-suite
+spec:
+ steps:
+ - execute:
+ workflows:
+ - name: example-distributed-k6
+ description: Run {{ index + 1 }} of {{ count }}
+ count: 2
+ config:
+ vus: 8
+ duration: 1s
+ workers: 2
+ - name: example-sharded-cypress
+ tests:
+ - name: example-test
+ description: Example without request
+ - name: example-test
+ description: Example with env variables
+ executionRequest:
+ variables:
+ SOME_VARIABLE:
+ type: basic
+ name: SOME_VARIABLE
+ value: some-value
+```
+
+
+
+
+![example-test-suite.png](../img/example-test-suite.png)
+
+
+
+
+### Running Test Workflows
+
+To run a Test Workflow as part of the `execute` step, you have to add a reference to it in the `workflows` list.
+
+You need to provide its `name`, along with optional `config` values for parametrization.
+
+### Running Tests
+
+:::tip
+
+We consider Test Workflows as a long-term solution, so keep in mind that the Tests will [**become deprecated**](https://testkube.io/blog/the-future-of-testkube-with-test-workflows).
+
+:::
+
+To run Tests as part of the `execute` step, you have to add references to them in the `tests` list.
+
+You need to provide the `name`, along with optional `executionRequest` values for parametrization,
+which are similar to the regular Test execution request.
+
+### Controlling the concurrency level
+
+You can use the `parallelism` property to control how many Test Workflows and Tests will run at once.
+
+For example, to run all the downstream jobs sequentially, you can use `parallelism: 1`.
+It also affects jobs instantiated by [**matrix and sharding**](./test-workflows-matrix-and-sharding.md) properties (like `count`).
+
+
+
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-sequential-test-suite
+spec:
+ steps:
+ - execute:
+ parallelism: 1
+ workflows:
+ - name: example-distributed-k6
+ count: 2
+ config:
+ vus: 8
+ duration: 1s
+ workers: 2
+ - name: example-sharded-cypress
+ tests:
+ - name: example-test
+ count: 5
+```
+
+
+
+
+![example-sequential-test-suite.png](../img/example-sequential-test-suite.png)
+
+
+
+
+## Passing input from files
+
+You may need to pass information from the file system. You can either pass files using Test Workflow expressions (like `file("./file-content.txt")`), or use the `tarball` syntax.
+
+### Specific files
+
+You can easily use Test Workflow expressions to read files and send them as a configuration variable:
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-test-suite-with-file-input
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ revision: main
+ paths:
+ - test/k6/executor-tests/k6-smoke-test-without-envs.js
+ steps:
+ - execute:
+ workflows:
+ - name: example-distributed-k6
+ config:
+ vus: 8
+ duration: 1s
+ workers: 2
+ script: '{{ file("/data/repo/test/k6/executor-tests/k6-smoke-test-without-envs.js") }}'
+```
+
+### Multiple files transfer
+
+To transfer multiple files, similarly to `transfer` in [**Parallel Steps**](./test-workflows-parallel.md#copying-content-inside),
+you can use the `tarball` syntax, which packs the selected files and returns a URL to download them:
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-test-suite-with-file-input-packaged
+spec:
+ content:
+ git:
+ uri: https://github.com/kubeshop/testkube
+ revision: main
+ paths:
+ - test/k6/executor-tests/k6-smoke-test-without-envs.js
+ steps:
+ - execute:
+ workflows:
+ - name: example-test-reading-files
+ tarball:
+ scripts:
+ from: /data/repo
+ config:
+ input: '{{ tarball.scripts.url }}'
+```
+
+You can later use e.g. `content.tarball` to unpack them in the destination test:
+
+```yaml
+apiVersion: testworkflows.testkube.io/v1
+kind: TestWorkflow
+metadata:
+ name: example-test-reading-files
+spec:
+ config:
+ input: {type: string}
+ content:
+ tarball:
+ - url: "{{ config.input }}" # extract provided tarball
+ path: "/data/repo" # to local /data/repo directory (or any other)
+ steps:
+ - shell: tree /data/repo
+```
+
+### Matrix and sharding
+
+The `execute` operation supports matrix and sharding, to run multiple replicas and/or distribute the load across multiple runs.
+This is configured with the regular matrix/sharding properties (`matrix`, `shards`, `count` and `maxCount`) for each Test Workflow or Test reference.
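+
+A sketch reusing the workflows from the earlier examples - `matrix` runs one execution per combination of values, while `count` simply replicates the execution:
+
+```yaml
+steps:
+  - execute:
+      workflows:
+        - name: example-distributed-k6
+          matrix:
+            duration: ['5s', '10s']  # one execution per value
+          config:
+            duration: '{{ matrix.duration }}'
+        - name: example-sharded-cypress
+          count: 3                   # 3 executions of the same workflow
+```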
+
+You can read more about it in the general [**Matrix and Sharding**](./test-workflows-matrix-and-sharding.md) documentation.
\ No newline at end of file
diff --git a/docs/docs/img/example-distributed-k6-logs.png b/docs/docs/img/example-distributed-k6-logs.png
new file mode 100644
index 00000000000..e8fa5def4b7
Binary files /dev/null and b/docs/docs/img/example-distributed-k6-logs.png differ
diff --git a/docs/docs/img/example-distributed-k6-run.png b/docs/docs/img/example-distributed-k6-run.png
new file mode 100644
index 00000000000..54c860583bf
Binary files /dev/null and b/docs/docs/img/example-distributed-k6-run.png differ
diff --git a/docs/docs/img/example-parallel-log.png b/docs/docs/img/example-parallel-log.png
new file mode 100644
index 00000000000..69106c52eb2
Binary files /dev/null and b/docs/docs/img/example-parallel-log.png differ
diff --git a/docs/docs/img/example-sequential-test-suite.png b/docs/docs/img/example-sequential-test-suite.png
new file mode 100644
index 00000000000..4bea3247168
Binary files /dev/null and b/docs/docs/img/example-sequential-test-suite.png differ
diff --git a/docs/docs/img/example-sharded-cypress.png b/docs/docs/img/example-sharded-cypress.png
new file mode 100644
index 00000000000..bf463b5a549
Binary files /dev/null and b/docs/docs/img/example-sharded-cypress.png differ
diff --git a/docs/docs/img/example-sharded-playwright-test-with-artifacts-artifacts.png b/docs/docs/img/example-sharded-playwright-test-with-artifacts-artifacts.png
new file mode 100644
index 00000000000..de04f9f1966
Binary files /dev/null and b/docs/docs/img/example-sharded-playwright-test-with-artifacts-artifacts.png differ
diff --git a/docs/docs/img/example-sharded-playwright-test-with-artifacts-fetch-artifacts.png b/docs/docs/img/example-sharded-playwright-test-with-artifacts-fetch-artifacts.png
new file mode 100644
index 00000000000..71bb01fcd2d
Binary files /dev/null and b/docs/docs/img/example-sharded-playwright-test-with-artifacts-fetch-artifacts.png differ
diff --git a/docs/docs/img/example-sharded-playwright-test-with-artifacts-fetch-logs.png b/docs/docs/img/example-sharded-playwright-test-with-artifacts-fetch-logs.png
new file mode 100644
index 00000000000..37a553582a1
Binary files /dev/null and b/docs/docs/img/example-sharded-playwright-test-with-artifacts-fetch-logs.png differ
diff --git a/docs/docs/img/example-sharded-playwright-test-with-artifacts-logs.png b/docs/docs/img/example-sharded-playwright-test-with-artifacts-logs.png
new file mode 100644
index 00000000000..07620cd4f60
Binary files /dev/null and b/docs/docs/img/example-sharded-playwright-test-with-artifacts-logs.png differ
diff --git a/docs/docs/img/example-sharded-playwright-test.png b/docs/docs/img/example-sharded-playwright-test.png
new file mode 100644
index 00000000000..d30a61286de
Binary files /dev/null and b/docs/docs/img/example-sharded-playwright-test.png differ
diff --git a/docs/docs/img/example-sharded-playwright-with-merged-report-artifacts.png b/docs/docs/img/example-sharded-playwright-with-merged-report-artifacts.png
new file mode 100644
index 00000000000..4a51854f65a
Binary files /dev/null and b/docs/docs/img/example-sharded-playwright-with-merged-report-artifacts.png differ
diff --git a/docs/docs/img/example-sharded-playwright-with-merged-report-logs.png b/docs/docs/img/example-sharded-playwright-with-merged-report-logs.png
new file mode 100644
index 00000000000..da3f3427a70
Binary files /dev/null and b/docs/docs/img/example-sharded-playwright-with-merged-report-logs.png differ
diff --git a/docs/docs/img/example-test-suite.png b/docs/docs/img/example-test-suite.png
new file mode 100644
index 00000000000..3a2973f80bc
Binary files /dev/null and b/docs/docs/img/example-test-suite.png differ
diff --git a/docs/docs/img/example-workflow-sync-lock.png b/docs/docs/img/example-workflow-sync-lock.png
new file mode 100644
index 00000000000..90c65536e17
Binary files /dev/null and b/docs/docs/img/example-workflow-sync-lock.png differ
diff --git a/docs/docs/img/example-workflow-with-building-app-and-files-transfer.png b/docs/docs/img/example-workflow-with-building-app-and-files-transfer.png
new file mode 100644
index 00000000000..96305d49e13
Binary files /dev/null and b/docs/docs/img/example-workflow-with-building-app-and-files-transfer.png differ
diff --git a/docs/docs/img/example-workflow-with-mongo-service.png b/docs/docs/img/example-workflow-with-mongo-service.png
new file mode 100644
index 00000000000..9236631b65d
Binary files /dev/null and b/docs/docs/img/example-workflow-with-mongo-service.png differ
diff --git a/docs/docs/img/example-workflow-with-nginx.png b/docs/docs/img/example-workflow-with-nginx.png
new file mode 100644
index 00000000000..efb06376037
Binary files /dev/null and b/docs/docs/img/example-workflow-with-nginx.png differ
diff --git a/docs/docs/img/selenium-remote-browsers-example.png b/docs/docs/img/selenium-remote-browsers-example.png
new file mode 100644
index 00000000000..e54e906df06
Binary files /dev/null and b/docs/docs/img/selenium-remote-browsers-example.png differ
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 7e099fcb916..c572c1f1ac6 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -62,8 +62,12 @@ const sidebars = {
"articles/test-workflow-templates",
"articles/test-workflows-examples-basics",
"articles/test-workflows-examples-configuration",
- "articles/test-workflows-examples-expressions",
+ "articles/test-workflows-expressions",
"articles/test-workflows-examples-templates",
+ "articles/test-workflows-test-suites",
+ "articles/test-workflows-parallel",
+ "articles/test-workflows-services",
+ "articles/test-workflows-matrix-and-sharding",
],
},
{
diff --git a/pkg/tcl/expressionstcl/stdlib.go b/pkg/tcl/expressionstcl/stdlib.go
index 46ab8d1bfea..38fa662a98c 100644
--- a/pkg/tcl/expressionstcl/stdlib.go
+++ b/pkg/tcl/expressionstcl/stdlib.go
@@ -495,7 +495,7 @@ var stdFunctions = map[string]StdFunction{
ReturnType: TypeString,
Handler: func(value ...StaticValue) (Expression, error) {
if len(value) != 1 && len(value) != 2 {
- return nil, fmt.Errorf(`"relpath" function expects 1-2 arguments, %d provided`, len(value))
+ return nil, fmt.Errorf(`"abspath" function expects 1-2 arguments, %d provided`, len(value))
}
destinationPath, _ := value[0].StringValue()
if filepath.IsAbs(destinationPath) {