
Commit

update document
Signed-off-by: lvliang-intel <[email protected]>
lvliang-intel committed Aug 16, 2024
1 parent aede338 commit 680c8d6
Showing 1 changed file with 3 additions and 5 deletions.
8 changes: 3 additions & 5 deletions evals/benchmark/README.md
@@ -1,6 +1,6 @@
# OPEA Benchmark Tool

-This Tool provides a microservices benchmarking framework that uses YAML configurations to define test cases for different services. It executes these tests using `stresscli`, which is built on top of `locust`, and logs the results for performance analysis and data virsualization.
+This Tool provides a microservices benchmarking framework that uses YAML configurations to define test cases for different services. It executes these tests using `stresscli`, built on top of [locust](https://github.com/locustio/locust), a performance/load testing tool for HTTP and other protocols, and logs the results for performance analysis and data visualization.

## Features

@@ -19,7 +19,6 @@ This Tool provides a microservices benchmarking framework that uses YAML configu
- [Test Cases](#test-cases)



## Installation

### Prerequisites
@@ -41,14 +40,13 @@ pip install -r ../../requirements.txt
python benchmark.py
```

-The results will be stored in the directory specified by test_output_dir in the configuration.
+The results will be stored in the directory specified by `test_output_dir` in the configuration.
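For orientation, a minimal sketch of how `test_output_dir` might appear in the suite configuration; the surrounding structure and the example path are illustrative assumptions, not taken from this diff:

```yaml
test_suite_config:
  # Hypothetical path; benchmark results are written under this directory.
  test_output_dir: "/home/user/benchmark_output"
```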


## Configuration

The benchmark.yaml file defines the test suite and individual test cases. Below are the primary sections:


### Test Suite Configuration

```yaml
@@ -65,7 +63,7 @@ test_suite_config:
### Test Cases
-Each test case includes multiple services, each of which can be toggled on/off using the run_test flag. You can also define specific parameters for each service.
+Each test case includes multiple services, each of which can be toggled on/off using the `run_test` flag. You can also change specific parameters for each service for performance tuning.
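As a hedged illustration of the toggle described above, a per-service entry might look like the sketch below; the service and parameter names are hypothetical, not taken from this diff:

```yaml
test_cases:
  chatqna:
    embedding:
      run_test: true   # toggle this service's test on/off
      # Hypothetical tuning parameter for this service
      parameters:
        batch_size: 8
```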

Example test case configuration for `chatqna`:

