commit bb1113c (1 parent 7cab496)
refactor structure, add python sample
Showing 15 changed files with 279 additions and 154 deletions.
@@ -1,2 +1,3 @@
# benchmark OpenVINO GenAI sample

TODO: adapt from python sample to c++
@@ -0,0 +1,66 @@
# Benchmark Vanilla GenAI

This sample script demonstrates how to benchmark an LLM in OpenVINO GenAI. The script includes functionality for warm-up iterations, generating text, and calculating various performance metrics.

## ov.genai.PerfMetrics structure
ov.genai.PerfMetrics is a structure that holds performance metrics for each generate call. Each generate call calculates the following metrics:
- mean_ttft
- std_ttft
- mean_tpot
- std_tpot
- load_time
- mean_generate_duration
- std_generate_duration
- mean_tokenization_duration
- std_tokenization_duration
- mean_detokenization_duration
- std_detokenization_duration
- mean_throughput
- std_throughput
- num_generated_tokens
- num_input_tokens

Performance metrics can be added to one another and accumulated using the `+` or `+=` operator. In that case the mean and standard-deviation values are computed over all of the accumulated generate calls, as in the sketch below.
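For illustration, here is a minimal sketch of this accumulation pattern. It assumes the model has already been exported to `TinyLlama-1.1B-Chat-v1.0/` as described in the next section, and the attribute and operator usage mirrors the sample script `benchmark_vanilla_genai.py` shown further below:

```python
import openvino_genai as ov_genai

# Assumes an exported model directory; see the download/convert section below.
pipe = ov_genai.LLMPipeline("TinyLlama-1.1B-Chat-v1.0", "CPU")

config = ov_genai.GenerationConfig()
config.max_new_tokens = 20

prompt = ["The Sky is blue because"]

# `+` returns a new accumulated metrics object; `+=` accumulates in place.
metrics = pipe.generate(prompt, config).metrics + pipe.generate(prompt, config).metrics
metrics += pipe.generate(prompt, config).metrics

# Mean and std values are now computed over all three generate calls.
print(f"TPOT: {metrics.mean_tpot:.2f} ± {metrics.std_tpot:.2f} ms")
```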

## Download and convert the model and tokenizers

The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.

It's not required to install [../../requirements.txt](../../requirements.txt) for deployment if the model has already been exported.

```sh
pip install --upgrade-strategy eager -r ../../requirements.txt
optimum-cli export openvino --trust-remote-code --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama-1.1B-Chat-v1.0
```

## Usage

```sh
python benchmark_vanilla_genai.py [OPTIONS]
```

### Options

- `-m, --model`: Path to the model and tokenizers base directory.
- `-p, --prompt` (default: `"The Sky is blue because"`): The prompt to generate text from.
- `-nw, --num_warmup` (default: `1`): Number of warmup iterations.
- `-mt, --max_new_tokens` (default: `20`): Maximal number of new tokens to generate.
- `-n, --num_iter` (default: `3`): Number of iterations.
- `-d, --device` (default: `"CPU"`): Device to run the model on.

### Output

```sh
python benchmark_vanilla_genai.py -m TinyLlama-1.1B-Chat-v1.0/
```

```
Load time: 3446 ms
Generate time: 876.2 ± 3.30719 ms
Tokenization time: 0 ± 0 ms
Detokenization time: 0 ± 0 ms
ttft: 168 ± 0 ms
tpot: 174.68 ± 4.08671 ms
Tokens/s: 5.72475 ± 0.133933
```

In this example the reported throughput is consistent with the per-token latency: 1000 ms / 174.68 ms per token ≈ 5.72 tokens/s.
50 changes: 50 additions & 0 deletions
samples/python/benchmark_vanilla_genai/benchmark_vanilla_genai.py
@@ -0,0 +1,50 @@
# Copyright (C) 2023-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import argparse
import openvino_genai as ov_genai

def main():
    parser = argparse.ArgumentParser(description="Help command")
    parser.add_argument("-m", "--model", type=str, help="Path to model and tokenizers base directory")
    parser.add_argument("-p", "--prompt", type=str, default="The Sky is blue because", help="Prompt")
    parser.add_argument("-nw", "--num_warmup", type=int, default=1, help="Number of warmup iterations")
    parser.add_argument("-n", "--num_iter", type=int, default=3, help="Number of iterations")
    parser.add_argument("-mt", "--max_new_tokens", type=int, default=20, help="Maximal number of new tokens")
    parser.add_argument("-d", "--device", type=str, default="CPU", help="Device")

    args = parser.parse_args()

    prompt = [args.prompt]
    model_path = args.model
    device = args.device
    num_warmup = args.num_warmup
    num_iter = args.num_iter

    config = ov_genai.GenerationConfig()
    config.max_new_tokens = args.max_new_tokens

    pipe = ov_genai.LLMPipeline(model_path, device)

    # Warm-up calls are excluded from the reported metrics.
    for _ in range(num_warmup):
        pipe.generate(prompt, config)

    # Accumulate metrics over num_iter measured generate calls.
    res = pipe.generate(prompt, config)
    metrics = res.metrics
    for _ in range(num_iter - 1):
        res = pipe.generate(prompt, config)
        metrics += res.metrics

    print(f"Load time: {metrics.load_time} ms")
    print(f"Generate time: {metrics.mean_generate_duration:.2f} ± {metrics.std_generate_duration:.2f} ms")
    print(f"Tokenization time: {metrics.mean_tokenization_duration:.2f} ± {metrics.std_tokenization_duration:.2f} ms")
    print(f"Detokenization time: {metrics.mean_detokenization_duration:.2f} ± {metrics.std_detokenization_duration:.2f} ms")
    print(f"TTFT: {metrics.mean_ttft:.2f} ± {metrics.std_ttft:.2f} ms")
    print(f"TPOT: {metrics.mean_tpot:.2f} ± {metrics.std_tpot:.2f} ms")
    print(f"Throughput tokens/s: {metrics.mean_throughput:.2f} ± {metrics.std_throughput:.2f}")

if __name__ == "__main__":
    main()