diff --git a/README.md b/README.md
index 2d9c04513b..e4986db0cf 100644
--- a/README.md
+++ b/README.md
@@ -20,11 +20,13 @@ It includes the following pipelines:
 2. Text generation samples that support most popular models like LLaMA 2:
    - Python:
      1. [beam_search_causal_lm](./samples/python/beam_search_causal_lm/README.md)
+     1. [benchmark_genai](./samples/python/benchmark_genai/README.md)
      2. [chat_sample](./samples/python/chat_sample/README.md)
      3. [greedy_causal_lm](./samples/python/greedy_causal_lm/README.md)
      4. [multinomial_causal_lm](./samples/python/multinomial_causal_lm/README.md)
    - C++:
      1. [beam_search_causal_lm](./samples/cpp/beam_search_causal_lm/README.md)
+     1. [benchmark_genai](./samples/cpp/benchmark_genai/README.md)
      2. [chat_sample](./samples/cpp/chat_sample/README.md)
      3. [continuous_batching_accuracy](./samples/cpp/continuous_batching_accuracy)
      4. [continuous_batching_benchmark](./samples/cpp/continuous_batching_benchmark)
diff --git a/samples/cpp/benchmark_genai/README.md b/samples/cpp/benchmark_genai/README.md
index 616bb6a36d..1a46db05d9 100644
--- a/samples/cpp/benchmark_genai/README.md
+++ b/samples/cpp/benchmark_genai/README.md
@@ -16,7 +16,7 @@ optimum-cli export openvino --trust-remote-code --model TinyLlama/TinyLlama-1.1B
 ## Usage
 
 ```sh
-benchmark_vanilla_genai [OPTIONS]
+benchmark_genai [OPTIONS]
 ```
 
 ### Options
@@ -31,7 +31,7 @@ benchmark_vanilla_genai [OPTIONS]
 ### Output:
 
 ```
-benchmark_vanilla_genai -m TinyLlama-1.1B-Chat-v1.0 -n 10
+benchmark_genai -m TinyLlama-1.1B-Chat-v1.0 -n 10
 ```
 
 ```