[GHA] Samples tests #1374

Draft · wants to merge 151 commits into base: master

Commits (151)
708dad9
init
mryzhov Nov 26, 2024
2c90ed9
merge conflict fix
mryzhov Nov 26, 2024
af47b59
disabled old pipelines
mryzhov Nov 26, 2024
3ee6e81
removed staging from runner names
mryzhov Nov 26, 2024
a8b20ae
renamed runners
mryzhov Nov 26, 2024
f63cd73
fixed cmake build
mryzhov Nov 26, 2024
6d7aa2b
staging runners
mryzhov Nov 26, 2024
9fe56d2
fixed build wheel
mryzhov Nov 26, 2024
2a14614
fixed tokenizer wheels
mryzhov Nov 26, 2024
8c7ad47
fixed tests
mryzhov Nov 26, 2024
80814c7
Revert "fixed tests"
mryzhov Nov 26, 2024
2fcdd5f
fixed artifact names
mryzhov Nov 26, 2024
1808d0c
test reqs
mryzhov Nov 26, 2024
c2d13c5
set working dir for the tests
mryzhov Nov 27, 2024
5923e1e
move smoke tests to the build
mryzhov Nov 27, 2024
2d9f81e
Smoke wheel tests
mryzhov Nov 27, 2024
826748a
split archive and wheel testing
mryzhov Nov 27, 2024
3af630b
freeze ov commit
mryzhov Nov 27, 2024
b83a039
set LD_LIBRARY_PATH in smoke tests
mryzhov Nov 27, 2024
489716e
update Overall_Status
mryzhov Nov 27, 2024
e3bcdb0
revert ov revision
mryzhov Nov 27, 2024
010ff08
fixed wheel install
mryzhov Nov 27, 2024
680e8d7
set PYTHONPATH for tokenizers
mryzhov Nov 27, 2024
dd48290
reduce test smoke reqs
mryzhov Nov 27, 2024
42777ea
+numpy
mryzhov Nov 27, 2024
ab57dc2
uninstall openvino before tests
mryzhov Nov 27, 2024
6a3b634
samples build
mryzhov Nov 27, 2024
c20e24f
fixed ci tests
mryzhov Nov 27, 2024
1c39e2a
changed instalaltuion order
mryzhov Nov 27, 2024
a562723
Extract Artifacts
mryzhov Nov 27, 2024
c9a0323
fixed the name of archive
mryzhov Nov 27, 2024
b8e55f6
fixed packaging command
mryzhov Nov 27, 2024
5071278
changed tarball install dir
mryzhov Nov 27, 2024
5709f57
use install dir for packaging
mryzhov Nov 27, 2024
34b063e
fixed wheel install
mryzhov Nov 27, 2024
e7165f8
fixed yaml syntax
mryzhov Nov 27, 2024
64358ac
GenAI Samples Tests
mryzhov Nov 27, 2024
d7f9d99
debug code
mryzhov Nov 27, 2024
863d2df
mkdir -p model path
mryzhov Nov 27, 2024
e8ff16f
fixed MODELS_DIR
mryzhov Nov 28, 2024
52499e8
fixed wheels dir
mryzhov Nov 28, 2024
bedf7af
set common env
mryzhov Nov 28, 2024
2bcc3dd
removed unnecessary ifs
mryzhov Nov 28, 2024
60d70fd
cleanup dev code
mryzhov Nov 28, 2024
818f3fd
add wheels subdir
mryzhov Nov 28, 2024
6ee6237
explicitly install tokenizers
mryzhov Nov 28, 2024
7b6618d
wget -O
mryzhov Nov 28, 2024
849525f
debug support
mryzhov Nov 28, 2024
3262a1c
fixed samples artifact name
mryzhov Nov 28, 2024
f2368e4
install wheels in samples tests
mryzhov Nov 28, 2024
a8f8936
samples build fix
mryzhov Nov 28, 2024
965ab30
changed artifacts pattern
mryzhov Nov 28, 2024
a0d49a9
fixed typo
mryzhov Nov 28, 2024
c6ef86f
simplified names
mryzhov Nov 28, 2024
aecf95e
fixed artifact pattern
mryzhov Nov 28, 2024
9940843
test ov nightly
mryzhov Nov 28, 2024
eba1547
revert ov commit
mryzhov Nov 28, 2024
a312540
set options for download artifacts action
mryzhov Nov 28, 2024
6496d56
test ov commit
mryzhov Nov 29, 2024
07e9b93
Revert "test ov commit"
mryzhov Nov 29, 2024
1c8081f
Align naming
mryzhov Nov 29, 2024
ded64c8
test ov comit
mryzhov Nov 29, 2024
3cd2815
test provider fix
mryzhov Nov 29, 2024
aec4f55
reverty w/a
mryzhov Nov 29, 2024
f5e3670
use anothe ov commit
mryzhov Nov 29, 2024
97150ce
revert to latest_available_commit
mryzhov Nov 29, 2024
e319f62
review changes
mryzhov Nov 29, 2024
c825fa3
reduce steps
mryzhov Nov 29, 2024
9c7e5f6
fixed typo
mryzhov Nov 29, 2024
9e09337
fixed closing )
mryzhov Nov 29, 2024
1ec7362
revert openvino_download action
mryzhov Nov 29, 2024
0018248
fixed test args
mryzhov Nov 29, 2024
d7544b3
improved test matrix
mryzhov Nov 29, 2024
7180cee
increase debuging
mryzhov Dec 2, 2024
5277560
Update .github/workflows/linux.yml
mryzhov Nov 29, 2024
edf5507
minor stylr changes
mryzhov Dec 2, 2024
8bd4e90
removed stagings
mryzhov Dec 2, 2024
e888e13
split tests by markers
mryzhov Dec 2, 2024
8aa7db3
fixed test call
mryzhov Dec 2, 2024
5acba89
register the new marker
mryzhov Dec 2, 2024
cc79b19
changed filter
mryzhov Dec 2, 2024
e769df7
Revert "register the new marker"
mryzhov Dec 2, 2024
dfc6476
removed @pytest.mark.suite_1
mryzhov Dec 2, 2024
12ec98f
using name filtering
mryzhov Dec 2, 2024
0b20ef8
exclude crashed test
mryzhov Dec 2, 2024
eb050da
limit numpy verion
mryzhov Dec 3, 2024
0e2b1bc
test all test_chat_generate_api
mryzhov Dec 3, 2024
c55f1f5
removed build samples job
mryzhov Dec 3, 2024
1e981c7
exclude Qwen2-0.5B-Instruct failed tests
mryzhov Dec 3, 2024
decc3af
fixed artifact packaging
mryzhov Dec 3, 2024
73609dd
fixed install samples dir
mryzhov Dec 3, 2024
b10b738
changed step name
mryzhov Dec 3, 2024
bc2f6c5
Revert "removed build samples job"
mryzhov Dec 3, 2024
1ac669c
revert samples tests
mryzhov Dec 3, 2024
320a88d
fixed download artifact pattern
mryzhov Dec 3, 2024
13e7f4c
split artifcats
mryzhov Dec 3, 2024
b851f74
Revert "split artifcats"
mryzhov Dec 3, 2024
ac41c0e
split filter
mryzhov Dec 3, 2024
2c8941c
remove quotes
mryzhov Dec 3, 2024
d70bd9d
do not split pattern
mryzhov Dec 3, 2024
df957c7
fixed yaml syntax
mryzhov Dec 3, 2024
9ea1374
mkdir -p ${{ env.OV_INSTALL_DIR }}
mryzhov Dec 3, 2024
cbe17c0
fixed samples dir
mryzhov Dec 3, 2024
f478f68
optimize downloading artifacts
mryzhov Dec 4, 2024
64b87b5
added samples test wrapper
mryzhov Dec 4, 2024
2ff73e8
group samples tests
mryzhov Dec 4, 2024
c6e8d25
install test deps
mryzhov Dec 4, 2024
e7d2412
changed samples test path
mryzhov Dec 4, 2024
daa1e7b
add the new markers
mryzhov Dec 4, 2024
2fbaf65
fixed samples dir
mryzhov Dec 4, 2024
8d67465
disable C++ Tests Prerequisites
mryzhov Dec 5, 2024
fd44fb5
use shared test data
mryzhov Dec 5, 2024
15a73b3
test debug
mryzhov Dec 5, 2024
298b01d
sample_multinomial_causal_lm
mryzhov Dec 6, 2024
689a3fa
extend test params
mryzhov Dec 6, 2024
9df7f8b
chaneged runners
mryzhov Dec 6, 2024
feef729
set test ids
mryzhov Dec 6, 2024
e318928
adress security issue
mryzhov Dec 6, 2024
f948da5
rename test samples
mryzhov Dec 6, 2024
e117758
extend test_cpp_sample_greedy_causal_lm
mryzhov Dec 6, 2024
cab49e8
Merge branch 'gha/samples_tests' into gha/samples_tests_rebased
mryzhov Dec 12, 2024
71e7cb6
minimal test deps
mryzhov Dec 12, 2024
e6cfa35
use shared temp dir
mryzhov Dec 12, 2024
361bdb6
text_generation sample
mryzhov Dec 12, 2024
9a49e2f
fixed model names
mryzhov Dec 12, 2024
d521014
text_generation sample fix
mryzhov Dec 13, 2024
d3b782c
Merge branch 'master' into gha/samples_tests_rebased
mryzhov Dec 13, 2024
204aea2
lock ov commit
mryzhov Dec 13, 2024
38fe84a
change ov commit
mryzhov Dec 13, 2024
62f0808
Merge branch 'master' into gha/samples_tests_rebased
mryzhov Dec 16, 2024
a004858
try another runner
mryzhov Dec 16, 2024
53445c8
fix bandit issues
mryzhov Dec 16, 2024
3668070
aks-linux-8-cores-32gb
mryzhov Dec 16, 2024
6b9de57
fixed subprocess import
mryzhov Dec 16, 2024
5587bfd
nosec B404
mryzhov Dec 16, 2024
a39aec2
revert subprocess import
mryzhov Dec 16, 2024
c1e9a5f
automatic cleanup
mryzhov Dec 16, 2024
0ad4a86
shared tests
mryzhov Dec 16, 2024
862c2d7
Cleanup the test content
mryzhov Dec 16, 2024
dfad339
compare py and cpp results
mryzhov Dec 17, 2024
f7a2f64
test the both cpp and py samples
mryzhov Dec 17, 2024
6dddad3
set samples dirs
mryzhov Dec 17, 2024
313f5b5
split samples args
mryzhov Dec 17, 2024
a0af132
split tests
mryzhov Dec 17, 2024
873db4b
fixed test path
mryzhov Dec 17, 2024
5649ff7
fixed test
mryzhov Dec 17, 2024
f546e0e
test fix
mryzhov Dec 17, 2024
1688e47
merge py and cpp tests
mryzhov Dec 17, 2024
286172b
Merge branch 'master' into gha/samples_tests_rebased
mryzhov Dec 18, 2024
5b54681
add tbb path
mryzhov Dec 18, 2024
f89fd34
Merge branch 'gha/samples_tests_rebased' of https://github.com/mryzho…
mryzhov Dec 18, 2024
60 changes: 22 additions & 38 deletions .github/workflows/linux.yml
@@ -83,9 +83,10 @@ jobs:
runs-on: aks-linux-4-cores-16gb
container:
image: openvinogithubactions.azurecr.io/ov_build/ubuntu_22_04_x64:${{ needs.openvino_download.outputs.docker_tag }}
volumes:
volumes:
- /mount:/mount
options: -e SCCACHE_AZURE_BLOB_CONTAINER -e SCCACHE_AZURE_CONNECTION_STRING -v ${{ github.workspace }}:${{ github.workspace }}
- ${{ github.workspace }}:${{ github.workspace }}
options: -e SCCACHE_AZURE_BLOB_CONTAINER -e SCCACHE_AZURE_CONNECTION_STRING
env:
CMAKE_GENERATOR: Unix Makefiles
OV_INSTALL_DIR: ${{ github.workspace }}/ov
@@ -314,17 +315,25 @@ jobs:
working-directory: ${{ env.SRC_DIR }}

genai_samples_tests:
name: Samples Tests - ${{ matrix.build-type }}
name: Sample ${{ matrix.test.name }} (${{ matrix.build-type }})
strategy:
fail-fast: false
matrix:
build-type: [Release]
test:
- name: 'LLM'
marker: 'llm'
cmd: 'tests/python_tests/samples'
- name: 'Whisper'
marker: 'whisper'
cmd: 'tests/python_tests/samples'

needs: [ openvino_download, genai_build_cmake, genai_build_wheel, genai_build_samples ]
timeout-minutes: 45
defaults:
run:
shell: bash
runs-on: aks-linux-2-cores-8gb
runs-on: aks-linux-8-cores-32gb
container:
image: openvinogithubactions.azurecr.io/ov_test/ubuntu_22_04_x64:${{ needs.openvino_download.outputs.docker_tag }}
volumes:
@@ -336,6 +345,7 @@ jobs:
SRC_DIR: ${{ github.workspace }}/src
BUILD_DIR: ${{ github.workspace }}/build
MODELS_DIR: ${{ github.workspace }}/models
TEMP_DIR: ${{ github.workspace }}/temp

steps:
- name: Clone openvino.genai
@@ -360,43 +370,17 @@ jobs:
- name: Install Wheels
uses: ./src/.github/actions/install_wheel
with:
packages: "openvino;openvino_tokenizers[transformers];openvino_genai"
packages: "openvino;openvino_tokenizers[transformers];openvino_genai[testing]"
requirements_files: "${{ env.SRC_DIR }}/samples/requirements.txt"
local_wheel_dir: ${{ env.INSTALL_DIR }}/wheels

- name: Download & convert Models and data
run: |
mkdir -p ${{ env.MODELS_DIR }}
optimum-cli export openvino --trust-remote-code --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 ${{ env.MODELS_DIR }}/TinyLlama-1.1B-Chat-v1.0
optimum-cli export openvino --trust-remote-code --model openai/whisper-tiny ${{ env.MODELS_DIR }}/whisper-tiny
wget https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/librispeech_s5/how_are_you_doing_today.wav -O ${{ env.MODELS_DIR }}/how_are_you_doing_today.wav

- name: Test multinomial_causal_lm.py
if: ${{ 'Release' == matrix.build-type }} # Python bindings can be built in Release only
timeout-minutes: 1
run: ${{ env.INSTALL_DIR }}/samples/python/multinomial_causal_lm/multinomial_causal_lm.py ./TinyLlama-1.1B-Chat-v1.0/ 0
working-directory: ${{ env.MODELS_DIR }}

- name: Test whisper_speech_recognition.py
if: ${{ 'Release' == matrix.build-type }} # Python bindings can be built in Release only
timeout-minutes: 1
run: ${{ env.INSTALL_DIR }}/samples/python/whisper_speech_recognition/whisper_speech_recognition.py ./whisper-tiny/ how_are_you_doing_today.wav
working-directory: ${{ env.MODELS_DIR }}

- name: C++ Tests Prerequisites
run: python -m pip uninstall openvino openvino-tokenizers openvino-genai -y

- name: Test greedy_causal_lm
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_DIR }}/samples_bin/greedy_causal_lm ./TinyLlama-1.1B-Chat-v1.0/ ""
working-directory: ${{ env.MODELS_DIR }}

- name: Test whisper_speech_recognition
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_DIR }}/samples_bin/whisper_speech_recognition ./whisper-tiny/ how_are_you_doing_today.wav
working-directory: ${{ env.MODELS_DIR }}
- name: Test Samples (Python and C++)
run: python -m pytest -vv -s ${{ env.SRC_DIR }}/${{ matrix.test.cmd }} -m "${{ env.TEST_MARKERS }}"
env:
LD_LIBRARY_PATH: "${{ env.INSTALL_DIR }}/runtime/lib/intel64:${{ env.INSTALL_DIR }}/runtime/3rdparty/tbb/lib:$LD_LIBRARY_PATH" # Required for C++ samples
SAMPLES_PY_DIR: "${{ env.INSTALL_DIR }}/samples/python"
SAMPLES_CPP_DIR: "${{ env.INSTALL_DIR }}/samples_bin"
TEST_MARKERS: ${{ (matrix.build-type == 'Release') && matrix.test.marker || format('{0} and cpp', matrix.test.marker) }}
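The `TEST_MARKERS` expression relies on the GitHub Actions `cond && a || b` idiom as a substitute for a ternary operator: Release builds run the full marker set, while any other build type appends `and cpp` so only C++ samples run (Python bindings are built in Release only). A minimal Python sketch of that selection logic (function name is illustrative):

```python
def select_markers(build_type: str, marker: str) -> str:
    # Mirrors the workflow expression:
    #   (matrix.build-type == 'Release') && matrix.test.marker
    #   || format('{0} and cpp', matrix.test.marker)
    # Release builds exercise both Python and C++ samples; any other
    # build type restricts the pytest run to C++ samples only.
    if build_type == "Release":
        return marker
    return f"{marker} and cpp"

print(select_markers("Release", "llm"))    # llm
print(select_markers("Debug", "whisper"))  # whisper and cpp
```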

Overall_Status:
name: ci/gha_overall_status_linux
4 changes: 3 additions & 1 deletion pyproject.toml
@@ -32,6 +32,8 @@ classifiers = [
dependencies = [
    "openvino_tokenizers~=2025.0.0.0.dev"
]

[project.optional-dependencies]
testing = ["pytest>=6.0"]

[tool.py-build-cmake.module]
directory = "src/python"
@@ -62,4 +64,4 @@ build-backend = "py_build_cmake.build"
markers = [
"nightly",
"precommit: (deselect with '-m \"precommit\"')",
]
]
12 changes: 12 additions & 0 deletions tests/python_tests/pytest.ini
@@ -1,8 +1,20 @@
[pytest]

markers =
    ; The following markers are defined for categorizing tests:
    ; precommit - Tests that should be run before committing code.
    ; nightly - Tests that are run as part of the nightly build process.
    ; real_models - Tests that involve real model execution.
    ; llm - Tests related to large language models.
    ; whisper - Tests related to the Whisper model.
    ; cpp - Tests that involve C++ code.
    ; py - Tests that involve Python code.
    precommit
    nightly
    real_models
    llm
    whisper
    cpp
    py

addopts = -m precommit
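Because pytest inserts `addopts` in front of the real command-line arguments, an explicit `-m` given at invocation time (as the workflow does via `TEST_MARKERS`) is parsed later and wins over the `-m precommit` default. A small sketch of that last-wins behavior using `argparse` (illustrative only; pytest's own parser behaves the same way for a repeated option):

```python
import argparse

# pytest prepends ini-file addopts to the command line, so a -m passed
# explicitly is seen last and overrides the default marker selection.
parser = argparse.ArgumentParser()
parser.add_argument("-m", dest="markexpr", default=None)

addopts = ["-m", "precommit"]   # injected from pytest.ini
cli = ["-m", "llm and cpp"]     # what the workflow actually passes
args, _ = parser.parse_known_args(addopts + cli)
print(args.markexpr)  # llm and cpp
```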
94 changes: 94 additions & 0 deletions tests/python_tests/samples/conftest.py
@@ -0,0 +1,94 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import subprocess
import os
import tempfile
import pytest
import shutil

# Define model names and directories
MODELS = {
    "TinyLlama-1.1B-Chat-v1.0": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "TinyLlama-1.1B-intermediate-step-1431k-3T": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
    "WhisperTiny": "openai/whisper-tiny",
    "open_llama_3b_v2": "openlm-research/open_llama_3b_v2"
}

TEST_FILES = {
    "how_are_you_doing_today.wav": "https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/librispeech_s5/how_are_you_doing_today.wav",
    "adapter_model.safetensors": "https://huggingface.co/smangrul/tinyllama_lora_sql/resolve/main/adapter_model.safetensors"
}

TEMP_DIR = os.environ.get("TEMP_DIR", tempfile.mkdtemp())
MODELS_DIR = os.path.join(TEMP_DIR, "test_models")
TEST_DATA = os.path.join(TEMP_DIR, "test_data")

SAMPLES_PY_DIR = os.environ.get("SAMPLES_PY_DIR", os.getcwd())
SAMPLES_CPP_DIR = os.environ.get("SAMPLES_CPP_DIR", os.getcwd())

# A shared fixture to hold data
@pytest.fixture(scope="session")
def shared_data():
    return {}

@pytest.fixture(scope="session", autouse=True)
def setup_and_teardown(request):
    """Fixture to set up and tear down the temporary directories."""
    print(f"Creating directories: {MODELS_DIR} and {TEST_DATA}")
    os.makedirs(MODELS_DIR, exist_ok=True)
    os.makedirs(TEST_DATA, exist_ok=True)
    yield
    if not os.environ.get("TEMP_DIR"):
        print(f"Removing temporary directory: {TEMP_DIR}")
        shutil.rmtree(TEMP_DIR)
    else:
        print(f"Skipping cleanup of temporary directory: {TEMP_DIR}")

@pytest.fixture(scope="session")
def convert_model(request):
    """Fixture to convert the model once for the session."""
    params = request.param
    model_id = params.get("model_id")
    extra_args = params.get("extra_args", [])
    model_name = MODELS[model_id]
    model_path = os.path.join(MODELS_DIR, model_name)
    print(f"Preparing model: {model_name}")
    # Convert the model if not already converted
    if not os.path.exists(model_path):
        print(f"Converting model: {model_name}")
        command = [
            "optimum-cli", "export", "openvino",
            "--model", model_name, model_path
        ]
        if extra_args:
            command.extend(extra_args)
        result = subprocess.run(command, check=True)
        assert result.returncode == 0, f"Model {model_name} conversion failed"
    yield model_path
    # Cleanup the model after tests
    if os.path.exists(model_path):
        print(f"Removing converted model: {model_path}")
        shutil.rmtree(model_path)

@pytest.fixture(scope="session")
def download_test_content(request):
    """Download the test content from the given URL and return the file path."""
    file_url = request.param
    file_name = os.path.basename(file_url)
    file_path = os.path.join(TEST_DATA, file_name)
    if not os.path.exists(file_path):
        print(f"Downloading test content from {file_url}...")
        result = subprocess.run(
            ["wget", file_url, "-O", file_path],
            check=True
        )
        assert result.returncode == 0, "Failed to download test content"
        print(f"Downloaded test content to {file_path}")
    else:
        print(f"Test content already exists at {file_path}")
    yield file_path
    # Cleanup the test content after tests
    if os.path.exists(file_path):
        print(f"Removing test content: {file_path}")
        os.remove(file_path)
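The download fixture above shells out to `wget`, which ties the tests to containers that have it installed. A portable stdlib alternative would look like this (a sketch, not part of the PR; the `download` helper name is illustrative):

```python
import os
import urllib.request

def download(file_url: str, dest_dir: str) -> str:
    """Fetch file_url into dest_dir with the stdlib, skipping if cached."""
    os.makedirs(dest_dir, exist_ok=True)
    file_path = os.path.join(dest_dir, os.path.basename(file_url))
    if not os.path.exists(file_path):
        # urlretrieve writes the response body straight to file_path
        urllib.request.urlretrieve(file_url, file_path)
    return file_path
```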
44 changes: 44 additions & 0 deletions tests/python_tests/samples/test_greedy_causal_lm.py
@@ -0,0 +1,44 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os
import subprocess
import pytest
from conftest import TEST_FILES, SAMPLES_PY_DIR, SAMPLES_CPP_DIR

# Greedy causal LM samples

@pytest.mark.llm
@pytest.mark.cpp
@pytest.mark.parametrize("convert_model", [
    {"model_id": "TinyLlama-1.1B-Chat-v1.0"}
], indirect=["convert_model"])
@pytest.mark.parametrize("sample_args", [""])
def test_cpp_sample_greedy_causal_lm_tiny_llama(convert_model, sample_args):
    cpp_sample = os.path.join(SAMPLES_CPP_DIR, 'greedy_causal_lm')
    exit_code = subprocess.run([cpp_sample, convert_model, sample_args], check=True).returncode
    assert exit_code == 0, "C++ sample execution failed"

@pytest.mark.llm
@pytest.mark.cpp
@pytest.mark.parametrize("convert_model", [
    {"model_id": "open_llama_3b_v2", "extra_args": ["--trust-remote-code", "--weight-format", "fp16"]}
], indirect=["convert_model"])
@pytest.mark.parametrize("sample_args", ["return 0"])
def test_cpp_sample_greedy_causal_lm_open_llama(convert_model, sample_args):
    cpp_sample = os.path.join(SAMPLES_CPP_DIR, 'greedy_causal_lm')
    exit_code = subprocess.run([cpp_sample, convert_model, sample_args], check=True).returncode
    assert exit_code == 0, "C++ sample execution failed"

# text_generation sample
@pytest.mark.llm
@pytest.mark.py
@pytest.mark.parametrize("convert_model", [
    {"model_id": "TinyLlama-1.1B-intermediate-step-1431k-3T", "extra_args": ["--trust-remote-code"]}
], indirect=["convert_model"])
@pytest.mark.parametrize("sample_args", ["How to create a table with two columns, one of them has type float, another one has type int?"])
@pytest.mark.parametrize("download_test_content", [TEST_FILES["adapter_model.safetensors"]], indirect=True)
def test_python_sample_text_generation(convert_model, download_test_content, sample_args):
    script = os.path.join(SAMPLES_PY_DIR, "text_generation/lora.py")
    result = subprocess.run(["python", script, convert_model, download_test_content, sample_args], check=True)
    assert result.returncode == 0, f"Script execution failed for model {convert_model}"
54 changes: 54 additions & 0 deletions tests/python_tests/samples/test_multinomial_causal_lm.py
@@ -0,0 +1,54 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os
import subprocess
import pytest
from conftest import SAMPLES_PY_DIR, SAMPLES_CPP_DIR

# multinomial_causal_lm sample

@pytest.mark.llm
@pytest.mark.py
@pytest.mark.parametrize("convert_model", [
    {"model_id": "TinyLlama-1.1B-Chat-v1.0", "extra_args": ["--trust-remote-code"]},
], indirect=["convert_model"])
@pytest.mark.parametrize("sample_args", ["0"])
def test_python_sample_multinomial_causal_lm_tiny_llama(convert_model, sample_args, shared_data):
    script = os.path.join(SAMPLES_PY_DIR, "multinomial_causal_lm/multinomial_causal_lm.py")
    result = subprocess.run(["python", script, convert_model, sample_args], check=True)
    assert result.returncode == 0, f"Script execution failed for model {convert_model} with argument {sample_args}"

@pytest.mark.llm
@pytest.mark.py
@pytest.mark.parametrize("convert_model", [
    {"model_id": "open_llama_3b_v2", "extra_args": ["--trust-remote-code", "--weight-format", "fp16"]},
], indirect=["convert_model"])
@pytest.mark.parametrize("sample_args", ["a", "return 0"])
def test_python_sample_multinomial_causal_lm_open_llama(convert_model, sample_args, shared_data):
    script = os.path.join(SAMPLES_PY_DIR, "multinomial_causal_lm/multinomial_causal_lm.py")
    # Capture stdout so the comparison test below sees real text, not None
    result = subprocess.run(["python", script, convert_model, sample_args],
                            capture_output=True, text=True, check=True)
    assert result.returncode == 0, f"Script execution failed for model {convert_model} with argument {sample_args}"
    shared_data.setdefault("multinomial_causal_lm", {}).setdefault("py", {}).setdefault("open_llama_3b_v2", {})[sample_args] = result.stdout

@pytest.mark.llm
@pytest.mark.cpp
@pytest.mark.parametrize("convert_model", [
    {"model_id": "open_llama_3b_v2", "extra_args": ["--trust-remote-code", "--weight-format", "fp16"]}
], indirect=["convert_model"])
@pytest.mark.parametrize("sample_args", ["b", "return 0"])
def test_cpp_sample_multinomial_causal_lm(convert_model, sample_args, shared_data):
    cpp_sample = os.path.join(SAMPLES_CPP_DIR, 'multinomial_causal_lm')
    # Capture stdout here as well so the two backends can be diffed
    result = subprocess.run([cpp_sample, convert_model, sample_args],
                            capture_output=True, text=True, check=True)
    assert result.returncode == 0, "C++ sample execution failed"
    shared_data.setdefault("multinomial_causal_lm", {}).setdefault("cpp", {}).setdefault("open_llama_3b_v2", {})[sample_args] = result.stdout

@pytest.mark.llm
@pytest.mark.cpp
@pytest.mark.py
def test_sample_multinomial_causal_lm_diff(shared_data):
    py_result = shared_data.get("multinomial_causal_lm", {}).get("py", {}).get("open_llama_3b_v2", {}).get("return 0")
    cpp_result = shared_data.get("multinomial_causal_lm", {}).get("cpp", {}).get("open_llama_3b_v2", {}).get("return 0")
    if not py_result or not cpp_result:
        pytest.skip("Skipping because one of the prior tests was skipped or failed.")
    assert py_result == cpp_result, "Results should match"
21 changes: 21 additions & 0 deletions tests/python_tests/samples/test_text_generation.py
@@ -0,0 +1,21 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os
import subprocess
import pytest
from conftest import TEST_FILES, SAMPLES_PY_DIR, SAMPLES_CPP_DIR

# text_generation sample

@pytest.mark.llm
@pytest.mark.py
@pytest.mark.parametrize("convert_model", [
    {"model_id": "TinyLlama-1.1B-intermediate-step-1431k-3T", "extra_args": ["--trust-remote-code"]}
], indirect=["convert_model"])
@pytest.mark.parametrize("sample_args", ["How to create a table with two columns, one of them has type float, another one has type int?"])
@pytest.mark.parametrize("download_test_content", [TEST_FILES["adapter_model.safetensors"]], indirect=True)
def test_python_sample_text_generation(convert_model, download_test_content, sample_args):
    script = os.path.join(SAMPLES_PY_DIR, "text_generation/lora.py")
    result = subprocess.run(["python", script, convert_model, download_test_content, sample_args], check=True)
    assert result.returncode == 0, f"Script execution failed for model {convert_model}"
28 changes: 28 additions & 0 deletions tests/python_tests/samples/test_whisper_speech_recognition.py
@@ -0,0 +1,28 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os
import subprocess
import pytest
from conftest import TEST_FILES, SAMPLES_PY_DIR, SAMPLES_CPP_DIR

# whisper_speech_recognition sample

@pytest.mark.whisper
@pytest.mark.py
@pytest.mark.parametrize("convert_model", [{"model_id": "WhisperTiny", "extra_args": ["--trust-remote-code"]}],
                         indirect=True, ids=lambda p: f"model={p['model_id']}")
@pytest.mark.parametrize("download_test_content", [TEST_FILES["how_are_you_doing_today.wav"]], indirect=True)
def test_python_sample_whisper_speech_recognition(convert_model, download_test_content):
    script = os.path.join(SAMPLES_PY_DIR, "whisper_speech_recognition/whisper_speech_recognition.py")
    result = subprocess.run(["python", script, convert_model, download_test_content], check=True)
    assert result.returncode == 0, f"Script execution failed for model {convert_model}"

@pytest.mark.whisper
@pytest.mark.cpp
@pytest.mark.parametrize("convert_model", [{"model_id": "WhisperTiny"}],
                         indirect=True, ids=lambda p: f"model={p['model_id']}")
@pytest.mark.parametrize("download_test_content", [TEST_FILES["how_are_you_doing_today.wav"]], indirect=True)
def test_cpp_sample_whisper_speech_recognition(convert_model, download_test_content):
    cpp_sample = os.path.join(SAMPLES_CPP_DIR, 'whisper_speech_recognition')
    exit_code = subprocess.run([cpp_sample, convert_model, download_test_content], check=True).returncode
    assert exit_code == 0, "C++ sample execution failed"