
StaticLLMPipeline: Enable chat test #1117

Open

wants to merge 21 commits into base: master

Conversation

TolyaTalamanov (Collaborator)

No description provided.

@github-actions github-actions bot added the category: sampling Sampling / Decoding algorithms label Oct 31, 2024
@ilya-lavrenov ilya-lavrenov added this to the 2024.5 milestone Nov 1, 2024
@ilya-lavrenov ilya-lavrenov added category: LLM LLM pipeline (stateful, static) and removed category: sampling Sampling / Decoding algorithms labels Nov 1, 2024
@pytest.mark.precommit
@pytest.mark.nightly
def test_chat_generation(model_descr):
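For orientation, a multi-turn chat check with the openvino_genai Python API typically looks like the sketch below. This is illustrative only, not the PR's actual test body; the helper name run_chat_sketch and the generate arguments are assumptions.

    import openvino_genai as ov_genai

    # Hypothetical sketch of a multi-turn chat check (not the PR's diff).
    def run_chat_sketch(model_path, questions):
        pipe = ov_genai.LLMPipeline(str(model_path), 'CPU')
        pipe.start_chat()
        # Each turn sees the accumulated history, so follow-ups like
        # 'What was my first question?' exercise the chat state handling.
        answers = [pipe.generate(q, max_new_tokens=64) for q in questions]
        pipe.finish_chat()
        return answers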
TolyaTalamanov (Collaborator, Author):
Thanks!


@Wovchena Wovchena enabled auto-merge November 1, 2024 10:38
@github-actions github-actions bot added category: sampling Sampling / Decoding algorithms and removed category: LLM LLM pipeline (stateful, static) labels Nov 1, 2024
TolyaTalamanov (Collaborator, Author) commented Nov 1, 2024:

Any idea why this happened?

RuntimeError: Exception from src\inference\src\cpp\core.cpp:85:
Check 'util::directory_exists(path) || util::file_exists(path)' failed at src\frontends\common\src\frontend.cpp:117:
FrontEnd API failed with GeneralFailure:
ir: Could not open the file: "Qwen2-0.5B-Instruct\openvino_tokenizer.xml"

Wovchena (Collaborator):
I don't have a solution to ir: Could not open the file: "Qwen2-0.5B-Instruct\openvino_tokenizer.xml"
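As ilya-lavrenov points out below, this error means the model directory was never populated by an Optimum export, so openvino_tokenizer.xml does not exist yet. A minimal sketch of producing it, assuming optimum-intel and openvino-tokenizers are installed; the model id Qwen/Qwen2-0.5B-Instruct is inferred from the directory name and is an assumption:

    from pathlib import Path

    import openvino as ov
    from openvino_tokenizers import convert_tokenizer
    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    out_dir = Path('Qwen2-0.5B-Instruct')

    # Export the LLM itself to OpenVINO IR.
    model = OVModelForCausalLM.from_pretrained('Qwen/Qwen2-0.5B-Instruct', export=True)
    model.save_pretrained(out_dir)

    # Produce openvino_tokenizer.xml / openvino_detokenizer.xml.
    hf_tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen2-0.5B-Instruct')
    ov_tokenizer, ov_detokenizer = convert_tokenizer(hf_tokenizer, with_detokenizer=True)
    ov.save_model(ov_tokenizer, out_dir / 'openvino_tokenizer.xml')
    ov.save_model(ov_detokenizer, out_dir / 'openvino_detokenizer.xml')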

questions = [
    '1+1=',
    'What is the previous answer?',
    'Why is the Sun yellow?',
    'What was my first question?'
]

- model_path = get_chat_models_lists()[0][1]
+ model_path = get_chat_models_list()[0][1]
ilya-lavrenov (Contributor) commented Nov 19, 2024:
It's not a model path, it's a model_id. E.g., from the CI error we can see:

model_path = WindowsPath('Qwen2-0.5B-Instruct'), device = 'CPU'

which means the model has not even been converted by Optimum.

In other places it's used like:

    pipe = read_model(get_models_list()[0])[4]

where read_model converts the model and creates the pipeline on top of it.

So the question is: have you run the tests locally? Do they even magically pass?

TolyaTalamanov (Collaborator, Author):
get_chat_models_list() returns List[tuple(model_id, model_path)]
https://github.com/openvinotoolkit/openvino.genai/blob/master/tests/python_tests/ov_genai_test_utils.py#L106

I'd assume get_chat_models_list()[0][1] is the path of the first model, but perhaps I'm wrong... At least for get_models_list() it works fine: https://github.com/openvinotoolkit/openvino.genai/blob/master/tests/python_tests/test_llm_pipeline_static.py#L37

Perhaps the problem is that I need to call read_model explicitly at least once; let me try it.
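Something along these lines (a sketch; assumes read_model accepts the (model_id, path) tuple from get_chat_models_list() and, as in the snippet above, returns a tuple whose element [4] is the pipeline):

    # Hypothetical fix: let read_model run the Optimum conversion once,
    # mirroring pipe = read_model(get_models_list()[0])[4] used elsewhere.
    pipe = read_model(get_chat_models_list()[0])[4]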

> So, the question is: have you run the tests locally? Do they even magically pass?

I've used my own list of models available on my local setup, so yes, they pass locally, but I didn't check whether this machinery works with the default models.

@ilya-lavrenov ilya-lavrenov added this to the 2025.0 milestone Nov 19, 2024
@ilya-lavrenov ilya-lavrenov removed the category: sampling Sampling / Decoding algorithms label Nov 19, 2024