StaticLLMPipeline: Enable chat test #1117
base: master
Changes from 16 commits
f7a63e6
f87b049
d584e5d
66e384c
614da55
3acec5b
2470613
e640af3
13ce329
3f318be
fbd14c3
d4fd072
b3e737c
81adab0
fb21060
2a8a541
1eccfae
e22945c
f11c96f
cc68e28
5ed704a
```diff
@@ -132,20 +132,18 @@ def test_max_number_of_tokens():
     assert len(encoded_results.tokens[0]) == num_tokens


-# FIXME: Known problem, output differs from stateful pipeline starting from 3rd prompt!
-@pytest.mark.skipif(sys.platform in ["darwin", "linux"], reason="Not supposed to work on mac. Segfault on linux CI")
 @pytest.mark.skip(reason="JIRA-144780: Output differs from stateful pipeline")
 @pytest.mark.precommit
 @pytest.mark.nightly
-def test_chat_generation(model_descr):
+def test_chat_generation():
     questions = [
         '1+1=',
         'What is the previous answer?',
         'Why is the Sun yellow?',
         'What was my first question?'
     ]

-    model_path = get_chat_models_lists()[0][1]
+    model_path = get_chat_models_list()[0][1]
```
Review comment on `model_path = get_chat_models_list()[0][1]`:

it's not a model path, it's a …, which means the model is not even converted by Optimum. In other places it's used like `pipe = read_model(get_models_list()[0])[4]`, where `read_model` converts the model and creates a pipe on top of it. So the question is: have you run the tests locally? Do they even magically pass?

Reply:

I'd assume … Perhaps the problem is that I need to call …

I've used my own list of models available on my local setup, so yes, I didn't check whether this machinery works with the default models.
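For illustration, here is a minimal sketch of the pattern the reviewer describes, assuming the test-suite helpers `read_model` and `get_chat_models_list` behave as in the comment above (i.e. `read_model(...)` converts the model via Optimum and returns a tuple whose element 4 is a ready pipeline). The import path and the wrapper function are assumptions, not code from this PR:

```python
# Hypothetical sketch, not code from this PR: follow the pattern the reviewer
# quotes (pipe = read_model(get_models_list()[0])[4]) for the chat models list.
from ov_genai_test_utils import get_chat_models_list, read_model  # import path assumed

def build_chat_pipe():
    # read_model() is assumed to download/convert the model with Optimum and
    # return a tuple; per the review comment, element [4] is the built pipeline.
    model_descr = get_chat_models_list()[0]
    pipe = read_model(model_descr)[4]
    return pipe
```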
```diff

     chat_history_stateful = generate_chat_history(model_path, "CPU", { }, questions)
     chat_history_static = generate_chat_history(model_path, "NPU", common_config, questions)
```
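To make the comparison above concrete, here is a hedged sketch of what a `generate_chat_history(model_path, device, config, questions)` helper could look like. The helper name and its argument order come from the diff, the `start_chat`/`generate`/`finish_chat` calls are the standard `openvino_genai` chat API, and everything else (the token limit, the final assertion) is illustrative:

```python
import openvino_genai as ov_genai

def generate_chat_history(model_path, device, pipeline_config, questions):
    # Build a pipeline on the requested device ("CPU" -> stateful reference,
    # "NPU" -> static pipeline) and run all prompts through one chat session.
    pipe = ov_genai.LLMPipeline(model_path, device, **pipeline_config)
    pipe.start_chat()
    chat_history = [pipe.generate(question, max_new_tokens=75) for question in questions]
    pipe.finish_chat()
    return chat_history

# The test would then expect the static pipeline to reproduce the stateful answers:
# assert chat_history_stateful == chat_history_static
```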
Another review comment:

Thanks!
Another review comment, linking to a CI run for this PR:

https://github.com/openvinotoolkit/openvino.genai/actions/runs/11627421408/job/32383657944?pr=1117#step:9:870