[Neural Speed] Support continuous batching + beam search inference in LLAMA #403
Triggered via pull request on February 29, 2024, 09:03
Status: Cancelled
Total duration: 3m 13s
Artifacts: –
Workflow: cpp-graph-test.yml (on: pull_request)
Matrix: CPP-Graph-Workflow
Genreate-Report: 0s
Annotations (3 errors)
- CPP-Graph-Workflow (llama-2-7b-chat): Canceling since a higher priority waiting request for 'CPP Graph Test-145' exists
- CPP-Graph-Workflow (gptj-6b): Canceling since a higher priority waiting request for 'CPP Graph Test-145' exists
- CPP-Graph-Workflow (gptj-6b): The operation was canceled.