This repository has been archived by the owner on Aug 30, 2024. It is now read-only.

[Neural Speed] Support continuous batching + beam search inference in LLAMA #323

Triggered via pull request on February 29, 2024, 09:06
Status: Cancelled
Total duration: 12s

unit-test-llmruntime.yml

on: pull_request
unit-test

Annotations

1 error
Python Unit Test: Canceling since a higher priority waiting request for 'Python Unit Test-145' exists
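GitHub Actions emits this message when a run is superseded within a concurrency group: at most one run per group may wait in the queue, so a newer request for the same group ('Python Unit Test-145' here) cancels the pending one. A minimal sketch of the kind of concurrency block that produces this behavior is shown below; the group expression is an illustrative assumption, not the actual contents of unit-test-llmruntime.yml:

concurrency:
  # Illustrative group key only; the real key that yields
  # 'Python Unit Test-145' is defined in unit-test-llmruntime.yml.
  group: "Python Unit Test-${{ github.ref }}"
  # With cancel-in-progress, runs that are already executing are
  # cancelled as well, not just the ones still waiting in the queue.
  cancel-in-progress: true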