This repository has been archived by the owner on Aug 30, 2024. It is now read-only.

[Neural Speed] Support continuous batching + beam search inference in LLAMA #44

Triggered via pull request on February 29, 2024 at 09:06
Status: Cancelled
Total duration: 23s
Artifacts: none

Workflow: windows-test.yml
on: pull_request
Job: Windows-Binary-Test (13s)

Annotations

2 errors:
Windows-Binary-Test: Canceling since a higher priority waiting request for 'Windows Binary Test-145' exists
Windows-Binary-Test: The operation was canceled.
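
The first annotation is the standard GitHub Actions message emitted when a run is superseded inside a concurrency group with cancel-in-progress enabled: a newer run queued for the same group (here 'Windows Binary Test-145') takes priority, and this in-flight run is canceled. The sketch below is illustrative only and is not the actual contents of windows-test.yml; the group expression, runner label, and steps are assumptions.

```yaml
# Illustrative sketch only -- not the real windows-test.yml.
# With cancel-in-progress enabled, pushing a new commit to the same pull
# request queues a new run in the same concurrency group and cancels the
# one currently running, producing the "Canceling since a higher priority
# waiting request ... exists" annotation seen above.
name: Windows Binary Test

on: pull_request

concurrency:
  # Group name assumed for illustration; the log suggests a group of the
  # form "Windows Binary Test-<PR number>" (e.g. "Windows Binary Test-145").
  group: Windows Binary Test-${{ github.event.pull_request.number }}
  cancel-in-progress: true

jobs:
  Windows-Binary-Test:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and test (placeholder step, actual commands unknown)
        run: echo "build and test commands go here"
```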