
[Neural Speed] Support continuous batching + beam search inference in LLAMA #68
Triggered via pull request on March 1, 2024 at 07:48
Status: Cancelled
Total duration: 47s
Artifacts: none

windows-test.yml
on: pull_request

Job: Windows-Binary-Test (36s)

Annotations (2 errors)

Windows-Binary-Test: Canceling since a higher priority waiting request for 'Windows Binary Test-145' exists
Windows-Binary-Test: The operation was canceled.
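
These annotations indicate the run was superseded rather than failed: GitHub Actions cancels a queued or in-progress run when a newer run arrives in the same concurrency group and cancel-in-progress is enabled. The sketch below is only an illustration of that mechanism, not the actual contents of windows-test.yml; the group expression, runner, and steps are assumptions, and the "-145" suffix in the annotation is taken as whatever identifier the real group expression appends.

```yaml
# Hypothetical sketch only; the real windows-test.yml may differ.
# A concurrency group scoped to the workflow name plus a per-PR identifier
# means each new push to the same PR cancels the previously queued or
# running build, producing the "higher priority waiting request ... exists"
# annotation seen above.
name: Windows Binary Test

on: pull_request

concurrency:
  group: ${{ github.workflow }}-${{ github.event.number }}
  cancel-in-progress: true

jobs:
  Windows-Binary-Test:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      # Build and test steps omitted; placeholder only.
      - name: Build
        run: echo "build placeholder"
```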