
Update crag eval with benchmark results #211

Triggered via pull request December 2, 2024 23:32
Status Success
Total duration 4m 51s
Artifacts 2

model_test_hpu.yml

on: pull_request
Matrix: Evaluation-Workflow
Genreate-Report 12s

Annotations

1 warning
Genreate-Report
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636

Artifacts

Produced during runtime
Name                                          Size
FinalReport                                   1.55 KB
hpu-text-generation-opt-125m-lambada_openai   6.08 KB