Training ops kernels: Speeding up the Llama-based MoE architectures #15319
Re-run triggered: November 11, 2024, 13:22
Status: Failure
Total duration: 1m 30s
formatting.yml

on: pull_request

Annotations

1 error
unit-tests: Process completed with exit code 1.