Actions: l3utterfly/llama.cpp

Server

181 workflow runs

| Run | Trigger | Commit / PR title | Date | Duration | Branch |
|------|---------|-------------------|------|----------|--------|
| Server #183 | Pull request #49 synchronize by l3utterfly | merge from upstream | December 28, 2024 07:23 | 16m 56s | merge |
| Server #182 | Commit d79d8f3 pushed by l3utterfly | vulkan: multi-row k quants (#10846) | December 28, 2024 07:21 | 8m 44s | master |
| Server #181 | Commit 43041d2 pushed by l3utterfly | ggml: load all backends from a user-provided search path (#10699) | December 11, 2024 06:59 | 5m 35s | master |
| Server #180 | Pull request #47 opened by l3utterfly | merge upstream | November 29, 2024 10:09 | 20m 18s | master |
| Server #179 | Commit 266b851 pushed by l3utterfly | sycl : Reroute permuted mul_mats through oneMKL (#10408) | November 29, 2024 10:09 | 4m 29s | master |
| Server #178 | Pull request #46 synchronize by l3utterfly | merge upstream | November 27, 2024 12:02 | 21m 28s | merge |
| Server #177 | Commit 46c69e0 pushed by l3utterfly | ci : faster CUDA toolkit installation method and use ccache (#10537) | November 27, 2024 12:01 | 4m 8s | master |
| Server #176 | Pull request #45 synchronize by l3utterfly | merge from upstream | November 16, 2024 06:50 | 30m 0s | merge |
| Server #175 | Commit 772703c pushed by l3utterfly | vulkan: Optimize some mat-vec mul quant shaders (#10296) | November 16, 2024 06:42 | 11m 16s | master |
| Server #174 | Commit a9e8a9a pushed by l3utterfly | ggml : fix arch check in bf16_to_fp32 (#10164) | November 5, 2024 07:31 | 8m 13s | master |
| Server #173 | Commit cc2983d pushed by l3utterfly | sync : ggml | October 27, 2024 07:53 | 8m 50s | master |
| Server #172 | Pull request #42 opened by l3utterfly | merge upstream | October 10, 2024 02:44 | 29m 21s | master |
| — | — | — | October 10, 2024 02:39 | 6m 48s | — |
| Server #170 | Pull request #41 opened by l3utterfly | merge upstream | October 3, 2024 04:10 | 26m 29s | master |
| Server #169 | Commit c83ad6d pushed by l3utterfly | ggml-backend : add device and backend reg interfaces (#9707) | October 3, 2024 04:10 | 7m 21s | master |
| Server #168 | Commit a90484c pushed by l3utterfly | llama : print correct model type for Llama 3.2 1B and 3B | October 1, 2024 09:28 | 12m 33s | master |
| Server #167 | Commit 7691654 pushed by l3utterfly | mtgpu: enable VMM (#9597) | September 26, 2024 09:08 | 10m 19s | master |
| Server #166 | Pull request #38 synchronize by l3utterfly | merge upstream | September 23, 2024 06:10 | 48m 3s | merge |
| Server #165 | Commit e62e978 pushed by l3utterfly | Revert "[SYCL] fallback mmvq (#9088)" (#9579) | September 23, 2024 06:10 | 10m 55s | master |
| Server #164 | Commit 3c7989f pushed by l3utterfly | py : add "LLaMAForCausalLM" conversion support (#9485) | September 15, 2024 12:48 | 9m 56s | master |
| Server #163 | Commit 2a358fb pushed by l3utterfly | [SYCL] add check malloc result on device (#9346) | September 8, 2024 11:39 | 17m 48s | master |
| Server #162 | Commit a876861 pushed by l3utterfly | metal : update support condition for im2col + fix warning (#0) | September 8, 2024 09:49 | 10m 33s | master |
| Server #161 | Pull request #35 opened by l3utterfly | merge from upstream | September 2, 2024 07:18 | 29m 25s | master |
| Server #160 | Commit 8f1d81a pushed by l3utterfly | llama : support RWKV v6 models (#8980) | September 2, 2024 07:16 | 24m 58s | master |
| Server #159 | Pull request #34 opened by l3utterfly | merge upstream | August 28, 2024 01:46 | 35m 58s | master |