diff --git a/LLama.KernelMemory/LLamaSharp.KernelMemory.csproj b/LLama.KernelMemory/LLamaSharp.KernelMemory.csproj
index 1a056682e..9b991e2e7 100644
--- a/LLama.KernelMemory/LLamaSharp.KernelMemory.csproj
+++ b/LLama.KernelMemory/LLamaSharp.KernelMemory.csproj
@@ -4,7 +4,7 @@
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
- <Version>0.18.0</Version>
+ <Version>0.19.0</Version>
<Authors>Xbotter</Authors>
<Company>SciSharp STACK</Company>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
@@ -17,7 +17,7 @@
<Description>The integration of LLamaSharp and Microsoft kernel-memory. It makes it easy to support document search for LLamaSharp model inference.</Description>
- <PackageReleaseNotes>v0.18.0 released with v0.18.0 of LLamaSharp.</PackageReleaseNotes>
+ <PackageReleaseNotes>v0.19.0 released with v0.19.0 of LLamaSharp.</PackageReleaseNotes>
<PackageLicenseExpression>MIT</PackageLicenseExpression>
<PackageOutputPath>packages</PackageOutputPath>
diff --git a/LLama.SemanticKernel/LLamaSharp.SemanticKernel.csproj b/LLama.SemanticKernel/LLamaSharp.SemanticKernel.csproj
index 95d0f3de2..2b0de1db7 100644
--- a/LLama.SemanticKernel/LLamaSharp.SemanticKernel.csproj
+++ b/LLama.SemanticKernel/LLamaSharp.SemanticKernel.csproj
@@ -10,7 +10,7 @@
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
- <Version>0.18.0</Version>
+ <Version>0.19.0</Version>
<Authors>Tim Miller, Xbotter</Authors>
<Company>SciSharp STACK</Company>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
@@ -23,7 +23,7 @@
<Description>The integration of LLamaSharp and Microsoft semantic-kernel.</Description>
- <PackageReleaseNotes>v0.18.0 released with v0.18.0 of LLamaSharp.</PackageReleaseNotes>
+ <PackageReleaseNotes>v0.19.0 released with v0.19.0 of LLamaSharp.</PackageReleaseNotes>
<PackageLicenseExpression>MIT</PackageLicenseExpression>
<PackageOutputPath>packages</PackageOutputPath>
diff --git a/LLama/LLamaSharp.csproj b/LLama/LLamaSharp.csproj
index 8c06598a8..2358cb5e1 100644
--- a/LLama/LLamaSharp.csproj
+++ b/LLama/LLamaSharp.csproj
@@ -7,7 +7,7 @@
<Platforms>AnyCPU;x64;Arm64</Platforms>
<AllowUnsafeBlocks>True</AllowUnsafeBlocks>
- <Version>0.18.0</Version>
+ <Version>0.19.0</Version>
<Authors>Rinne, Martin Evans, jlsantiago and all the other contributors in https://github.com/SciSharp/LLamaSharp/graphs/contributors.</Authors>
<Company>SciSharp STACK</Company>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
diff --git a/README.md b/README.md
index 7d6734fa7..429afdadf 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,7 @@ The following examples show how to build APPs with LLamaSharp.
- [ASP.NET Demo](./LLama.Web/)
- [LLamaWorker (ASP.NET Web API like OAI and Function Calling Support)](https://github.com/sangyuxiaowu/LLamaWorker)
-![LLamaShrp-Integrations](./Assets/LLamaSharp-Integrations.png)
+![LLamaSharp-Integrations](./Assets/LLamaSharp-Integrations.png)
## 🚀Get started
@@ -177,7 +177,7 @@ For more examples, please refer to [LLamaSharp.Examples](./LLama.Examples).
#### Why is my GPU not used when I have installed CUDA?
-1. If you are using backend packages, please make sure you have installed the CUDA backend package which matches the CUDA version installed on your system. Please note that before LLamaSharp v0.10.0, only one backend package should be installed at a time.
+1. If you are using backend packages, please make sure you have installed the CUDA backend package which matches the CUDA version installed on your system.
2. Add the following line to the very beginning of your code. The log will show which native library file is loaded. If the CPU library is loaded, please try to compile the native library yourself and open an issue about it. If the CUDA library is loaded, please check whether `GpuLayerCount > 0` when loading the model weights.
```cs
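// NOTE: the original snippet is truncated by this diff hunk; below is a minimal
// sketch, not part of this change. It assumes the NativeLibraryConfig API of
// recent LLamaSharp releases (requires `using LLama.Native;`); check the current
// docs for the exact overload.
NativeLibraryConfig.All.WithLogCallback((level, message) => Console.Write($"{level}: {message}"));
```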
@@ -258,6 +258,7 @@ If you want to compile llama.cpp yourself you **must** use the exact commit ID l
| v0.16.0 | | [`11b84eb4`](https://github.com/ggerganov/llama.cpp/tree/11b84eb4578864827afcf956db5b571003f18180) |
| v0.17.0 | | [`c35e586e`](https://github.com/ggerganov/llama.cpp/tree/c35e586ea57221844442c65a1172498c54971cb0) |
| v0.18.0 | | [`c35e586e`](https://github.com/ggerganov/llama.cpp/tree/c35e586ea57221844442c65a1172498c54971cb0) |
+| v0.19.0 | | [`958367bf`](https://github.com/ggerganov/llama.cpp/tree/958367bf530d943a902afa1ce1c342476098576b) |
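
For example, to pin llama.cpp to the commit matching v0.19.0 before building (a sketch of the usual git workflow; the actual build flags depend on your platform and backend):

```sh
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 958367bf530d943a902afa1ce1c342476098576b
```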
## License