
[Core] Block Allocator to support KV cache CPU offloading #11532

Open · wants to merge 2 commits into base: main
Conversation

@ApostaC commented Dec 26, 2024

Note: This PR is part of the larger CPU offloading PR #10874 -- this PR contains the CPU-offloading block allocator implementation as well as the changes to the scheduler.

TL;DR: In our benchmark, CPU offloading outperforms prefix caching alone. We also found that the evictor can be optimized to save 10-30% of the runtime.

End-to-end benchmarking results:

A long-document QA workload (see benchmarks/benchmark_long_document_qa.py) running on an A100-40G-SXM GPU. The GPU can cache 8 documents and the CPU can cache 30 documents.

[Figure: benchmark results chart]

(The table below contains the original data for the figure above.)

| Num documents | vLLM | vLLM w/ prefix caching | vLLM w/ prefix caching + CPU offloading |
|---------------|-------|------------------------|-----------------------------------------|
| 8             | 13.66 | 0.49                   | 0.5                                     |
| 16            | 27.28 | 7.22                   | 2.3                                     |
| 32            | 54.54 | 49.96                  | 17.26                                   |
| 64            | 109.27| 126.08                 | 110.96                                  |
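The trend in the table can be reproduced with a toy cache model. The sketch below (illustrative only, not vLLM code; the `hit_rate` helper and capacities are assumptions matching the setup described above: 8 GPU-cached documents plus 30 CPU-cached documents) shows why plain prefix caching falls off a cliff once the document count exceeds the GPU cache, while the CPU tier extends the useful working set:

```python
# Toy model: documents are queried round-robin, and the KV cache behaves
# like an LRU over whole documents. Once the working set exceeds capacity,
# sequential access thrashes LRU and the hit rate collapses; adding a CPU
# tier effectively raises the capacity from 8 to 8 + 30 = 38 documents.
from collections import OrderedDict


def hit_rate(num_docs: int, capacity: int, rounds: int = 2) -> float:
    """LRU hit rate for round-robin access over num_docs documents."""
    cache: OrderedDict = OrderedDict()
    hits = total = 0
    for _ in range(rounds):
        for doc in range(num_docs):
            total += 1
            if doc in cache:
                hits += 1
                cache.move_to_end(doc)  # refresh recency on a hit
            else:
                cache[doc] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)  # evict least recently used
    return hits / total


print(hit_rate(16, 8))       # GPU-only prefix cache: thrashes, 0.0
print(hit_rate(16, 8 + 30))  # GPU + CPU tier: second round all hits, 0.5
```

This matches the measured shape: at 16 and 32 documents offloading helps a lot, and at 64 documents (beyond even the 38-document combined capacity) all three configurations converge.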

Implementation

This PR has far fewer features than #8694, but it is truly minimal and makes very few core changes. So I suggest we use this PR to enable CPU KV cache offloading first, and then focus on disk.

The key idea of this implementation is to track the allocated blocks that did not hit the prefix cache, and copy them to CPU after each scheduler step.

Here is the flow diagram:

[Figure: flow diagram of the CPU offloading path]

This idea is borrowed from ConServe (paper: https://arxiv.org/abs/2410.01228) and relies on the assumption that CPU-GPU copy bandwidth is much higher than the GPU's KV cache generation throughput. Thanks to Yifan for this idea.


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which exercises a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of these by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge

🚀

@KuntaiDu KuntaiDu added the ready ONLY add when PR is ready to merge/full CI is needed label Dec 27, 2024
Labels: frontend, ready