[Core] Block Allocator to support KV cache CPU offloading #11532
Conversation
I will hand it over to @comaniac for final reviews.
I've gone through that PR. It seems to implement similar CPU-offloading functionality, but I'm not sure how it performs. By the way, does that PR's implementation (which offloads the KV cache during model-runner execution) duplicate Kuntai's previous disaggregated prefill PR (#10502)?
@ApostaC I tried this PR with the flashinfer backend, but I get wrong decoding results after running several requests (maybe 100 to 200), and I have no idea how to trace the error.
For PR #11385, it essentially "sends" the KV cache to the CPU pool after prefill and "receives" the KV cache from the CPU pool before prefill, so the abstractions exposed by disaggregated prefill can help that PR handle all the control-plane work.
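Schematically, that "send after prefill / receive before prefill" pattern looks something like the sketch below. This is only an illustration of the pattern, not that PR's code; `CpuPoolConnector`, its methods, and `model.prefill` are all hypothetical names.

```python
# Sketch of the pattern described above: check the CPU pool before prefill,
# push freshly computed KV into it after prefill. All names are hypothetical.

class CpuPoolConnector:
    def recv_kv(self, request_id: str):
        """Before prefill: try to fetch previously stored KV from the CPU pool."""
        ...  # returns the cached KV or None on a miss

    def send_kv(self, request_id: str, kv_cache) -> None:
        """After prefill: push the newly computed KV into the CPU pool."""
        ...


def run_prefill(connector: CpuPoolConnector, request_id: str, model, tokens):
    cached_kv = connector.recv_kv(request_id)   # CPU pool hit -> skip recompute
    if cached_kv is not None:
        return cached_kv
    kv_cache = model.prefill(tokens)            # compute KV on the GPU (placeholder call)
    connector.send_kv(request_id, kv_cache)     # offload to the CPU pool
    return kv_cache
```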
Hey @DearPlanet, can you share some basic scripts to help reproduce the problem? That would be very helpful for debugging.
@ApostaC, here is a simple reproduction process; the commands below were executed on 2x RTX 3090. Start the service:
Run the benchmark script at
Print the output content of the responses, and you can see the abnormal decoding results. I tried the default/xformers/flashinfer backends. The correct output log file: The error output log file:
I think there is a bug with _uncached_blocks. If a block is stored in _uncached_blocks, it can be released after inference completes but before it is saved to the CPU. Other requests will then reuse the block, causing its contents to be rewritten.
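For what it's worth, one way to guard against the race described above would be to hold an extra reference on any block that is pending offload and only release it once the GPU-to-CPU copy has finished. A rough sketch, not the PR's code; `OffloadGuard` and the `incr_ref`/`decr_ref` methods on the allocator are assumed names:

```python
# Rough sketch of guarding the race described above: pin a block while its
# GPU->CPU copy is in flight so the allocator cannot hand it to another
# request. All names here are hypothetical.

from typing import Set


class OffloadGuard:
    def __init__(self, allocator) -> None:
        self._allocator = allocator          # assumed to expose incr_ref/decr_ref
        self._in_flight: Set[int] = set()    # block ids currently being copied

    def begin_offload(self, block_id: int) -> None:
        # Pin the block: freeing it elsewhere only drops the refcount,
        # so the block is never reused while the copy is in flight.
        self._allocator.incr_ref(block_id)
        self._in_flight.add(block_id)

    def finish_offload(self, block_id: int) -> None:
        # Safe to release: the CPU copy now holds the data.
        self._in_flight.discard(block_id)
        self._allocator.decr_ref(block_id)
```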
Note: This PR is part of the larger CPU offloading PR #10874; it contains the CPU-offloading block allocator implementation as well as the changes in the scheduler.
TL;DR: CPU offloading outperforms prefix caching in our benchmark, and we also found that the evictor can be optimized to save 10-30% of the runtime.
End-to-end benchmarking results:
A long-document QA workload (see benchmarks/benchmark_long_document_qa.py) running on an A100-40G-SXM GPU. The GPU can cache 8 documents and the CPU can cache 30 documents.
Implementation
This PR has far fewer features than #8694, but it is truly minimal and introduces very few core changes. So I suggest we use this PR to enable CPU KV cache offloading first, and then focus on disk.
The key idea of this implementation is to track the allocated blocks that did not hit the cache and continuously copy them to the CPU after each scheduler step.
Here is the flow diagram:
This idea is borrowed from ConServe (paper: https://arxiv.org/abs/2410.01228) and is based on the assumption that CPU-GPU bandwidth is much higher than the GPU's KV cache generation throughput. Thanks to Yifan for this idea.
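To make the idea above concrete, here is a minimal Python sketch of the flow, not the PR's actual code: blocks that miss the prefix cache are queued, and after every scheduler step the pending blocks are handed to the model runner for GPU-to-CPU copies. All names (`CpuOffloadingAllocator`, `on_allocate`, `blocks_to_offload`, `on_offload_done`) are hypothetical.

```python
# Minimal sketch (hypothetical names) of the offloading flow described above:
# track GPU blocks that missed the prefix cache and copy them into a CPU
# block pool after each scheduler step.

from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class CpuOffloadingAllocator:
    """Tracks uncached GPU blocks and offloads them to the CPU each step."""
    # GPU block ids whose contents are not yet present in the CPU pool.
    _uncached_blocks: Set[int] = field(default_factory=set)
    # Maps a content hash -> CPU block id, i.e. the CPU-side prefix cache.
    _cpu_cache: Dict[int, int] = field(default_factory=dict)

    def on_allocate(self, block_id: int, content_hash: int) -> bool:
        """Called when the scheduler allocates a GPU block.

        Returns True on a CPU cache hit (the block can be swapped in instead
        of recomputed), False on a miss (the block is queued for offloading).
        """
        if content_hash in self._cpu_cache:
            return True  # swap in from the CPU pool instead of recomputing
        self._uncached_blocks.add(block_id)
        return False

    def blocks_to_offload(self) -> List[int]:
        """Called once per scheduler step: drain the pending blocks so the
        model runner can issue GPU->CPU copies for them."""
        pending = list(self._uncached_blocks)
        self._uncached_blocks.clear()
        return pending

    def on_offload_done(self, block_id: int, content_hash: int,
                        cpu_block_id: int) -> None:
        """Called once the GPU->CPU copy for a block has completed."""
        self._cpu_cache[content_hash] = cpu_block_id
```

In this sketch the copies are assumed to overlap with the next model step, which is where the ConServe assumption matters: as long as CPU-GPU bandwidth exceeds the rate at which new KV blocks are produced, the offload queue stays short.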