[Core] Support global prefix caching #11385
base: main
Conversation
Signed-off-by: Jingxin Pan <[email protected]>
Looks interesting. I think @comaniac is the expert on this.
At first glance this is basically CPU offloading of the KV cache? I recall there are already some efforts on this, and you may need to align with others (cc @KrishnaM251 @KuntaiDu @zachzzc). One good thing about this PR is the simple implementation, but I'm worried about the overhead. Since the code change is on the critical path, logic like iterating block tables may introduce non-negligible overhead in online serving scenarios, where decoding latency can be just a few milliseconds. We also need to think about how global (a.k.a. CPU) prefix caching works with swapping: since swapped blocks and global prefix caching blocks are basically the same, they should be managed together so that users can better control the usage of CPU blocks.
Thanks @youkaichao @comaniac for the feedback. I agree that iterating block tables brings additional overhead, but it only impacts the prefill phase, and it can be further optimized by modifying BlockTable; for simplicity I didn't add that part. Another overhead is the GPU->CPU async copy after the first computation in prefill. The good thing is that users can disable this feature if the overhead turns out to be too large in their real scenarios.
Please note that we are now enabling chunked prefill in most use cases, so decode requests will be batched together with prefills. That means the prefill scheduling overhead affects decode performance as well.
@comaniac that's good to know, thanks. I just submitted a change to avoid block table iteration.
An extension of APC (automatic prefix caching) to implement a global prefix cache
A global prefix cache can be useful in the following use cases:
This PR extends APC to implement a global prefix cache. It uses a local dict as the global prefix KV-cache pool: the pool is written to once KV-cache computation in the prefill phase is done, and read from when updating input_tokens in the model_runner. The current implementation is deliberately simple; it involves no model or CUDA changes, and I already observe a performance improvement in my environment. Testing with papers (10–40 KB) from the Long-document-dataset on an L4 GPU, APC plus this PR reduces generation time by roughly 10%–28% compared with vanilla vLLM, with identical results.
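To make the mechanism concrete, here is a minimal sketch of the dict-backed pool described above. The class and method names (`GlobalPrefixCachePool`, `put`, `get`) and the hash-based key are illustrative assumptions on my part, not the identifiers or key scheme used in this PR:

```python
from typing import Dict, Optional

import torch


class GlobalPrefixCachePool:
    """Dict-backed CPU pool for prefix KV blocks (illustrative sketch only)."""

    def __init__(self) -> None:
        # Maps a prefix-block key (e.g. a hash over the prefix tokens) to a
        # CPU copy of the corresponding KV-cache block.
        self._pool: Dict[int, torch.Tensor] = {}

    def put(self, block_key: int, kv_block_gpu: torch.Tensor) -> None:
        # Called after the block's KV cache has been computed in prefill:
        # offload a copy of the block from GPU to CPU memory.
        self._pool[block_key] = kv_block_gpu.detach().to("cpu")

    def get(self, block_key: int) -> Optional[torch.Tensor]:
        # Called while preparing model inputs: if the block is cached,
        # return the CPU copy so it can be loaded instead of recomputed.
        return self._pool.get(block_key)
```

A real integration would also need an eviction policy to cap CPU memory usage and, as noted in the review above, coordination with the existing swapped-block management.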
Next Steps
In theory it should work even better for longer prompts, based on the assumption that a CPU->GPU copy is faster than recomputing the GPU KV cache in the prefill phase, but I will do more testing on other datasets and hardware. The CPU<->GPU memory copy can be optimized further to improve performance, and the pool could also be integrated with other distributed KV-cache pool projects. Please leave comments and feedback, thanks.
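As one possible direction for the copy optimization mentioned above (an assumption on my part, not something this PR implements), KV blocks could be staged in pinned host memory and moved with non-blocking transfers on a dedicated CUDA stream, so the copies overlap with computation:

```python
import torch

# Dedicated stream so host<->device copies can overlap with compute
# (requires a CUDA-capable device).
copy_stream = torch.cuda.Stream()


def offload_kv_block(kv_block_gpu: torch.Tensor) -> torch.Tensor:
    """Asynchronously copy a KV block from GPU into pinned CPU memory."""
    kv_block_cpu = torch.empty(
        kv_block_gpu.shape, dtype=kv_block_gpu.dtype, device="cpu", pin_memory=True
    )
    with torch.cuda.stream(copy_stream):
        kv_block_cpu.copy_(kv_block_gpu, non_blocking=True)
    return kv_block_cpu


def load_kv_block(kv_block_cpu: torch.Tensor, device: str = "cuda") -> torch.Tensor:
    """Asynchronously copy a cached KV block from pinned CPU memory back to GPU."""
    with torch.cuda.stream(copy_stream):
        return kv_block_cpu.to(device, non_blocking=True)


# Note: callers must synchronize copy_stream (or record/wait on an event)
# before reading the copied tensors.
```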