
[Core] Support global prefix caching #11385

Open · lyppg wants to merge 4 commits into main from global_prefix_kvcache
Conversation

@lyppg commented Dec 20, 2024

An extension of APC (automatic prefix caching) to implement a global prefix cache.

A global prefix cache can be useful in the following cases:

  1. When APC entries have been evicted and no longer hit, the prefill phase of new prompts can reuse the KV cache from the global KV cache pool and skip recomputation.
  2. Another vLLM instance can use the KV cache from the global KV cache pool directly, even when it runs the prompts for the first time (i.e., no APC cache is available yet).

This PR extends APC to implement a global prefix cache. It simply uses a local Dict as the global prefix KV cache pool: the pool is written once the KV cache computation of the prefill phase is done, and it is read when updating input_tokens in model_runner (see the sketch below). The implementation is intentionally simple, involves no model or CUDA changes, and I can observe some performance improvement in my environment. I tested with papers (10–40 KB) from the Long-document-dataset on an L4 GPU; compared with vanilla vLLM, APC plus this PR reduces generation time by roughly 10%–28% while producing the same results.
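
For clarity, here is a minimal sketch of the dict-based pool described above, assuming blocks are keyed by a prefix block hash. This is only an illustration, not the PR's actual code; the class and method names (GlobalPrefixKVCachePool, put, get, longest_cached_prefix) are hypothetical.

```python
from typing import Dict, List, Optional, Tuple

import torch


class GlobalPrefixKVCachePool:
    """CPU-side pool mapping a prefix block hash to its KV cache tensors."""

    def __init__(self) -> None:
        # block_hash -> (key_cache, value_cache), stored on CPU.
        self._pool: Dict[int, Tuple[torch.Tensor, torch.Tensor]] = {}

    def put(self, block_hash: int, key: torch.Tensor,
            value: torch.Tensor) -> None:
        # Called once prefill has computed the KV cache for a full block:
        # copy it to CPU so it survives APC eviction on the GPU.
        self._pool[block_hash] = (key.detach().to("cpu"),
                                  value.detach().to("cpu"))

    def get(self, block_hash: int
            ) -> Optional[Tuple[torch.Tensor, torch.Tensor]]:
        # Called while preparing model inputs: a hit means this block's
        # prefill computation can be skipped and the KV copied back to GPU.
        return self._pool.get(block_hash)

    def longest_cached_prefix(self, block_hashes: List[int]) -> int:
        # Number of leading blocks of the prompt present in the pool.
        n = 0
        for h in block_hashes:
            if h not in self._pool:
                break
            n += 1
        return n
```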

Next Steps

In theory the benefit should grow with prompt length, based on the assumption that a CPU->GPU copy is faster than recomputing the GPU KV cache in the prefill phase, but I will do more testing on other datasets and hardware. The CPU<->GPU memory copy can be optimized to improve performance further, and the pool could also be integrated with other distributed KV cache pool projects. Please leave comments and feedback, thanks.


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small but essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@youkaichao
Member

Looks interesting. I think @comaniac is the expert on this.

@comaniac
Collaborator

At first glance this is basically CPU offloading of the KV cache? I recall there are already some efforts on this, and you may need to align with the others (cc @KrishnaM251 @KuntaiDu @zachzzc).

One good thing about this PR is the simple implementation, but I'm worried about the overhead. Since the code change is on the critical path, logic like iterating block tables may introduce non-negligible overhead in online serving scenarios, where decoding latency can be just a few milliseconds.

Also, we may need to think about how global (a.k.a. CPU) prefix caching works with swapping. Since swapped blocks and global prefix caching blocks are basically the same, they should be managed together so that users can better control the usage of CPU blocks.

@lyppg
Author

lyppg commented Dec 29, 2024

At first glance this is basically CPU offloading of the KV cache? I recall there are already some efforts on this, and you may need to align with the others (cc @KrishnaM251 @KuntaiDu @zachzzc).

One good thing about this PR is the simple implementation, but I'm worried about the overhead. Since the code change is on the critical path, logic like iterating block tables may introduce non-negligible overhead in online serving scenarios, where decoding latency can be just a few milliseconds.

Also, we may need to think about how global (a.k.a. CPU) prefix caching works with swapping. Since swapped blocks and global prefix caching blocks are basically the same, they should be managed together so that users can better control the usage of CPU blocks.

Thanks @youkaichao @comaniac for the feedback.
It seems the idea in this PR is a little different from the other implementations: it mainly focuses on caching for the prefill phase (since prefill is compute-intensive, we get the most benefit by skipping KV cache recomputation) and can serve as a second layer of cache. I also want to keep it simple and easy to extend, so that it can be used in a vLLM cluster and a distributed KV cache pool in the future. Please let me know if my understanding is correct and whether other work is also going in this direction.

I agree that iterating block tables brings additional overhead, but it only impacts the prefill phase, and it can be further reduced by modifying BlockTable; for simplicity I didn't add that part. The other overhead is the asynchronous GPU->CPU copy after the first computation in prefill (see the sketch below for how that copy can be kept off the compute stream). The good news is that users can choose to disable this feature in their real scenarios if the overhead is too large for them.
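
As an illustration of what the asynchronous GPU->CPU copy could look like (a rough sketch, not the PR code; the function name offload_block_async and the pinned-memory/side-stream structure are my own assumptions), the copy can be issued on a separate CUDA stream so it overlaps with compute:

```python
import torch

copy_stream = torch.cuda.Stream()


def offload_block_async(gpu_block: torch.Tensor) -> torch.Tensor:
    """Start an asynchronous copy of one KV cache block to pinned CPU memory."""
    cpu_block = torch.empty(gpu_block.shape, dtype=gpu_block.dtype,
                            device="cpu", pin_memory=True)
    # Make the copy stream wait until the compute stream has finished
    # writing this block, then issue the copy without blocking compute.
    copy_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(copy_stream):
        cpu_block.copy_(gpu_block, non_blocking=True)
        # Prevent the allocator from reusing gpu_block before the copy lands.
        gpu_block.record_stream(copy_stream)
    # The caller must synchronize copy_stream before reading cpu_block.
    return cpu_block
```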

@comaniac
Collaborator

Please note that we are now enabling chunked prefill in most use cases, so decode requests will be batched together with prefills. That means the prefill scheduling overhead affects decode performance as well.

@lyppg lyppg force-pushed the global_prefix_kvcache branch 2 times, most recently from 6e58711 to e28cb94 on December 30, 2024 04:48
Signed-off-by: Jingxin Pan <[email protected]>
@lyppg lyppg force-pushed the global_prefix_kvcache branch from e28cb94 to 7a43c65 on December 30, 2024 05:03
Signed-off-by: Jingxin Pan <[email protected]>
@lyppg
Author

lyppg commented Jan 3, 2025

Please note that we are now enabling chunked prefill in most use cases, so decode requests will be batched together with prefills. That means the prefill scheduling overhead affects decode performance as well.

@comaniac that's good to know, thanks. I just submitted a change to avoid the block table iteration.
I took a deeper look at #11532; it seems we both want to improve the cache hit ratio.
In our case we want to separate prefill and decode, so I haven't tested chunked prefill much; let me test more. While I am testing/fixing/optimizing the current code, could you advise how to align with #11532?
