[Core] Block Allocator to support KV cache CPU offloading #11532

Open · wants to merge 2 commits into base: main
Conversation

@ApostaC (Contributor) commented Dec 26, 2024

Note: This PR is part of the larger CPU offloading PR #10874 -- this PR contains the CPU-offloading block allocator implementation as well as the changes to the scheduler.

TL;DR: CPU offloading outperforms prefix caching alone in our benchmark. We also found that the evictor can be optimized to save 10-30% of the runtime.

End-to-end benchmarking results:

A long-document QA workload (see benchmarks/benchmark_long_document_qa.py) running on an A100-40G-SXM GPU. The GPU can cache 8 documents and the CPU can cache 30 documents.

[Figure: end-to-end runtime vs. number of documents]

(The table below shows the original data for the figure above.)

| Num documents | vLLM | vLLM w/ prefix caching | vLLM w/ prefix caching + CPU offloading |
|---|---|---|---|
| 8 | 13.66 | 0.49 | 0.5 |
| 16 | 27.28 | 7.22 | 2.3 |
| 32 | 54.54 | 49.96 | 17.26 |
| 64 | 109.27 | 126.08 | 110.96 |
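
For readers who want to try a similar setup offline, here is a minimal sketch. The `block_allocator` engine argument is assumed here to mirror the `--block-allocator` CLI flag used later in this thread, and the model name and document contents are placeholders:

```python
# Hedged sketch of a long-document QA style workload with CPU offloading.
# Assumption: this PR exposes the allocator via a `block_allocator` engine
# argument mirroring the `--block-allocator CpuOffloadingBlockAllocator` flag.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",       # placeholder model
    enable_prefix_caching=True,
    block_allocator="CpuOffloadingBlockAllocator",  # assumed argument name
    swap_space=30,                 # GiB of CPU memory used as the KV pool
    gpu_memory_utilization=0.9,
)

# 32 synthetic documents: more than the GPU prefix cache alone can hold.
documents = [f"Document {i}: " + "lorem ipsum dolor sit amet " * 400 for i in range(32)]
params = SamplingParams(temperature=0.0, max_tokens=64)

# The first pass warms the caches; the second pass should hit either the GPU
# prefix cache or the CPU pool instead of recomputing the prefill.
for _ in range(2):
    for doc in documents:
        llm.generate(doc + "\n\nQuestion: summarize the document.", params)
```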

Implementation

This PR has far fewer features than #8694, but it is truly minimal and requires very few core changes. So I suggest we use this PR to enable CPU KV cache offloading first, and then focus on disk offloading.

The key idea of this implementation is to keep track of the allocated blocks that did not hit the cache, and to copy them to CPU memory after each scheduler step.

Here is the flow diagram:

[Figure: flow diagram of the CPU-offloading block allocator]

This idea is borrowed from ConServe (paper: https://arxiv.org/abs/2410.01228) and is based on the assumption that CPU-GPU bandwidth is much higher than the GPU's KV cache generation throughput. Thanks to Yifan for this idea.
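
To make the flow concrete, here is a heavily simplified sketch of that bookkeeping. The class and method names (UncachedBlockTracker, drain_for_swap_out) are illustrative, not the actual APIs added by this PR:

```python
# Minimal sketch: track blocks that missed the cache, then hand them to the
# scheduler for a GPU->CPU copy once per step.
class UncachedBlockTracker:
    def __init__(self) -> None:
        # GPU block ids holding freshly computed KV data with no CPU copy yet.
        self._uncached_blocks: set[int] = set()

    def on_allocate(self, block_id: int, cache_hit: bool) -> None:
        # Blocks served from the prefix cache already exist elsewhere;
        # only freshly computed blocks need to be offloaded.
        if not cache_hit:
            self._uncached_blocks.add(block_id)

    def drain_for_swap_out(self) -> list[int]:
        # Called once per scheduler step: everything collected so far is
        # queued for a GPU->CPU copy, then the set is cleared.
        to_copy = list(self._uncached_blocks)
        self._uncached_blocks.clear()
        return to_copy
```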


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@KuntaiDu added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Dec 27, 2024
@youkaichao (Member)

I will hand it over to @comaniac for final reviews.

@comaniac (Collaborator)

@ApostaC could you take a look at #11385 and see if it's related?

@ApostaC (Contributor, Author) commented Dec 31, 2024

> @ApostaC could you take a look at #11385 and see if it's related?

I've gone through that PR. It seems to implement similar CPU offloading functionality, but I'm not sure how it will perform.

By the way, does that PR's implementation (offloading the KV cache during model runner execution) overlap with Kuntai's previous disaggregated prefill PR (#10502)?
@KuntaiDu Please chime in if you have more context on this, thanks.

@DearPlanet (Contributor)

@ApostaC I tried this PR with the flashinfer backend, but I get wrong decoding results after running several requests (maybe 100 to 200), and I have no idea how to trace the error.
It works well on the xformers/default attention backends.

@KuntaiDu (Collaborator) commented Jan 5, 2025

> > @ApostaC could you take a look at #11385 and see if it's related?
>
> I've gone through that PR. It seems to implement similar CPU offloading functionality, but I'm not sure how it will perform.
>
> By the way, does that PR's implementation (offloading the KV cache during model runner execution) overlap with Kuntai's previous disaggregated prefill PR (#10502)? @KuntaiDu Please chime in if you have more context on this, thanks.

For PR #11385, it essentially "sends" the KV cache to the CPU pool after prefill and "receives" the KV cache from the CPU pool before prefill, so the abstractions exposed by disaggregated prefill can handle all the control-plane work for that PR.
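
For context, a rough sketch of that "send/receive around prefill" framing; the class and method names (CpuKVPoolConnector, before_prefill, after_prefill) are invented here for illustration and are not the actual abstractions in #11385 or #10502:

```python
# Illustrative only: a CPU-side KV pool keyed by a hash of the prompt prefix.
from typing import Any, Dict, Optional


class CpuKVPoolConnector:
    def __init__(self) -> None:
        self._pool: Dict[str, Any] = {}

    def before_prefill(self, prefix_hash: str) -> Optional[Any]:
        # "Receive": if this prefix was seen before, return its KV cache so
        # the engine can load it into GPU blocks and skip recomputation.
        return self._pool.get(prefix_hash)

    def after_prefill(self, prefix_hash: str, kv_cache: Any) -> None:
        # "Send": stash the freshly computed KV cache in the CPU pool
        # (eviction policy omitted in this sketch).
        self._pool[prefix_hash] = kv_cache
```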

@ApostaC (Contributor, Author) commented Jan 6, 2025

> @ApostaC I tried this PR with the flashinfer backend, but I get wrong decoding results after running several requests (maybe 100 to 200), and I have no idea how to trace the error. It works well on the xformers/default attention backends.

Hey @DearPlanet, can you share some basic scripts to help reproduce the problem? That would be very helpful for debugging.

@DearPlanet (Contributor) commented Jan 7, 2025

@ApostaC, here is a simple reproduction; the commands below were executed on 2x RTX 3090:

Start service:

VLLM_ATTENTION_BACKEND=FLASHINFER CUDA_VISIBLE_DEVICES=0,1 vllm serve /host/models/Qwen2.5-32B-Instruct-AWQ/ --served-model-name qwen2.5-32b --enable-prefix-caching --block-allocator CpuOffloadingBlockAllocator --preemption_mode recomputation --swap-space 25  --tensor-parallel-size 2 --host 0.0.0.0 --port 8080 --gpu-memory-utilization 0.65 --max-model-len 3000

Run the benchmark script at vllm/benchmarks/:

python3 benchmark_serving.py --base-url http://0.0.0.0:8080 --dataset-path ./sonnet.txt --model qwen2.5-32b --tokenizer /mnt/root/models/Qwen2.5-32B-Instruct-AWQ/ --request-rate 3 --backend openai-chat --endpoint /v1/chat/completions --dataset-name sonnet

Print the output content of the responses, and you can see the abnormal decoding results.

I tried with the default/xformers/flashinfer backends:

  • 2x RTX 3090: default ✅, xformers ✅, flashinfer ❌
  • 2x L20 (with fp8 KV cache): default ❓, xformers ❌, flashinfer ❌

The correct output log file:
test_out_sonnet_default.log

The error output log file:
test_out_sonnet_flashinfer.log

@boposki commented Jan 17, 2025

I think there is a bug in _uncached_blocks: if a block is stored in _uncached_blocks, it may be released after inference completes but before it is saved to the CPU. Other requests will then reuse the block, overwriting the data behind that block ID.

@boposki commented Jan 17, 2025

> I think there is a bug in _uncached_blocks: if a block is stored in _uncached_blocks, it may be released after inference completes but before it is saved to the CPU. Other requests will then reuse the block, overwriting the data behind that block ID.

Sorry, I found that the problem is already handled by this line: self._uncached_blocks.remove(block_id)
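
For anyone tracing the same path, a hypothetical sketch of why that removal matters (the real logic lives in CpuOffloadingBlockAllocator; the names here are simplified):

```python
# Sketch of the free path: dropping the block id from _uncached_blocks when a
# block is released means a later owner of the same block id is never swapped
# out with stale or overwritten data.
class OffloadingAllocatorSketch:
    def __init__(self) -> None:
        self._uncached_blocks: set[int] = set()  # pending GPU->CPU copies
        self._free_block_ids: list[int] = []     # block ids available for reuse

    def free_block(self, block_id: int) -> None:
        if block_id in self._uncached_blocks:
            # Freed before its GPU->CPU copy happened: cancel the pending
            # offload instead of copying whatever the next owner writes.
            self._uncached_blocks.remove(block_id)
        self._free_block_ids.append(block_id)
```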

Labels: frontend, ready