[Core] Performance optimization for swap_blocks by cuda kernels #11531
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
vllm/worker/worker.py
Outdated
self.blocks_to_swap_out_buffer = torch.zeros((max_num_blocks, 2),
                                             dtype=torch.int64,
                                             device="cpu",
                                             pin_memory=True)
Some systems do not have pin memory (notably, WSL); we need to take care of that. Otherwise this PR LGTM.
WSL does not support UVA, either. You can use is_pin_memory_available to determine if this optimization can be used.
Sounds good, just pushed a new commit fixing this.
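For reference, a minimal sketch of how the allocation could be guarded, using vLLM's is_pin_memory_available helper (the make_swap_buffer wrapper is hypothetical, not the actual diff in this PR):

import torch

from vllm.utils import is_pin_memory_available


def make_swap_buffer(max_num_blocks: int) -> torch.Tensor:
    # Hypothetical helper, not the actual diff in this PR.
    # Fall back to pageable host memory on platforms without pinned-memory
    # or UVA support (e.g. WSL).
    pin_memory = is_pin_memory_available()
    return torch.zeros((max_num_blocks, 2),
                       dtype=torch.int64,
                       device="cpu",
                       pin_memory=pin_memory)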
This PR is part of the big CPU offloading PR #10874; it contains the new CUDA kernel implementation for swap_blocks.
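For context, a rough sketch of how the op is driven from Python, assuming the vllm._custom_ops.swap_blocks(src, dst, block_mapping) binding and toy tensor shapes:

import torch

from vllm import _custom_ops as ops

num_blocks, block_numel = 8, 16
# Device-side and host-side caches; dim 0 indexes blocks.
src = torch.randn(num_blocks, block_numel, device="cuda")
dst = torch.empty(num_blocks, block_numel, device="cpu", pin_memory=True)

# (src_block, dst_block) index pairs staged in pinned host memory, so the
# copy path can read them without an extra pageable-to-pinned hop.
block_mapping = torch.tensor([[0, 3], [1, 7]],
                             dtype=torch.int64,
                             device="cpu",
                             pin_memory=True)

# Copy GPU blocks 0 and 1 into CPU blocks 3 and 7 (swap-out direction).
ops.swap_blocks(src, dst, block_mapping)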
Performance benchmark
The numbers were collected on A100-40GB-SXM GPUs.
Notes: CUDA graph compatibility
Currently, it pre-allocates a pinned CPU tensor for blocks_to_swap_in and blocks_to_swap_out. Since the address of the pre-allocated buffer won't change, this could support CUDA graphs in the future. I did not include CUDA graph support in this PR; as a next step, I can create a new PR for it.
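To illustrate why the fixed buffer address matters, a hedged sketch (not part of this PR) of the intended capture-and-replay pattern with torch.cuda.CUDAGraph; whether swap_blocks is capturable as-is is exactly the follow-up work mentioned above:

import torch

from vllm import _custom_ops as ops

num_blocks, block_numel = 8, 16
src = torch.randn(num_blocks, block_numel, device="cuda")
dst = torch.empty(num_blocks, block_numel, device="cpu", pin_memory=True)

# Pre-allocated pinned buffer; its address gets baked into the graph.
block_mapping = torch.zeros((num_blocks, 2), dtype=torch.int64,
                            device="cpu", pin_memory=True)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    ops.swap_blocks(src, dst, block_mapping)

# Later: rewrite the mapping in place (same address), then replay the graph.
block_mapping[0, 0], block_mapping[0, 1] = 0, 3
g.replay()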