[Serving] Support multi-threading CPU sampling (mlc-ai#1232)
This PR supports multi-threaded token sampling on CPU. In serving scenarios, token sampling becomes one of the bottlenecks, because model computation has much higher throughput than in single-sequence settings. Therefore, we enhance CPU sampling with multi-threading. In particular:

* This PR changes the design scope of Sampler. Prior to this PR, the sampling function exposed by Sampler sampled a single token; after this PR, the function processes a batch of tokens, which makes multi-threading more manageable (see the sketch below).
* Multi-threading is currently backed by OpenMP, which performed best in our micro-benchmarks. Note: to effectively enable OpenMP, compilation with gcc/g++ is now required.
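A minimal sketch of how batched CPU sampling can be parallelized with OpenMP. The function name, signature, and data layout below are illustrative assumptions, not the actual Sampler interface introduced by this PR.

```cpp
// Hypothetical sketch: sample one token per sequence from a batch of
// probability distributions, parallelized across sequences with OpenMP.
// BatchSampleTokens and its layout are illustrative, not the mlc-llm API.
#include <omp.h>

#include <cstdint>
#include <random>
#include <vector>

// probs: one probability vector (length vocab_size) per sequence in the batch.
// seeds: one RNG seed per sequence, so results are reproducible per sequence.
std::vector<int32_t> BatchSampleTokens(const std::vector<std::vector<float>>& probs,
                                       const std::vector<uint64_t>& seeds) {
  std::vector<int32_t> token_ids(probs.size());
  // Each sequence is sampled independently, so the loop parallelizes trivially.
#pragma omp parallel for schedule(static)
  for (int64_t i = 0; i < static_cast<int64_t>(probs.size()); ++i) {
    std::mt19937_64 rng(seeds[i]);
    std::discrete_distribution<int32_t> dist(probs[i].begin(), probs[i].end());
    token_ids[i] = dist(rng);
  }
  return token_ids;
}
```

Because each sequence owns its RNG and output slot, no synchronization is needed inside the parallel loop; this is the property that makes the batched interface easier to multi-thread than the previous single-token interface.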
1 parent 3f38242 · commit 36ea52d · 3 changed files with 72 additions and 65 deletions.