CPU SHM based inference_all_reduce improve (microsoft#5320)
This PR improves SHM based inference_all_reduce on CPU:

1. Optimize for larger message sizes, which affect the performance of first token generation with long context. For example, for llama2 70b with a 1024 input sequence length, the all_reduce message size is 32MB with a single batch.
   * Increased SHM buffer size from 1MB/worker to 32MB/worker
   * Each worker allocates its SHM buffer on its own NUMA node, instead of rank 0 allocating SHM buffers for all other workers
   * For message sizes > 1MB, a more distributed algorithm is used so that memory bandwidth and computation are spread evenly among workers (see the sketch below)
2. Decouple the SHM based collective code from the oneCCL based code, making it ready to integrate with other backends, e.g. the gloo backend
3. Loosen the condition under which SHM based allreduce is used: the message size no longer has to be divisible by 32 bytes.

The new distributed algorithm, combined with the larger per-worker SHM buffer, brings a ~3x allreduce performance improvement for 32MB messages on a 2-socket machine.

---------

Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
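For illustration, here is a minimal single-process sketch of the chunked all-reduce pattern described in item 1. It is not the DeepSpeed implementation (which synchronizes separate processes through shared memory): threads stand in for ranks, per-worker vectors stand in for the per-NUMA-node SHM buffers, and the names `kWorkers` and `kElems` are made up for the sketch. The point it shows is the work split: each rank reduces one chunk of the message, so memory bandwidth and compute are spread evenly across workers instead of concentrating on rank 0.

```cpp
// Sketch only: threads model ranks, vectors model per-worker SHM buffers.
#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kWorkers = 4;  // number of ranks (assumption for the sketch)
constexpr int kElems   = 8;  // message length in elements, divisible by kWorkers

int main() {
    // One buffer per worker (per-NUMA-node SHM segments in the real code).
    std::vector<std::vector<float>> shm(kWorkers, std::vector<float>(kElems));
    std::barrier sync(kWorkers);

    auto worker = [&](int rank) {
        // Step 0: each rank publishes its input into its own buffer.
        for (int i = 0; i < kElems; ++i) shm[rank][i] = float(rank + 1);
        sync.arrive_and_wait();

        // Step 1 (reduce-scatter style): rank r sums chunk r across all
        // buffers into buffer 0. The chunk ranges are disjoint, so every
        // rank does an equal share of the reduction work.
        const int chunk = kElems / kWorkers;
        const int begin = rank * chunk, end = begin + chunk;
        for (int w = 1; w < kWorkers; ++w)
            for (int i = begin; i < end; ++i) shm[0][i] += shm[w][i];
        sync.arrive_and_wait();

        // Step 2: buffer 0 now holds the full sum; in SHM the "all-gather"
        // phase is just each rank copying the result out. Rank 0 prints it.
        if (rank == 0) {
            for (int i = 0; i < kElems; ++i) printf("%g ", shm[0][i]);
            printf("\n");
        }
    };

    std::vector<std::thread> threads;
    for (int r = 0; r < kWorkers; ++r) threads.emplace_back(worker, r);
    for (auto& t : threads) t.join();
}
```

Compile with `g++ -std=c++20 -pthread`; every element of the printed result should be 10 (= 1 + 2 + 3 + 4).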