add zero3 module_granularity_threshold to zero optimization #6649

Conversation
@inkcherry there are some errors in the CI workflows; are they related to your change?
@delock - there were issues with the nv-accelerate and nv-torch workflows, but both of those should be resolved now.
@inkcherry this PR looks very promising. On which model did you benchmark the performance?
@nelyahu The model I'm testing has 64 experts per MoE layer, with each expert containing 3 linear layers. Including the non-expert parameters, each MoE layer consists of 197 parameter tensors (all weights, no biases). There are 48 layers in the model in total. I think it might be similar in style to open-source MoE models.
@inkcherry, thanks for this PR. Can you clarify the difference between coalesced params and the leaf modules? I notice that this implementation relies on the leaf-modules code.
thanks for the review @tjruwase. I found that it is also helpful on GPU (although not as markedly as on HPU) in such cases, so I think it is suitable to add it to the comm optimization config, and I renamed it accordingly, because personally I think z3_leaf_module seems more suitable as an attribute or API name. The reduce-hook-overhead scenario should be just one use case of z3_leaf_module (another case seems aimed at fixing the issue where prefetch cannot predict accurately, since the parameters used in the model's forward pass may differ from those in the trace). Adding an independent switch might also facilitate conditional handling in the future.
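For context on the leaf-module mechanism referenced above, here is a minimal, hypothetical sketch of how it is typically applied today, assuming the `deepspeed.utils.set_z3_leaf_modules` API; the `TinyExpertBlock` class is an illustrative stand-in for a fine-grained MoE block, not code from this PR:

```python
import torch.nn as nn
from deepspeed.utils import set_z3_leaf_modules

# Illustrative stand-in for a fine-grained MoE block: 64 experts,
# each made of 3 small Linear layers (mirroring the model described above).
class TinyExpertBlock(nn.Module):
    def __init__(self, dim=16, num_experts=64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim, bias=False),
                          nn.Linear(dim, dim, bias=False),
                          nn.Linear(dim, dim, bias=False))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # Toy routing: sum over all experts (a real MoE would route tokens).
        return sum(expert(x) for expert in self.experts)

model = nn.Sequential(TinyExpertBlock(), nn.Linear(16, 16))

# Mark the block as a ZeRO-3 leaf: its parameters are gathered as one unit,
# and no per-submodule hooks are installed inside it.
set_z3_leaf_modules(model, [TinyExpertBlock])
```

Marking the block as a leaf means ZeRO-3 gathers its parameters in one coalesced fetch rather than installing and firing hooks on each of the 192 inner Linear modules, which is the host overhead this PR targets.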
@inkcherry, thanks for the explanation. I agree that avoiding recursive fetching overhead is valuable. I will be glad to hear your thoughts. Also, can you please share some unit tests to demonstrate usage?
@tjruwase, thank you for your suggestions. Yes, I agree with your concerns. Initially, I used the config because I felt this API was difficult for users to become aware of (unless they encountered a related issue and searched the issue tracker), or they might recognize the API but be unable to determine its performance impact (compared to other fetch-related optimization settings in the config, such as overlap_comm, bucket_size, etc.). I discussed this with @delock, and I modified it to an int variable, module_granularity_threshold, that represents the size of the model granularity.
Hi @inkcherry, I'm wondering how to set the module_granularity_threshold. Can you provide a heuristic for setting this value?
Hi @skyshine102, when you enable this switch (set the number > 0, regardless of whether it takes effect), it will print every module's granularity. In theory, the smallest values should appear in blocks like XXMoeSparseBlock (all expert params) or XXMoeDecoderLayer (all experts + some norm params). You only need to set a granularity greater than or equal to that printed value. If you find that the z3 hook overhead affects performance, this may be helpful.
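To make that heuristic concrete, here is a small, hypothetical helper (not part of this PR); the formula is an assumption based on the description above, treating granularity as a module's parameter count divided by one plus its sub-module count:

```python
import torch.nn as nn

def print_module_granularity(model: nn.Module) -> None:
    """Hypothetical helper: estimate and print per-module granularity,
    assuming granularity ~= parameter_count / (1 + sub_module_count)."""
    for name, module in model.named_modules():
        param_count = sum(p.numel() for p in module.parameters())
        # modules() yields the module itself first, so subtract one.
        sub_module_count = sum(1 for _ in module.modules()) - 1
        granularity = param_count / (1 + sub_module_count)
        print(f"{name or '<root>'}: params={param_count}, "
              f"submodules={sub_module_count}, granularity={granularity:.1f}")
```

Run it once on your model, find the smallest granularity among the MoE blocks, and set the threshold at or just above that value.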
This PR adds a Z3 coalesced fetch option to zero optimization. Currently, some of the existing leaf-module logic can be reused, but it is hard for users to recognize it as an optimization choice (I only discovered this logic while trying to implement the feature).
The benefit of this approach is reduced host overhead (far fewer hooks) during the recursive fetching of parameters, especially in fine-grained models such as those with a large number of MoE experts. This is particularly helpful for host-sensitive devices (such as HPU), where it achieved a 40% performance improvement in our customer workloads.
FYI @delock @deepcharm
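As a usage sketch, the new threshold would sit in the zero_optimization section of the DeepSpeed config. The key name stage3_module_granularity_threshold and the placeholder value below are assumptions based on this thread, not a definitive reference:

```python
# Hypothetical DeepSpeed config sketch; key name and value are assumptions.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {
        "stage": 3,
        # Assumed key from this PR: modules whose granularity (params per
        # sub-module) falls below this value are fetched as one coalesced
        # unit, leaf-module style. 0 (assumed default) disables the feature.
        "stage3_module_granularity_threshold": 32768,  # placeholder value
    },
}
```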