Models With Tied Weights Need Re-Tieing After FSDP Param Init #3154
Conversation
Thanks for the fix, just one question about when this should actually be set.
Thanks! Can you fix the quality issues and then we can merge :)
Signed-off-by: Yu Chin Fabian Lim <[email protected]>
@muellerzr waiting on your approval to trigger the workflow
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@muellerzr noticed that one of the tests failed... the test
This change creates an error if the model doesn't have
@enesmsahin sorry for the oversight, this is fixed here: #3226
What does this PR do?
Currently in `FullyShardedDataParallelPlugin`, the `param_init_fn` is set when `sync_module_states=True`. This is required by FSDP to initialize the shards' (i.e., rank > 0) params in a variety of situations, including the important one where the parameter was on `torch.device("meta")` because `low_cpu_mem_mode` was used.

However, `FullyShardedDataParallelPlugin.param_init_fn` is now set to a function that calls `to_empty()` on each module, which causes problems when there are tied weights. Consider the following scenario: two modules `A` and `B` share a tied weight, `A.weight = B.weight`.

1. `A.to_empty()` is called, and `A.weight` is reassigned to a new tensor.
2. `B.to_empty()` is called, and now `A.weight != B.weight` (see the repro sketch after this list).
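To make the failure mode concrete, here is a minimal standalone repro. This is a sketch, assuming a recent PyTorch with meta-device support; `TiedModel` is a hypothetical module for illustration, not code from this PR:

```python
import torch
import torch.nn as nn

class TiedModel(nn.Module):
    """Two submodules that share one weight tensor."""
    def __init__(self):
        super().__init__()
        self.a = nn.Linear(4, 4, bias=False)
        self.b = nn.Linear(4, 4, bias=False)
        self.b.weight = self.a.weight  # tie the weights

# Build on the meta device, as low_cpu_mem_mode would.
with torch.device("meta"):
    model = TiedModel()
assert model.a.weight is model.b.weight  # still tied

# Materialize each submodule separately, the way a per-module
# param_init_fn based on to_empty would.
model.a.to_empty(device="cpu", recurse=False)
model.b.to_empty(device="cpu", recurse=False)
print(model.a.weight is model.b.weight)  # False: the tie is broken
```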
This is observed to cause problems in the `low_cpu_mem_mode=True` case, because now when FSDP gathers the `managed_params` (see here):

- On rank 0, `param_init_fn` is not called, because `low_cpu_mem_mode` will load the weights in this shard, so `managed_params` will not have duplicates if weights are tied.
- On ranks > 0, `param_init_fn` is called, and `managed_params` may have duplicates.

Then when FSDP calls `_sync_module_params_and_buffers`, `torch.distributed._broadcast_coalesced` ends up trying to communicate a different number of tensors on different ranks. This is not as intended and causes unexpected behavior.

This PR fixes the inconsistency in the current logic in general; in particular, it is required to fix huggingface/trl#2089 to completion.
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr