In the pytorch-lightning 2.2.1 environment, MAML is implemented by wrapping the `meta_learn` function with `torch.enable_grad`, so the model's outer-loop losses are still computed with gradient tracking during validation, which leads to a memory overflow. This can be avoided by wrapping the outer-loop loss computation in the validation procedure with `torch.no_grad()`.
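A minimal sketch of the distinction being described (this is an illustrative stand-in, not the actual MAML/`meta_learn` code from the affected implementation): computing the outer loss under `torch.enable_grad()` keeps the autograd graph alive, while wrapping the same computation in `torch.no_grad()` discards it, so no graph accumulates across validation batches.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for an outer-loop (query-set) loss computation.
model = nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

# Training-style outer loss: enable_grad keeps the autograd graph,
# which is needed for the meta-update but costly if retained per batch.
with torch.enable_grad():
    train_outer_loss = F.mse_loss(model(x), y)

# Validation-style outer loss: no_grad builds no graph, so memory
# does not grow as validation batches are processed.
with torch.no_grad():
    val_outer_loss = F.mse_loss(model(x), y)

print(train_outer_loss.requires_grad)  # True  (graph retained)
print(val_outer_loss.requires_grad)    # False (no graph)
```

In the reported setup, the suggested fix amounts to applying the `torch.no_grad()` pattern above around the validation-time outer-loss calculation, overriding the `torch.enable_grad` context used for training.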