Inside a model definition, torch.nn.Module objects stored in a plain Python list do not get their parameters registered. Hence such parameters are not trained by the optimizer, even though the modules are in the call graph formed by forward(). torchfix should flag this -- currently no warning is given for this issue.
Example:
import torch

class FeedForward(torch.nn.Module):
    def __init__(self, n_features, n_classes, n_hidden, width):
        super().__init__()

        # Ideally, torchfix should issue a warning on the code below.
        # The parameters of the hidden layers do not get registered if they
        # are kept in a plain list, and are not optimized!
        self.hidden_layers = [
            torch.nn.Linear(n_features if i == 0 else width, width, bias=True)
            for i in range(n_hidden)
        ]

        # Correct version of the above code -- use ModuleList([]) instead of a Python list []
        self.hidden_layers = torch.nn.ModuleList([
            torch.nn.Linear(n_features if i == 0 else width, width, bias=True)
            for i in range(n_hidden)
        ])

        # Dummy call to torch.solve() to throw a torchfix warning
        # (to demonstrate that torchfix is working correctly)
        torch.solve()
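A quick way to confirm the failure mode (a minimal sketch; the Broken/Fixed class names are just for illustration, not part of torchfix): parameters() returns nothing for the plain-list version, so an optimizer built from it has nothing to update.

import torch

class Broken(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Plain list: the submodules are NOT registered with the parent module.
        self.hidden_layers = [torch.nn.Linear(4, 4) for _ in range(3)]

class Fixed(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # ModuleList: the submodules ARE registered, so their parameters appear.
        self.hidden_layers = torch.nn.ModuleList([torch.nn.Linear(4, 4) for _ in range(3)])

print(len(list(Broken().parameters())))  # 0 -- the optimizer would see nothing to train
print(len(list(Fixed().parameters())))   # 6 -- weight + bias for each of 3 layers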
Torchfix output:
$ torchfix --select=ALL ./supervised/nn/feed_forward_nn.py
supervised/nn/feed_forward_nn.py:20:9: TOR001 Use of removed function torch.solve: https://github.com/pytorch-labs/torchfix#torchsolve
Finished checking 1 files.
Yikes! Perhaps it can be done on a best-effort basis for some commonly used class types, such as Linear, Conv2d, and other subclasses of torch.nn.Module as found here: https://pytorch.org/docs/stable/nn.html? Maybe it could be done only for list comprehensions? I would imagine that's a common idiom many people use.
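For what it's worth, a best-effort check along those lines could be prototyped with libcst (the parsing library TorchFix is built on). This is only a sketch: the ModuleListLint class and its crude "nn." name test are illustrative assumptions, not TorchFix's actual rule.

import libcst as cst
import libcst.matchers as m

class ModuleListLint(cst.CSTVisitor):
    # Best-effort heuristic: flag `self.attr = [<nn call> for ...]` assignments.
    def visit_Assign(self, node: cst.Assign) -> None:
        target = node.targets[0].target
        # Only consider attribute assignments like `self.hidden_layers = ...`.
        if not m.matches(target, m.Attribute(value=m.Name("self"))):
            return
        # A list comprehension whose element is a call, e.g. torch.nn.Linear(...).
        if m.matches(node.value, m.ListComp(elt=m.Call())):
            elt = cst.ensure_type(node.value, cst.ListComp).elt
            code = cst.Module(body=[]).code_for_node(elt)
            if "nn." in code:  # crude name-based check, for illustration only
                print("warning: nn.Module objects kept in a plain list; "
                      "use torch.nn.ModuleList so their parameters are registered")

source = """
class FF(torch.nn.Module):
    def __init__(self, n_hidden, width):
        self.hidden_layers = [torch.nn.Linear(width, width) for i in range(n_hidden)]
"""
cst.parse_module(source).visit(ModuleListLint())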
I'll contribute this rule. Got it working locally, just waiting for the open PRs to be reviewed/merged.
There is a real-world example in transformers (its impact is mitigated by the subsequent add_module calls). Other than that, violations of this rule are fairly rare in larger projects but moderately common in smaller repos (10+ examples).
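For reference, the add_module mitigation mentioned above works roughly like this (a sketch, not the actual transformers code): each list entry is explicitly registered with the parent by name, so parameters() still finds it despite the plain list.

import torch

class MitigatedFF(torch.nn.Module):
    def __init__(self, n_hidden, width):
        super().__init__()
        # The plain list alone would leave these layers unregistered...
        self.hidden_layers = [torch.nn.Linear(width, width) for _ in range(n_hidden)]
        # ...but add_module registers each child with the parent module,
        # so parameters() and state_dict() see them anyway.
        for i, layer in enumerate(self.hidden_layers):
            self.add_module(f"hidden_{i}", layer)

print(len(list(MitigatedFF(3, 4).parameters())))  # 6: weight + bias for each of 3 layers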