Replies: 2 comments 2 replies
-
@TratsiakY you need to specify which model you are using; not all models have a head that is simply an nn.Linear, so you need to make sure that holds for your model. That said, if it does, the initialization will differ between the two methods. When num_classes is passed through create_model (or the model entrypoint fn, e.g. `resnet50(num_classes=...)`), the model is created with that nn.Linear and the model's default init is used for the classifier; this is model specific. When you call model.reset_classifier(num_classes=...) or replace the head manually like you did, the default init for an nn.Linear layer is used.
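A minimal sketch of the difference (the model name and class count below are placeholders, not from this thread):

```python
import timm

NUM_CLASSES = 10  # placeholder class count, just for illustration

# Path 1: pass num_classes at creation time. The head is built inside the
# model and initialized with that model's own, model-specific init scheme.
model_a = timm.create_model('mobilenetv3_small_050', pretrained=True, num_classes=NUM_CLASSES)

# Path 2: build the model, then swap the head. reset_classifier (or manually
# assigning a new nn.Linear) uses the plain nn.Linear default init instead.
model_b = timm.create_model('mobilenetv3_small_050', pretrained=True)
model_b.reset_classifier(num_classes=NUM_CLASSES)

# The two heads can therefore be drawn from different weight distributions.
print(model_a.classifier.weight.std().item(), model_b.classifier.weight.std().item())
```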
-
Got it, thanks. I tested this with a few models: mobilenetv3_small_050 and resnet18. In the case of resnet18 I replaced the fc layer. I got the same issue with both of them: with the create_model approach the accuracy is lower.
-
Hi,
I have an interesting issue with models that were created in two different ways:
# approach 1
model = timm.create_model(model_name, pretrained=True, num_classes=n)
# approach 2
model = timm.models.model_name(pretrained=True)
model.classifier = torch.nn.Linear(in_features, n)
model_name here is the name of the model I was using; it is the same in both cases.
After training these two models on the same dataset for the same number of epochs, I got different accuracy on the validation dataset. It is consistently higher with the second approach: the accuracies are 0.77 and 0.94 for approaches 1 and 2 respectively. The code is exactly the same in both cases except for the model-creation part, and the training and test datasets are also the same.
The main question is: are these two model-creation methods equivalent? I have checked the initial weights and they are the same. Why do I get different accuracy when it should be the same?
Could anybody check and confirm/disprove this issue?
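A minimal way to check this yourself (model name and class count are placeholders; the RNG is seeded before each build so the head inits are comparable):

```python
import timm
import torch

n = 5  # placeholder class count

torch.manual_seed(0)
m1 = timm.create_model('mobilenetv3_small_050', pretrained=True, num_classes=n)

torch.manual_seed(0)
m2 = timm.create_model('mobilenetv3_small_050', pretrained=True)
m2.classifier = torch.nn.Linear(m2.classifier.in_features, n)

# True only if every tensor, including the freshly built head, is identical.
same = all(torch.equal(v, m2.state_dict()[k]) for k, v in m1.state_dict().items())
print(same)
```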