pretrained weights are probably incorrect #7
Comments
I also have some issues with the pre-trained checkpoints. The checkpoints only include the keys "target_encoder" and "prototypes". If I try to load a checkpoint via the training script, I get errors because the keys "epoch" and "encoder" are missing.
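Since the released files contain only "target_encoder" and "prototypes" (no "epoch" or "encoder"), one workaround is to extract the target-encoder weights yourself rather than going through the resume logic of the training script. A minimal sketch, assuming the weights may carry a DataParallel-style "module." prefix (the key names in the dummy checkpoint below are illustrative, not the repo's actual layout):

```python
def extract_target_encoder(checkpoint, prefix="module."):
    """Pull the target-encoder weights out of a released checkpoint dict.

    The released files (per this thread) hold only "target_encoder" and
    "prototypes"; training-time keys like "epoch" and "encoder" are absent,
    so a resume-style loader must not require them.
    """
    state = checkpoint["target_encoder"]
    # Strip an optional DataParallel/DistributedDataParallel prefix.
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state.items()}

# Hypothetical minimal checkpoint, for illustration only:
ckpt = {
    "target_encoder": {"module.blocks.0.weight": [0.1], "norm.bias": [0.0]},
    "prototypes": [[0.0]],
}
weights = extract_target_encoder(ckpt)
# "weights" now maps "blocks.0.weight" and "norm.bias" to their tensors,
# ready for model.load_state_dict(weights, strict=False) in PyTorch.
```

With `strict=False`, PyTorch ignores keys that are missing on either side, which sidesteps the errors about absent training-state entries.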
Hi @BestSonny, there are 1024 prototypes used in the loss, but I just checked the ViT-B/16 and ViT-B/4 pre-trained weights, and they both have the correct output dimension of 768. Please let me know if you would like some more clarification or help loading the models!
Hi @brewormly, yes, the current checkpoints only include the "target_encoder", since that is the network used at the end of pre-training to obtain the results in the paper, but I would be happy to release the full checkpoints as well in case you find them useful! Will ping you once these are online!
@MidoAssran, would it be possible to release the ImageNet-1k specific checkpoints (fine-tuned and/or linear-eval'd)? By "linear-eval'd" I mean keeping the target encoder frozen and just training a linear layer on top of it. So, essentially, the target encoder params (which are already released) plus the linear layer params.
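The linear-eval setup described above can be sketched as follows. This is a generic PyTorch sketch, not the repo's actual evaluation script: the placeholder encoder, dimensions, and hyperparameters are assumptions, with only the ViT-B width of 768 taken from this thread.

```python
import torch
import torch.nn as nn

# Stand-in for the released target encoder; in practice you would build
# the repo's ViT-B/16 and load the released "target_encoder" weights.
embed_dim, num_classes = 768, 1000
encoder = nn.Sequential(nn.Linear(32, embed_dim))  # placeholder backbone

# Linear eval: the target encoder stays frozen...
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

# ...and only a linear classification head on top of it is trained.
head = nn.Linear(embed_dim, num_classes)
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)

x = torch.randn(4, 32)                 # dummy batch of inputs
with torch.no_grad():
    feats = encoder(x)                 # frozen features
logits = head(feats)
loss = nn.functional.cross_entropy(
    logits, torch.randint(0, num_classes, (4,)))
loss.backward()                        # gradients flow into the head only
optimizer.step()
```

Releasing just the head weights on top of the already-public target encoder would be enough to reproduce the linear-eval numbers.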
I have the same issue!
Sorry for the late reply after one year. I wonder if there is still a plan to release the full checkpoints? I think they would be very helpful for continuing training on other tasks.
The pretrained weights seem to be wrong.
For example, the vit_base checkpoint has a dimension of 1024 rather than the expected 768.
Could you upload the correct version? Thanks
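One quick way to check which encoder width a checkpoint actually has is to look at the length of a 1-D weight (e.g. a final LayerNorm) in its state dict. The key name and dummy dict below are illustrative assumptions, not this repo's guaranteed layout:

```python
def embed_dim_of(state_dict, key="norm.weight"):
    """Return the encoder width implied by a 1-D weight tensor.

    For a ViT-B this should be 768; a value of 1024 would point at a
    ViT-L-width tensor (or, as suspected here, a mismatched checkpoint).
    """
    return len(state_dict[key])

# Illustrative fake state dict standing in for torch.load(...)["target_encoder"]:
fake_vit_b = {"norm.weight": [0.0] * 768}
print(embed_dim_of(fake_vit_b))  # 768
```

If this reports 1024 on the vit_base file, the checkpoint contents do not match the advertised architecture; if it reports 768, the mismatch is more likely in how the state dict is being loaded.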