Transfer Learning? #1574
Hi! You are correct that the focus of the library is on training models from scratch. We haven't incorporated finetuning because there are many possible finetuning tasks (classification, detection, segmentation, tracking, etc.) and there are already good libraries for them. Instead, we try to keep Lightly as simple as possible and make it easy to pretrain models for any downstream task. In most cases you should be able to save the state dict from the pretrained backbone and load the weights in your favorite finetuning library. We also have some tutorials that include finetuning:
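The handoff described above (save the pretrained backbone's state dict, then load it where you finetune) can be sketched in plain PyTorch. The tiny `nn.Sequential` backbone here is a hypothetical stand-in for whatever backbone you actually pretrained (e.g. a ResNet), and an in-memory buffer stands in for a file on disk:

```python
import io

import torch
import torch.nn as nn

# Hypothetical stand-in for a backbone pretrained with Lightly;
# in practice this would be e.g. a torchvision ResNet.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten()
)

# After self-supervised pretraining, save only the backbone weights.
# (A BytesIO buffer stands in for torch.save(..., "backbone.pth").)
buf = io.BytesIO()
torch.save(backbone.state_dict(), buf)
buf.seek(0)

# In the finetuning code: rebuild the same architecture, load the weights,
# and attach a task-specific head (here: a 10-way classifier).
finetune_backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten()
)
finetune_backbone.load_state_dict(torch.load(buf))
model = nn.Sequential(finetune_backbone, nn.Linear(8, 10))

out = model(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```

The key point is that the state dict only transfers cleanly if the finetuning code rebuilds the backbone with the same architecture and parameter names.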
Hey guarin! Thank you for the quick responses. Maybe I wasn't 100% precise with my comment. I wasn't talking about supervised fine-tuning, but about a self-supervised step in between. Something like that: https://arxiv.org/pdf/2103.12718
Ah, sorry, I didn't understand the question correctly! In principle, everything described in the paper should already be possible with Lightly AFAIK (awesome paper btw!). If you have a pretrained model, you can just continue pretraining on a different dataset. Adding heads should also be pretty straightforward (see the classification tutorial from above). Are there any particular features that you would be interested in, or are you mostly interested in some documentation on how to do transfer learning? We also discussed hosting pretrained model weights in the past but haven't made them very easily available yet (weights are available here). Would this be something that you are looking for?
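"Continue pretraining on a different dataset" amounts to loading the saved backbone weights and running more self-supervised steps on the new domain. A minimal sketch, assuming a SimCLR-style setup: the NT-Xent loss below is a bare-bones reimplementation (Lightly ships its own loss modules), and the tiny backbone, head, and random tensors are placeholders for your real model and augmented views:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def nt_xent(z1, z2, temperature=0.5):
    """Minimal SimCLR (NT-Xent) loss: the two views of each sample are positives."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    # Mask out self-similarity so a sample is never its own positive.
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)


# Hypothetical pretrained backbone + projection head; in practice you would
# load saved weights here, e.g. backbone.load_state_dict(torch.load(...)).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
head = nn.Linear(64, 32)
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(head.parameters()), lr=1e-3
)

# One "continued pretraining" step on a batch from the new domain
# (two augmented views per image; random tensors stand in for real data).
view1, view2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
loss = nt_xent(head(backbone(view1)), head(backbone(view2)))
loss.backward()
optimizer.step()
print(loss.item())
```

Nothing in the training loop changes relative to pretraining from scratch; only the weight initialization does.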
No, I more or less wanted to engage with the community and share that I think this is an interesting, rather overlooked area. Note, however, that it isn't 100% clear how to do this. For example, one could fine-tune the original model using LoRA or elastic weight consolidation (penalize the model for deviating from the original pretrained weights).
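The elastic-weight-consolidation idea mentioned above can be sketched as a quadratic penalty added to the training loss. This is a simplified illustration, not EWC in full: the Fisher information would normally be estimated from the original data, while here it is set to ones, which reduces the penalty to a plain L2 anchor on the pretrained weights:

```python
import torch
import torch.nn as nn

# Snapshot the pretrained parameters and a (placeholder) Fisher estimate.
model = nn.Linear(4, 4)
pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder


def ewc_penalty(model, pretrained, fisher, lam=1.0):
    # Penalize each parameter for drifting from its pretrained value,
    # weighted by its (estimated) importance to the original task.
    return lam * sum(
        (fisher[n] * (p - pretrained[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )


# Immediately after loading, the penalty is zero; it grows as weights drift.
print(ewc_penalty(model, pretrained, fisher).item())  # 0.0
with torch.no_grad():
    model.weight.add_(0.1)
print(ewc_penalty(model, pretrained, fisher).item() > 0)  # True
```

In training, this term would simply be added to the self-supervised loss before calling `backward()`.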
This might be an interesting topic for a tutorial 🤔
If my perception is correct, this repo is mostly about training models from scratch.
However, for many applications, it might be best to fine-tune/domain-adapt a pretrained model on custom data.
For example, one might stack a new head on top of a pretrained model, use a small learning rate with warmup for the original model, and train the new model with SimCLR (or something else).
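The recipe above (new head, small learning rate with warmup for the pretrained part) maps naturally onto PyTorch optimizer parameter groups. A minimal sketch, with a toy backbone standing in for the pretrained model and a placeholder loss where the SimCLR objective would go:

```python
import torch
import torch.nn as nn

# Separate parameter groups: the "pretrained" backbone gets a small learning
# rate, while the freshly stacked head trains at a larger one.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
head = nn.Linear(64, 32)  # new projection head

optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-4},  # small LR for pretrained weights
        {"params": head.parameters(), "lr": 1e-2},      # larger LR for the new head
    ]
)

# Linear warmup over 10 steps, applied to both groups proportionally.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=0.1, total_iters=10
)

for step in range(3):
    optimizer.zero_grad()
    out = head(backbone(torch.randn(4, 3, 32, 32)))
    out.pow(2).mean().backward()  # placeholder loss; use NT-Xent for SimCLR
    optimizer.step()
    warmup.step()

print([group["lr"] for group in optimizer.param_groups])
```

During warmup both learning rates ramp up from 10% of their target values, but the backbone's rate stays two orders of magnitude below the head's throughout.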