- This repository provides a minimal template for optimized training of deep learning models with reproducible results.
- It provides configurable training code built on PyTorch Lightning, supporting both Slurm clusters and single-GPU setups.
- The repository has code for:
- PyTorch Lightning Trainer and Pipeline
- HPO Pipeline Trainer using Ray-Tune
- Wandb Logger
- LR schedulers to be used with the Trainer, including the LARS LR scheduler from MAE
- Feature extraction helper base class to extract features from different models
- Downstream logistic-regression-based classification and segmentation tasks, for easy comparison of learned features without additional processing
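As a sketch of what such an LR scheduler computes, the following shows the half-cycle cosine decay with linear warmup used in the MAE codebase, written as a plain per-epoch function. The function name and arguments are illustrative, not this repository's API:

```python
import math

def mae_lr_schedule(epoch, base_lr, min_lr, warmup_epochs, total_epochs):
    """Per-epoch learning rate: linear warmup, then half-cycle cosine decay.

    Mirrors the schedule style used in MAE; all names here are illustrative.
    """
    if epoch < warmup_epochs:
        # Linear warmup from 0 up to base_lr.
        return base_lr * epoch / warmup_epochs
    # Cosine decay from base_lr down to min_lr over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return min_lr + (base_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * progress))

# At the end of warmup the rate equals base_lr; at total_epochs it reaches min_lr.
print(mae_lr_schedule(10, 1e-3, 1e-6, 10, 100))   # 0.001
print(mae_lr_schedule(100, 1e-3, 1e-6, 10, 100))  # 1e-06 (approximately)
```

In practice a function like this is wrapped in an optimizer hook or a `torch.optim.lr_scheduler.LambdaLR` so the rate is updated each epoch.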
To customize this repository, use it as a template to create a new repository. After the new repository is created, update the package folder as follows:
- remove poetry.lock
- in pyproject.toml, update lines 1-7 as needed
- update the references in training/pipeline/trainer.py and training/pipeline/training.py [TODO: add GitHub Actions]; this is handled automatically if you rename via refactor in VS Code
- add Python packages as needed
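For reference, the opening lines of a typical Poetry pyproject.toml look like the following. The field values here are placeholders, not this repository's actual metadata:

```toml
[tool.poetry]
name = "my-new-package"        # placeholder: should match your renamed package folder
version = "0.1.0"
description = "My customized training template"
authors = ["Your Name <you@example.com>"]
```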
```shell
mv training <new package name>
pip install poetry
poetry install
```