
Skyhacks2020

Streamlit

Build the Streamlit Docker image

Run in project root:

    docker build -f Dockerfile_streamlit -t streamlit_skyhacks3:latest .

Running and stopping the Streamlit Docker container

The command below runs the Streamlit app in the background (as a daemon):

    docker run -p 80:8501 -d streamlit_skyhacks3:latest

Now you can access the Streamlit app from a browser at <localhost or server IP> (the -p 80:8501 flag maps the container's port 8501 to port 80 on the host)

To stop the container run:

    docker stop `docker ps -qf "ancestor=streamlit_skyhacks3:latest"`

Hackathon template

A convenient starting template for most deep learning projects. Built with PyTorch Lightning and Weights&Biases (wandb).

Install Anaconda

https://docs.conda.io/projects/conda/en/latest/user-guide/install/download.html

Create conda env

    conda create --name hack_env
    conda activate hack_env

Make sure the correct Python is on your PATH

Unix

    which python

Windows

    for %i in (python.exe) do @echo. %~$PATH:i

Expected result: PATH_TO_CONDA/envs/ENV_NAME/bin/python (on Windows: PATH_TO_CONDA\envs\ENV_NAME\python.exe)

Install pytorch with conda

https://pytorch.org/get-started/locally/

    conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
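
To verify the install and check that CUDA is visible, a quick sanity check run from inside the hack_env environment:

    import torch

    print(torch.__version__)          # the version conda just installed
    print(torch.cuda.is_available())  # True if a CUDA-capable GPU and driver are found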

Clone repo

    git clone https://github.com/kinoai/hackathon-template

Install requirements with pip

    cd hackathon-template
    pip install -r requirements.txt

Important notes!

  • If you are not using a GPU (or your GPU is not CUDA-compatible), you may need to set the number of GPUs manually instead of leaving the default -1 in config.yaml:
    num_of_gpus: 0
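
For illustration, this is roughly how a value like num_of_gpus flows from config.yaml into the Trainer; the loading code and key names here are assumptions for the sketch, not necessarily the template's exact code:

    import yaml
    from pytorch_lightning import Trainer

    # Hypothetical config loading; the template may wrap this differently.
    with open("config.yaml") as f:
        config = yaml.safe_load(f)

    # -1 means "use all available GPUs", 0 forces CPU-only training.
    trainer = Trainer(gpus=config["num_of_gpus"])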

Useful tips

  • Useful pl.Trainer() parameters (a combined sketch follows this list):

    • gpus=-1 - use all gpus available on your machine
    • accumulate_grad_batches=5 - perform optimisation after accumulating gradient from 5 batches
    • accumulate_grad_batches={5: 3, 10: 20} - no accumulation for epochs 1-4, accumulate 3 batches for epochs 5-10, and 20 batches after that
    • auto_scale_batch_size='power' - automatically find the largest batch size that fits into memory and is a power of 2 (requires calling trainer.tune(model, datamodule))
    • check_val_every_n_epoch=10 - run validation loop every 10 training epochs
    • val_check_interval=0.25 - check validation set 4 times during a training epoch
    • fast_dev_run=True - runs 1 train, val, test batch and program ends (great for debugging)
    • min_epochs=1 - force training for at least this many epochs
    • overfit_batches=0.01 - use only 1% of the train set (and use the train set for val and test)
    • overfit_batches=10 - use only 10 batches of the train set (and use the train set for val and test)
    • limit_train_batches=0.25 - run through only 25% of the training set each epoch
    • limit_val_batches=0.25
    • limit_test_batches=0.25
    • precision=16 - set tensor precision (default is 32 bits)
    • gradient_clip_val=0.5 - gradient clipping value (0 means don’t clip), helps with exploding gradient issues
    • profiler=SimpleProfiler() - print execution time info for each method used
    • weights_summary='full' - print model info
    • amp_backend='apex' - apex backend for mixed precision training https://github.com/NVIDIA/apex
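
A minimal sketch wiring a few of these flags together. The tiny model and random dataset are placeholders for whatever your project defines, and gpus=-1 with precision=16 assumes a CUDA-capable machine:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl

    class TinyModel(pl.LightningModule):
        """Placeholder module; swap in the template's real model."""
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.mse_loss(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters())

    # Random stand-in data so the sketch runs end to end.
    dataset = TensorDataset(torch.randn(64, 32), torch.randn(64, 1))
    loader = DataLoader(dataset, batch_size=8)

    trainer = pl.Trainer(
        gpus=-1,                    # all available GPUs
        precision=16,               # 16-bit tensors to cut memory use
        accumulate_grad_batches=5,  # optimiser step after every 5 batches
        gradient_clip_val=0.5,      # guards against exploding gradients
        fast_dev_run=True,          # one batch per loop, then exit
    )
    trainer.fit(TinyModel(), loader)
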
  • PyTorch Lightning Bolts is the official collection of prebuilt models across many research domains: https://github.com/PyTorchLightning/lightning-bolts

  • PyTorch Hub is a pre-trained PyTorch model repository designed for research exploration: https://pytorch.org/hub/

  • List of all tools in the PyTorch ecosystem: https://pytorch.org/ecosystem/

About

Skyhacks 2020 - 6th place out of ~15 teams
