For instructions about how to submit to the VisioMel Challenge: Predicting Melanoma Relapse, start with the code submission format page of the competition website.
Welcome to the runtime repository for the VisioMel Challenge: Predicting Melanoma Relapse! This repository contains the definition of the environment where your code submissions will run. It specifies both the operating system and the software packages that will be available to your solution.
This repository has three primary uses for competitors:
💡 Provide example solutions: You can find examples to help you develop your solution.
- The random baseline solution contains minimal code that runs successfully in the runtime environment and outputs a proper submission. It simply generates a random probability between zero and one for each tif. You can use this as a guide to bring in your model and generate a submission.
- The benchmark solution contains submission code and assets from the benchmark blog post.
🧪 Test your submission: Test your submission using a locally running version of the competition runtime to discover errors before submitting to the competition site. You can also find a scoring script implementing the competition metric.
📦 Request new packages in the official runtime: Since your submission will not be able to access the internet, all packages must be pre-installed. If you want to use a package that is not in the runtime environment, make a pull request to this repository. Make sure to test out adding the new package to both official environments, CPU and GPU.
To test your submission locally, you will need:
- A clone of this repository
- Docker
- At least 13 GB of free space for both the sample data and Docker images
- GNU make (optional, but useful for running the commands in the Makefile)
Additional requirements to run with GPU:
- NVIDIA drivers with CUDA 11
- NVIDIA Docker container runtime
In the official code execution platform, `code_execution/data` will contain the actual test data, which no participants have access to, and is what will be used to compute your score for the leaderboard.
To help you develop and debug your submissions, we provide a small sample of data with the same format. These files are created from the train set, but mimic the setup that you'll have in the runtime container.
Start by downloading `code_execution_development_data.tgz` from the data download page. Unzip and extract the archive to the `data` directory, and you can develop and debug your submission on your local machine.
$ tree data
data
├── 8tn0wx0q.tif
├── qpbyhjj8.tif
├── submission_format.csv
├── test_labels.csv
└── test_metadata.csv
Note that there will be around 500 tifs in the actual test data. And of course, the actual test data will not include a labels csv; the one included here is for your use in scoring locally.
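If you want a quick look at how these files are laid out before writing any inference code, a short pandas snippet like the one below can help. This is just a local inspection aid; it assumes pandas is installed in your local environment and hard-codes nothing about the column layout.

```python
# Quick inspection of the development data files (local use only).
from pathlib import Path

import pandas as pd

DATA_DIR = Path("data")

for name in ["submission_format.csv", "test_labels.csv", "test_metadata.csv"]:
    df = pd.read_csv(DATA_DIR / name)
    print(f"\n{name}: {df.shape[0]} rows x {df.shape[1]} columns")
    print(df.head())
```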
Your final submission should be a zip archive named with the extension `.zip` (for example, `submission.zip`). The root level of the `submission.zip` file must contain a `main.py` which performs inference on the test images and writes the predictions to a file named `submission.csv` in the same directory as `main.py`.
For more detail, see the "what to submit" section of the code submission page.
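As a concrete illustration, a minimal `main.py` along the lines of the random baseline described above might look like the sketch below. The relative `data` path and the assumption that every non-index column of `submission_format.csv` holds a probability to fill in are ours; check the code submission page for the authoritative layout.

```python
# Minimal sketch of a main.py, mirroring the random baseline described above.
# Assumptions: the test data directory is available at ./data relative to main.py,
# and every non-index column of submission_format.csv is a probability to fill in.
from pathlib import Path

import numpy as np
import pandas as pd

DATA_DIR = Path("data")


def main():
    submission_format = pd.read_csv(DATA_DIR / "submission_format.csv", index_col=0)

    rng = np.random.default_rng(0)
    predictions = pd.DataFrame(
        rng.uniform(0.0, 1.0, size=submission_format.shape),
        index=submission_format.index,
        columns=submission_format.columns,
    )

    # The output must match the submission format exactly (same index and columns).
    assert (predictions.index == submission_format.index).all()
    assert list(predictions.columns) == list(submission_format.columns)

    # Write next to main.py, as required by the code submission format.
    predictions.to_csv("submission.csv")


if __name__ == "__main__":
    main()
```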
Here are a few dos and don'ts:
Do:
- Include a `main.py` in the root directory of your submission zip. There can be extra files with more code that is called.
- Include any model weights in your submission zip, as there will be no network access.
- Write out a `submission.csv` to the root directory when inference is finished, matching the submission format exactly.
- Log general information that will help you debug your submission (see the short logging sketch after this section).
- Test your submission locally and using the smoke test functionality on the platform.
- Consider ways to optimize your pipeline so that it runs end-to-end in under 8 hours.
Don't:
- Read from locations other than your solution directory.
- Use information from other images in the test set in making a prediction for a given tif file.
- Print or log any information about the test metadata or test images, including specific data values and aggregations such as sums, means, or counts.
Participants who violate the rules will be subject to disqualification from the competition.
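For the logging point above, plain standard-library logging is enough. The sketch below is one possible pattern; the messages are placeholders, and it deliberately avoids logging anything derived from the test metadata or test images.

```python
# One way to add debug-friendly logging to main.py (standard library only).
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger(__name__)

logger.info("Loading model weights from assets/ ...")
logger.info("Starting inference")
# Log progress and timings, but never values, aggregates, or counts derived
# from the test metadata or test images.
logger.info("Inference complete; writing submission.csv")
```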
When you make a submission on the DrivenData competition site, we run your submission inside a Docker container, a virtual operating system that allows for a consistent software environment across machines. The best way to make sure your submission to the site will run is to first run it successfully in the container on your local machine.
This section provides instructions on how to run your submission in the code execution container from your local machine. To simplify the steps, key processes have been defined in the `Makefile`. Commands from the `Makefile` are run with `make {command_name}`. The basic steps are:
make pull
make pack-submission
make test-submission
Run `make help` for more information about the available commands, as well as information on the official and built images that are available locally.
Here's the process in a bit more detail:
- First, make sure you have set up the prerequisites.
- Download the code execution development dataset and extract it to `data`.
- Download the official competition Docker image:
$ make pull
Note that if you have built a local version of the runtime image with `make build`, that image will take precedence over the pulled image when using any make commands that run a container. You can explicitly use the pulled image by setting the `SUBMISSION_IMAGE` shell/environment variable to the pulled image or by deleting all locally built images.
- Save all of your submission files, including the required `main.py` script, in the `submission_src` folder of the runtime repository. Make sure any needed model weights and other assets are saved in `submission_src` as well.
- Create a `submission/submission.zip` file containing your code and model assets:
$ make pack-submission
mkdir -p submission/
cd submission_src; zip -r ../submission/submission.zip ./*
adding: solution.py (deflated 73%)
- Launch an instance of the competition Docker image, and run the same inference process that will take place in the official runtime:
$ make test-submission
This runs the container entrypoint, which unzips `submission/submission.zip` in the root directory of the container and runs the `main.py` script from your submission. In the local testing setting, the final submission is saved out to `submission/submission.csv` on your local machine.
⚠️ Remember that `code_execution/data` is just a mounted version of what you have saved locally in `data`, so you will just be using the publicly available training files for local testing. In the official code execution platform, `code_execution/data` will contain the actual test data.
When you run `make test-submission`, the logs will be printed to the terminal and written out to `submission/log.txt`. If you run into errors, use the `log.txt` to determine what changes you need to make for your code to execute successfully. For an example of what the logs look like when the full process runs successfully, see `example_log.txt`.
We have provided a scoring script to calculate the competition metric in the same way scores will be calculated in the DrivenData platform. The development dataset includes test labels, allowing you to score your predictions (although the actual scores won't be very meaningful).
- After running your submission, the predictions generated by your code should be saved to `submission/submission.csv`.
- Make sure the simulated test labels are saved in `data/test_labels.csv`.
- Run `scripts/score.py` on your predictions:
$ python scripts/score.py
2023-03-21 16:01:04.372 | SUCCESS | __main__:main:15 - Score: 1.4787577257948117
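The authoritative metric implementation is `scripts/score.py`. If you want to reproduce roughly what it does in your own code, a sketch like the following computes a binary log loss with scikit-learn; both the assumption that the metric is a log loss and the single-column layout are ours and should be verified against the problem description.

```python
# Rough local sanity check only -- scripts/score.py is the authoritative metric.
# Assumes a single prediction column and that the metric is a binary log loss;
# verify both against the problem description before relying on this.
import pandas as pd
from sklearn.metrics import log_loss

labels = pd.read_csv("data/test_labels.csv", index_col=0)
predictions = pd.read_csv("submission/submission.csv", index_col=0).loc[labels.index]

print(log_loss(labels.iloc[:, 0], predictions.iloc[:, 0]))
```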
The benchmark code is also provided as an example of how to structure a submission that includes loading model assets. See the benchmark blog post for a full walkthrough of the solution development. The process to run the benchmark is the same as running your own submission, except that you will reference code in `examples_src` rather than `submission_src`.
make pull
make pack-example
make test-submission
Note that here we are running `pack-example` instead of `pack-submission`. Just like with your submission, the final predictions will be saved to `submission/submission.csv` on your local machine.
You can also try out the `random_baseline` solution by setting `EXAMPLE` when you run the make command:
EXAMPLE=random_baseline make pack-example
Fine-tuning an existing model is common practice in machine learning. Many software packages will download the pre-trained model from the internet behind the scenes when you instantiate a model. That will fail in the code execution environment, since submissions do not have open access to the internet. Instead, you will need to include all needed weights in your `submission.zip` and make sure that your code loads them from disk rather than trying to download them from the internet.
For example, PyTorch uses a local cache which by default is saved to `~/.cache/torch`. Identify which of the weights in that directory are needed to run inference (if any), and copy them into your submission. If we need pre-trained ResNet34 weights that we previously downloaded, we could run:
# Copy your local pytorch cache into submission_src/assets
cp ~/.cache/torch/checkpoints/resnet34-333f7ec4.pth submission_src/assets/
# Zip it all up in your submission.zip
zip -r submission.zip submission_src
When the platform runs your code, it will extract `assets` to `/code_execution/assets`. You'll need to tell PyTorch to use your custom cache directory instead of `~/.cache/torch` by setting the `TORCH_HOME` environment variable in your Python code (in `main.py`, for example).
import os
os.environ["TORCH_HOME"] = "/code_execution/assets/torch"
Now PyTorch will load the model weights from the local cache, and your submission will run correctly in the code execution environment without internet access.
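If you prefer not to rely on the cache layout at all, you can also load the copied checkpoint explicitly. The sketch below assumes the `.pth` file ended up in an `assets/` folder inside your unzipped submission and that it is a plain torchvision ResNet34 state dict; adjust the path to wherever you actually placed the weights.

```python
# Alternative: load the copied checkpoint directly instead of going through the cache.
# Assumes the .pth file sits in an assets/ folder inside the unzipped submission.
import torch
import torchvision

model = torchvision.models.resnet34()  # architecture only, no download
state_dict = torch.load("assets/resnet34-333f7ec4.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```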
The `make` commands will try to select the CPU or GPU image automatically by setting the `CPU_OR_GPU` variable based on whether `make` detects `nvidia-smi`.
If you have `nvidia-smi` and a CUDA version other than 11, you will need to explicitly set `make test-submission` to run on CPU rather than GPU. `make` will detect your GPU and automatically select the GPU image, but the submission will fail because the GPU image requires CUDA version 11.
CPU_OR_GPU=cpu make pull
CPU_OR_GPU=cpu make test-submission
If you want to try using the GPU image on your machine but you don't have a GPU device that can be recognized, you can use `SKIP_GPU=true`. This will invoke `docker` without the `--gpus all` argument.
If you want to use a package that is not in the environment, you are welcome to make a pull request to this repository. If you're new to the GitHub contribution workflow, check out this guide by GitHub.
The runtime manages dependencies using conda environments. Here is a good general guide to conda environments. The official runtime uses Python 3.10.9 environments.
To submit a pull request for a new package:
- Fork this repository.
- Edit the conda environment YAML files, `runtime/environment-cpu.yml` and `runtime/environment-gpu.yml`. There are two ways to add a requirement:
  - Conda package manager (preferred): Add an entry to the `dependencies` section. This installs from a conda channel using `conda install`. Conda performs robust dependency resolution with other packages in the `dependencies` section, so we can avoid package version conflicts.
  - Pip package manager: Add an entry to the `pip` section. This installs from PyPI using `pip`, and is an option for packages that are not available in a conda channel.
For both methods, be sure to include a version, e.g., `numpy==1.20.3`. This ensures that all environments will be the same.
- Locally test that the Docker image builds successfully for CPU and GPU images:
CPU_OR_GPU=cpu make build
CPU_OR_GPU=gpu make build
- Commit the changes to your forked repository.
- Open a pull request from your branch to the `main` branch of this repository. Navigate to the Pull requests tab in this repository, and click the "New pull request" button. For more detailed instructions, check out GitHub's help page.
- Once you open the pull request, we will use GitHub Actions to build the Docker images with your changes and run the tests in `runtime/tests`. For security reasons, administrators may need to approve the workflow run before it happens. Once it starts, the process can take up to 30 minutes, and may take longer if your build is queued behind others. You will see a section on the pull request page that shows the status of the tests and links to the logs.
- You may be asked to submit revisions to your pull request if the tests fail or if a DrivenData team member has feedback. Pull requests won't be merged until all tests pass and the team has reviewed and approved the changes.
Thanks for reading! Enjoy the competition, and hit up the forum if you have any questions!