Alternative workers

Uses Azure Container Instances (ACI) to run the compute worker Docker container in a serverless environment:

Adds support for NVIDIA GPUs

Adds support for real-time detailed results

Running

Edit .env_sample and save it as .env:

BROKER_URL=<Your queue's broker URL>
BROKER_USE_SSL=True
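
For example, a filled-in .env might look like this; the broker URL below is a made-up placeholder, so substitute the one shown for your own queue:

# Placeholder values -- copy the real BROKER_URL from your CodaLab queue page
BROKER_URL=pyamqp://user:password@example.com:5671/my-queue-vhost
BROKER_USE_SSL=True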

Run the following command:

docker run \
    --env-file .env \
    --name compute_worker \
    -d \
    --restart unless-stopped \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /tmp/codalab:/tmp/codalab \
    codalab/competitions-v1-compute-worker:1.1.5

For more details, see the wiki page codalab/codalab-competitions/wiki/Using-your-own-compute-workers.
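
To confirm the container actually came up, plain Docker is enough:

# The worker should be listed as "Up ..." shortly after starting
docker ps --filter name=compute_worker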

If you want to run with a GPU:

Install CUDA, the NVIDIA drivers, Docker, and nvidia-docker (system dependent)

Make sure that you have nvidia-container-toolkit set up; this also involves updating to Docker 19.03 or newer and installing NVIDIA drivers.
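
Before starting the worker, it can be worth confirming that GPU passthrough works at all; the CUDA image tag below is only an example, any recent nvidia/cuda tag will do:

# Should print your GPU(s); if this fails, the worker will not see a GPU either
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi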

Edit .env_sample and save it as .env. Make sure to uncomment USE_GPU=True.
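
After uncommenting, the relevant lines of .env would look something like this (the broker URL is still a placeholder):

BROKER_URL=<Your queue's broker URL>
BROKER_USE_SSL=True
USE_GPU=True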

Then make sure the temp directory you select is created and passed as a volume in the command below.

Run the following command:

sudo mkdir -p /tmp/codalab && nvidia-docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /var/lib/nvidia-docker/nvidia-docker.sock:/var/lib/nvidia-docker/nvidia-docker.sock \
    -v /tmp/codalab:/tmp/codalab \
    -d \
    --name compute_worker \
    --env-file .env \
    --restart unless-stopped \
    --log-opt max-size=50m \
    --log-opt max-file=3 \
    codalab/competitions-v1-nvidia-worker:v1.5-compat

To get the output of the worker:

$ docker logs -f compute_worker

To stop the worker:

$ docker kill compute_worker
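
If you also want to remove the stopped container so the compute_worker name can be reused, standard Docker does it:

# Force-remove the container (kills it first if it is still running)
docker rm -f compute_worker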

Development

To re-build the image:

docker build -t competitions-v1-compute-worker .
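
You can then start a worker from the locally built tag instead of the published image, using the same flags as in the Running section above:

docker run \
    --env-file .env \
    --name compute_worker \
    -d \
    --restart unless-stopped \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /tmp/codalab:/tmp/codalab \
    competitions-v1-compute-worker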

Updating the image

docker build -t codalab/competitions-v1-compute-worker:latest .
docker push codalab/competitions-v1-compute-worker
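
If you are also publishing a versioned tag alongside latest, the same two commands apply; the version number below is only an example:

docker build -t codalab/competitions-v1-compute-worker:1.1.6 .
docker push codalab/competitions-v1-compute-worker:1.1.6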

Special env flags

USE_GPU

Default False. When set to True, the --gpus all flag is passed to Docker.

Note: Also requires Docker v19.03 or greater, nvidia-container-toolkit, and NVIDIA drivers.

SUBMISSION_TEMP_DIR

Default /tmp/codalab
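
If you point this somewhere else, the sketch below shows the assumed setup: the path is only an example, and it presumably needs to be mounted into the worker at the same host path, since submissions are launched through the mounted docker.sock:

# In .env (example path)
SUBMISSION_TEMP_DIR=/data/codalab-tmp
# Matching volume flag added to the docker run command
-v /data/codalab-tmp:/data/codalab-tmp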

SUBMISSION_CACHE_DIR

Default /tmp/cache

CODALAB_HOSTNAME

Default socket.gethostname()
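
To override the default, set the name explicitly in .env; the value below is just an example:

CODALAB_HOSTNAME=gpu-worker-01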

DONT_FINALIZE_SUBMISSION

Default False

Sometimes it may be useful to pause the compute worker and return instead of finalizing a submission. This leaves the submission in a state where it hasn't been cleaned up yet, so you can attempt to re-run it manually.
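
To use it, set the flag in .env before starting the worker, following the same True/False convention as the other flags:

DONT_FINALIZE_SUBMISSION=True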