to start processing one needs three components:
redis: this is basically the head node (it steers the messaging between headcli and the workers). runtime: unlimited
headcli: this is the component that knows the DAG and submits tasks to the workers. runtime: length of the DAG processing
workers: these are celery workers that execute the tasks the headcli submits
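
a minimal docker-compose.yml matching these service names might look like the sketch below (the image names and the celery command are assumptions, not taken from this repo):

redis:
  image: redis

worker:
  image: recastworkflow          # assumed image name
  command: celery worker         # assumed worker invocation
  links:
    - redis                      # workers link to redis, so it has to exist first

head:
  image: recastworkflow          # assumed image name
  links:
    - redis
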
now for docker swarm one would need a 3-step process:
1) start the redis and worker containers (i.e. now we have the messaging set up and workers waiting for jobs)
docker-compose scale redis=1 worker=X  # note: the order is important. the workers want to link to redis, so it needs to exist before they start
2) start the head container (the run cmd here will submit the actual jobs)
docker-compose run --rm head recastworkflow-headnode /bla
3) tear down redis and the workers after the head container has exited
docker-compose stop && docker-compose rm
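
put together, a wrapper script along these lines could run the full cycle (a sketch only, assuming the services above; NUM_WORKERS is a made-up variable and rm -f just skips the interactive confirmation):

#!/bin/bash
NUM_WORKERS=${NUM_WORKERS:-4}                      # made-up default worker count

# step 1: messaging first, then the workers that link to it
docker-compose scale redis=1 worker=$NUM_WORKERS

# step 2: run the head container, passing any script arguments on to the headnode command
docker-compose run --rm head recastworkflow-headnode "$@"

# step 3: tear down redis and the workers
docker-compose stop && docker-compose rm -f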