
cage stop is slow #64

Open
seamusabshere opened this issue Dec 19, 2016 · 5 comments
Comments

@seamusabshere
Member

it often takes multiple minutes to cage stop an app

a similar docker rm -f $(docker ps -qa) takes like 10 seconds

i'm guessing we're happy to delegate this to docker-compose, but maybe we should go whole hog: docker ps -qa | parallel [...] ... 😀
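The fan-out suggested above could be sketched with xargs -P as a stand-in for GNU parallel (this is a sketch of the idea, not what cage actually does; the -P 8 concurrency limit is an arbitrary choice):

```shell
# Stop every running container, fanning each ID out to its own
# `docker stop`, up to 8 at a time. `-r` skips the run entirely
# when there are no containers.
docker ps -q | xargs -r -n 1 -P 8 docker stop
```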

@emk
Contributor

emk commented Dec 19, 2016

The underlying issue is that docker stop is brutally slow. We can try to add parallelism, but I'm not sure how much it will help, since all the actual work gets done in the docker daemon. Worth a shot.

@seamusabshere
Member Author

is it safe to docker kill or docker rm?

@erithmetic
Contributor

docker rm has different behavior than docker stop. A stop halts the running container but leaves the stopped container in an inspectable state. An rm deletes the container entirely (stopping it first if necessary).
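For illustration, the difference looks like this (mycontainer is a hypothetical container name; these commands assume a running Docker daemon):

```shell
docker stop mycontainer     # container exits but stays around ("Exited" status)
docker logs mycontainer     # still works: logs and filesystem are preserved
docker inspect mycontainer  # metadata is still queryable
docker rm mycontainer       # now the container, its logs, and its state are gone
```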

@luislavena

Hello,

I've found that stopping placeholder containers that are currently in use (open connections) might be the cause of the slowdown.

Since containers seem to be stopped alphabetically, services like db are stopped before web.

To confirm this, run cage stop web first and then cage stop; both complete considerably faster:

$ cage status
db              enabled type:placeholder
└─ db           DONE
frontend        enabled type:service
└─ web          DONE ports:3000 mounted:rails_hello
rake            enabled type:task
└─ rake         DONE mounted:rails_hello

$ cage up
Starting myapp_db_1
WARNING: Found orphan containers (myapp_db_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Starting myapp_web_1

# perform a request to the webapp in order to establish a connection
# ...

$ time cage stop
Stopping myapp_db_1 ... done
Stopping myapp_web_1 ... done

real	0m11.258s
user	0m0.496s
sys	0m0.084s

Now, compare:

$ cage status
db              enabled type:placeholder
└─ db           RUNNING
frontend        enabled type:service
└─ web          RUNNING ports:3000 mounted:rails_hello
rake            enabled type:task
└─ rake         DONE mounted:rails_hello

$ time cage stop web
Stopping myapp_web_1 ... done

real	0m0.701s
user	0m0.372s
sys	0m0.056s

$ time cage stop
Stopping myapp_db_1 ... done

real	0m0.869s
user	0m0.504s
sys	0m0.044s

In my research I found that docker-compose is able to issue stop to multiple containers in parallel and wait for all of them, avoiding long wait times.

I'm not sure how cage issues the stop, but perhaps just ensuring that containers which depend on a placeholder are stopped before the placeholder itself might improve the situation.
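Until the ordering is fixed, a manual workaround (using the service names from the example above) might be:

```shell
# Hypothetical workaround: stop the service holding connections open first,
# then the placeholder stops quickly with nothing connected to it.
cage stop web && cage stop
```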

Cheers.

@emk
Contributor

emk commented Mar 5, 2017

Ah, nice find! I'm going to be doing a bunch of cage upgrades in the coming weeks and I'll take a look at this then!
