fails to start with a timeout with Kubernetes 1.11 #282
Comments
After a second attempt, it works.
I get this timeout just as @alban described, except it's reproducible every time.

$ kube-spawn start
Warning: kube-proxy could crash due to insufficient nf_conntrack hashsize.
setting nf_conntrack hashsize to 131072...
making iptables FORWARD chain defaults to ACCEPT...
new poolSize to be : 5490739200
Starting 3 nodes in cluster default ...
Waiting for machine kube-spawn-default-worker-naz6fc to start up ...
Waiting for machine kube-spawn-default-master-yz3twq to start up ...
Waiting for machine kube-spawn-default-worker-u5fu6n to start up ...
Failed to start machine kube-spawn-default-master-yz3twq: timeout waiting for "kube-spawn-default-master-yz3twq" to start
Failed to start machine kube-spawn-default-worker-naz6fc: timeout waiting for "kube-spawn-default-worker-naz6fc" to start
Failed to start cluster: starting the cluster didn't succeed

Note: for debugging, is there any place this thing logs to?
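kube-spawn runs the cluster nodes as systemd-nspawn containers, so one place to look for start-up failures is systemd-machined and the journal of each node. A minimal sketch, assuming the machine names from the output above and that the nodes get registered with machined on the host:

# List the machines that are actually registered and running
$ machinectl list

# Show status and a short log excerpt for one node
$ machinectl status kube-spawn-default-master-yz3twq

# Read the journal from inside that node, if it got far enough to boot
$ sudo journalctl -M kube-spawn-default-master-yz3twq -b

These are generic systemd-machined commands, not kube-spawn-specific ones, so whether they show anything depends on how far the machine got before the timeout.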
ok, never mind. all I had to do was:
and it works. jeez
Seems to be related to #325.
Sure, except I didn't destroy it first; that's where I got the timeout from. Apologies if the order in step 2 of the resolution comment created confusion. Also, I can't reproduce it now. :/
To Reproduce:
ssh -i ~/.ssh/$KEY fedora@$IP
Then the error message:
More debug info:
The third machine does not exist anymore?
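One way to check whether the third machine still exists is to ask systemd-machined directly; machinectl list only shows running machines, while list-images shows the container images on disk. A minimal sketch, assuming the worker name from the start-up output above:

# Running machines only
$ machinectl list

# Images on disk; a stopped or failed machine may still leave an image behind
$ machinectl list-images | grep kube-spawn-default

Both commands are generic systemd-machined queries and may show nothing if kube-spawn already cleaned the machine up.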