Calico node pod crashes on a new physical worker node #9193
Comments
calico pod logs:
It's not obvious to me why the pod is getting killed; it looks healthy in the logs.
That doesn't sound right; the init daemon in the pod is never supposed to exit.
I have been trying to make this work for a week now. I even tried replacing Calico with Flannel across the whole cluster, and saw the same behavior with Flannel. I suspect the problem is related to the physical host, but I'm not sure what else to check.
Are there any more logs from calico-node (after the container terminates) or from the IPAM?
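For reference, one way to collect the logs requested above is to pull them from the previous (crashed) container instance. This sketch assumes a default Calico install in the kube-system namespace with the standard `k8s-app=calico-node` label; `<new-node>` and `<calico-node-pod>` are placeholders, not values from this issue:

```shell
# Find the calico-node pod scheduled on the new physical worker
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide \
  --field-selector spec.nodeName=<new-node>

# Show the container's last state, exit code, and termination reason
kubectl -n kube-system describe pod <calico-node-pod>

# Fetch logs from the previous (terminated) container instance,
# which survive the restart that plain 'kubectl logs' would hide
kubectl -n kube-system logs <calico-node-pod> -c calico-node --previous
```

The `--previous` flag is the key part: after a crash loop, the current container's logs look clean, while the terminated instance's logs usually show why it exited.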
Expected Behavior
I have a Kubernetes cluster running on vCenter, with 1 master and 2 worker nodes (all VMs), and everything has been working fine for a year.
Now I want to add another worker node, but this one is a physical server.
I installed it the exact same way as the other nodes, placed it in the same VLAN as the other nodes, and joined it to the cluster successfully.
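As a quick sanity check after a join like the one described above (the exact join method isn't stated in this issue, so these are generic commands, not the reporter's steps):

```shell
# The new node should be listed; it typically shows NotReady
# until the CNI (Calico) pod on it becomes healthy
kubectl get nodes -o wide

# Confirm the system pods (kube-proxy, calico-node) were scheduled
# on the new node; <new-node> is a placeholder for its name
kubectl -n kube-system get pods -o wide | grep <new-node>
```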
Current Behavior
A new calico node pod is created and deployed on the new worker.
I can see using calicoctl that the connections are being established:
After about a minute the pod restarts, it loses its connections, and that's it.
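For context, the calicoctl check mentioned above is presumably something like the following (assuming a default install; the namespace and label are conventions, not confirmed by this issue):

```shell
# Show BGP peer connections from the node's perspective;
# run on the node itself, or via 'kubectl exec' into the calico-node pod
sudo calicoctl node status

# Watch the restart count climb as the pod crash-loops
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
```

A pod that establishes BGP peers and then restarts about a minute later often points at a failing liveness/readiness probe or an init process being killed, which is consistent with the maintainer's comment that the init daemon should never exit.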
Some logs:
Possible Solution
I suspect it is related to the fact that this is a physical host.
Steps to Reproduce (for bugs)
Context
Workload pods cannot run on this node until this is fixed.
Your Environment