k3s restore from backup fails when node IP changes #10041
-
Hello, I have a working multicloud HA cluster that uses the nodes' external IPs for flannel communication. Now I'd like to use the Tailscale VPN instead, for example to be able to add new master and agent nodes behind NAT. So I did the following:
So, obviously, k3s expects to restore to the node with the same IP. Regarding changing the network: this is actually my use case at the moment. I'd like to use the Tailscale VPN instead of the native mesh over public IPs.
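As a hedged sketch of that direction: recent k3s releases ship an integrated Tailscale option via the `--vpn-auth` flag (this assumes k3s v1.27 or newer with Tailscale installed on the node; the join key below is a placeholder):

```shell
# Assumption: k3s >= v1.27 with built-in Tailscale integration,
# and the tailscale client installed on the host.
# "ts-authkey-PLACEHOLDER" stands in for a real Tailscale auth key.
k3s server --vpn-auth="name=tailscale,joinKey=ts-authkey-PLACEHOLDER"
```

With this, nodes join the cluster over the tailnet rather than their public IPs, which is what makes NAT-ed nodes reachable.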
Replies: 2 comments
-
No, that is not correct. The cluster-reset that is performed as part of the snapshot restore updates the cluster membership to the current node's IP.
You have specified `--node-ip` when running `k3s server`, but not when running `k3s server --cluster-reset`. You need to set your node IPs consistently. This may be easier to do if you place them in the config.yaml, so you don't forget to set them on the command line when running through the restore.
-
You are right, specifying `--node-ip` for the cluster-reset as well fixed it. Thanks a lot!
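One way to keep the IPs consistent, as suggested above, is the config file. A sketch with placeholder addresses (assuming the default config location `/etc/rancher/k3s/config.yaml`):

```yaml
# /etc/rancher/k3s/config.yaml
# Read by every `k3s server` invocation, including `k3s server --cluster-reset`,
# so the node IP no longer depends on remembering a command-line flag.
node-ip: 100.64.0.1             # placeholder VPN/Tailscale address
node-external-ip: 203.0.113.10  # placeholder public address
```

With this in place, a restore such as `k3s server --cluster-reset --cluster-reset-restore-path=<snapshot>` picks up the same node IP automatically.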