Upgrading from 3.27 to 3.28: IPPool issue #9100
Comments
Using the command gives the same result.
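Presumably the command in question is the standard way of inspecting the pool, e.g. (illustrative):

```
calicoctl get ippool default-ipv4-ippool -o yaml
```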
Do I need to delete all Pods? Currently, I can confirm that no Pods are using an IP address in the 192.168.x.x range.
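One way to verify that, for reference (the grep pattern is illustrative; adjust to the actual CIDR):

```
kubectl get pods --all-namespaces -o wide | grep '192\.168\.'
```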
@kkbruce in Calico v3.28, the operator has been updated to reconcile changes to IP pools. If your IP pool is defined within your Installation resource, the operator will revert any manual edits so the pool matches that definition. If you don't want to use the 192.168.0.0 IP pool, you should just be able to delete it (from the Installation) - unless you want it for other reasons like NAT?
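A minimal sketch of what that would look like, assuming the pools are managed through the Installation resource (the encapsulation/NAT values here are illustrative, not taken from the reporter's cluster):

```yaml
# kubectl edit installation default
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      # In v3.28 the operator reconciles IPPool resources against this
      # list, so keep only the pool you actually want.
      - cidr: 10.244.0.0/16
        encapsulation: VXLANCrossSubnet  # illustrative
        natOutgoing: Enabled
        nodeSelector: all()
```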
Because we needed to quickly restore the Calico CNI network to a functional state, we downgraded to 3.27. We currently have no spare environment in which to gather more information on 3.28. Looking at it from another angle, we had been following the migrate-pools document, and in the versions of that document up to 3.27 there was no mention of operator-related operations.
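For context, the pre-3.28 migrate-pools flow was expressed entirely in calicoctl terms, roughly along these lines (a sketch, not the verbatim doc; the file name is illustrative):

```
# Create the replacement pool
calicoctl create -f new-ipv4-ippool.yaml

# Disable the old pool so no new addresses are allocated from it
calicoctl patch ippool default-ipv4-ippool --patch '{"spec": {"disabled": true}}'

# After recreating workloads, delete the old pool
calicoctl delete ippool default-ipv4-ippool
```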
We followed the upgrade docs (operator-based install) to upgrade Calico from 3.27 to 3.28.
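For an operator-based install this amounts to swapping in the newer operator manifest, along the lines of (URL and flags depend on the docs version being followed; shown as a sketch):

```
kubectl replace -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
```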
Expected Behavior

We can update the pool back to `disabled: true`, or delete the old `default-ipv4-ippool` configuration.

Current Behavior
On 3.27 we set up a new IPPool according to the document and set `disabled: true`, and it was working fine. However, after upgrading to 3.28, we found that the original `disabled: true` had been reset to `false`, and we cannot update it back to `true` or delete the old `default-ipv4-ippool` configuration, as described under "Steps to Reproduce" below.

Possible Solution
Is it possible to provide a rollback file or documented steps, so that when an upgrade goes wrong the cluster can be quickly restored to a known-good version or state?
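For what it's worth, the downgrade we performed was essentially re-applying the previous release's operator manifest; a sketch, not an officially documented rollback:

```
kubectl replace -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
```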
Steps to Reproduce (for bugs)
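A condensed sketch of the reproduction, assuming a working operator-based 3.27 cluster (commands are illustrative):

```
# 1. On 3.27, per the migrate-pools doc: create a new pool and disable
#    the default one
calicoctl create -f new-ipv4-ippool.yaml
calicoctl patch ippool default-ipv4-ippool --patch '{"spec": {"disabled": true}}'

# 2. Upgrade the operator to 3.28
kubectl replace -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml

# 3. Observe that disabled has been reset to false, and that patching it
#    back to true (or deleting the pool) is reverted by the operator
calicoctl get ippool default-ipv4-ippool -o yaml | grep disabled
```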
Context

The original default 192.168.x.x segment conflicted with other internal network segments, breaking access from Pod containers to internal 192.168.x.x services. We therefore changed the default to 10.244.x.x and set `disabled: true` on `default-ipv4-ippool`, after which network access was back to normal.
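For completeness, the working 3.27 state looked roughly like this (the new pool's name is hypothetical; the other fields are illustrative):

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-ipv4-ippool   # hypothetical name
spec:
  cidr: 10.244.0.0/16
  natOutgoing: true
  disabled: false
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  natOutgoing: true
  disabled: true   # this is the field that 3.28 resets to false
```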
Your Environment

Calico version: 3.28.1
Orchestrator: Kubernetes
Operating System: Ubuntu 20.04
Link to your project (optional): None