✨ Extend Failure Domain API to support extra network configuration #1967
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
On the previous run: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api-provider-vsphere/1967/pull-cluster-api-provider-vsphere-apidiff-main/1674828318085484544

Probably as a last commit, we should remove the dead Network struct and override this alert from apidiff.
Co-authored-by: Christian Schlotter <[email protected]>
FYI, tested locally with the changes; it works fine. Test executed:
@@ -321,7 +321,6 @@ spec:
        description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
        type: string
      type: object
-     x-kubernetes-map-type: atomic
I think this should not have happened 🤔. May be related to #1928.
Yeah, @srm09 pointed me to some issue on controller-gen, IIRC; it keeps changing :)
@@ -84,6 +84,16 @@ func (r vsphereDeploymentZoneReconciler) reconcileTopology(ctx *context.VSphereD
		}
	}

+	for _, networkConfig := range topology.NetworkConfigurations {
+		if networkConfig.NetworkName == "" {
Should we validate that `NetworkName` is set instead of silently ignoring the array entry?
My actual question is: should we make NetworkName required? Per https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/pull/1967/files#diff-2f616e5f5f40526609f3723e82cbcf01b16b9f240ea00fcdf3fc06b6aa6d164cR548 we just merge if it is not empty. This would allow a case where a failure domain has different nameservers but the same portgroup name.
OTOH, there's IMO no point in validating a failure domain network config if the portgroup is not configured, so marking NetworkName as required would probably be better.
WDYT?
If the other code, which overwrites values from `NetworkConfig` onto the `vmNetworkDeviceSpec`, is okay with not replacing the NetworkName, we should drop the `if` here and keep the field non-mandatory.
If that is not a use case (and I assume it isn't, according to your comment above), then we should mark NetworkName as required, so we already ensure on the API side that it cannot be misconfigured (by not setting the NetworkName).
Happy to also hear thoughts from @yastij or others :-)
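For reference, here is a hedged sketch of the overwrite semantics under discussion: values from a failure domain's `NetworkConfiguration` replace the machine's device spec only when they are set. All type and field names are assumptions drawn from the diff fragments above, not the merged implementation.

```go
package topology

// NetworkDeviceSpec stands in for the machine's per-device network
// settings (hypothetical field set, for illustration only).
type NetworkDeviceSpec struct {
	NetworkName string
	DHCP4       bool
	Nameservers []string
}

// NetworkConfiguration carries a failure domain's overrides; an unset
// field means "keep the device's existing value".
type NetworkConfiguration struct {
	NetworkName string
	DHCP4       *bool
	Nameservers []string
}

// overrideDeviceSpec merges cfg into device, field by field. While
// NetworkName is optional, an empty value keeps the device's existing
// portgroup; if the field becomes required, the first guard can go away
// because the API server already rejects empty names.
func overrideDeviceSpec(device *NetworkDeviceSpec, cfg NetworkConfiguration) {
	if cfg.NetworkName != "" {
		device.NetworkName = cfg.NetworkName
	}
	if cfg.DHCP4 != nil {
		device.DHCP4 = *cfg.DHCP4
	}
	if len(cfg.Nameservers) > 0 {
		device.Nameservers = cfg.Nameservers
	}
}
```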
I think it is better to make networkName a key inside the network array, because if multiple devices are configured in the VSphereMachineTemplate and the user forgets to specify the network name, it might accidentally override the devices config on the VSphereMachineTemplate. A sketch of how that could look follows below.
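A minimal sketch of how both suggestions (NetworkName required, networkName as the list key) could be expressed with controller-gen markers; only `NetworkConfigurations` and `NetworkName` come from the diff above, everything else is an assumption rather than the final API:

```go
package v1beta1

// NetworkConfiguration is one entry of a failure domain's network list.
type NetworkConfiguration struct {
	// NetworkName is the vSphere portgroup this entry configures. Marking
	// it required rejects empty names at admission time instead of the
	// reconciler silently skipping the entry.
	// +kubebuilder:validation:Required
	// +kubebuilder:validation:MinLength=1
	NetworkName string `json:"networkName"`
}

// Topology holds a failure domain's placement and network settings.
type Topology struct {
	// Using networkName as the list map key means two entries cannot
	// target the same portgroup, and server-side apply merges entries
	// by name instead of replacing the whole list.
	// +optional
	// +listType=map
	// +listMapKey=networkName
	NetworkConfigurations []NetworkConfiguration `json:"networkConfigurations,omitempty"`
}
```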
Co-authored-by: Christian Schlotter <[email protected]>
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @lubronzhan
@rikatz, can you rebase this onto main? This will also fix the CRD generation.
I think it's better to call out in the doc that the network devices specified in FailureDomain will conduct a
@rikatz: The following tests failed, say `/retest` to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/close for now. @rikatz, feel free to just reopen if there is continued interest in making this happen.
@sbueringer: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What this PR does / why we need it:
This PR is the initial work to extend the failure domain API to support extra network configuration.
The idea is that extra network configuration can be established on failure domains, so different settings (like DHCP, or a different set of DNS servers) can be applied based on the selected region/zone.
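To make the intent concrete, here is a sketch of the user-facing shape this could take; `NetworkName` and `NetworkConfigurations` appear in the diffs above, while the DHCP and DNS fields are illustrative assumptions:

```go
package v1beta1

// NetworkConfiguration describes per-failure-domain overrides for one
// portgroup, so machines placed in different regions/zones can pick up
// different DHCP or DNS settings (hypothetical field set).
type NetworkConfiguration struct {
	// NetworkName is the vSphere portgroup the overrides apply to.
	NetworkName string `json:"networkName"`

	// DHCP4, when set, toggles IPv4 DHCP on the matched device.
	// +optional
	DHCP4 *bool `json:"dhcp4,omitempty"`

	// Nameservers, when non-empty, replaces the device's DNS servers.
	// +optional
	Nameservers []string `json:"nameservers,omitempty"`
}
```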
Which issue(s) this PR fixes (optional, in `fixes #<issue_number>` format; will close the issue(s) when the PR gets merged):
Fixes #1964
Special notes for your reviewer:
Some points to be discussed:
Release note: