
✨ Extend Failure Domain API to support extra network configuration #1967

Closed
rikatz wants to merge 10 commits

Conversation

@rikatz (Contributor) commented Jun 30, 2023

What this PR does / why we need it:
This PR is the initial work to extend the failure domain API to support extra network configuration.

The idea is that extra network configuration can be defined per failure domain, so that different settings (like DHCP, or a different set of DNS servers) can be applied based on the selected region/zone.
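
For illustration, a rough Go sketch of what such an API could look like. All names here (NetworkConfiguration, DHCP4/DHCP6, Nameservers) are assumptions based on this PR's discussion, not the final API:

```go
package v1beta1

// Sketch only: an assumed shape for per-failure-domain network overrides.
// Entries would be merged into the network device specs of machines
// placed in the zone, keyed by port group name.
type NetworkConfiguration struct {
	// NetworkName is the vSphere port group this configuration applies to.
	NetworkName string `json:"networkName"`

	// DHCP4/DHCP6 enable or disable DHCP per address family. Pointers
	// distinguish "unset" (keep the template's value) from false.
	// +optional
	DHCP4 *bool `json:"dhcp4,omitempty"`
	// +optional
	DHCP6 *bool `json:"dhcp6,omitempty"`

	// Nameservers is the list of DNS servers for devices on this network.
	// +optional
	Nameservers []string `json:"nameservers,omitempty"`
}
```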

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1964

Special notes for your reviewer:
Some points to be discussed:

  • Should we copy the NetworkSpec struct, or should we implement (as it is now) a separate API?
  • Can we drop/remove the dead code from the Network struct (inside deploymentzone_types) and ignore the apidiff?
  • Is there any e2e test that can be added here? I couldn't find any failure domain tests in the integration or e2e suites.

Release note:

Introduce extra network configuration in the failure domain API

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jun 30, 2023
@k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign vincepri for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Jun 30, 2023
@rikatz (Contributor, Author) commented Jun 30, 2023

On the previous run: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api-provider-vsphere/1967/pull-cluster-api-provider-vsphere-apidiff-main/1674828318085484544

As a last commit, we should probably remove the dead Network struct and override this alert from apidiff.

@rikatz rikatz changed the title WIP: ✨ Extend Failure Domain API to support extra network configuration ✨ Extend Failure Domain API to support extra network configuration Jun 30, 2023
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 30, 2023
apis/v1beta1/vspherefailuredomain_types.go — 3 resolved review threads (outdated)
@rikatz (Contributor, Author) commented Jul 3, 2023

FYI, I tested the changes locally and they work fine.

Test executed:

  • Created 3 failure domains/deployment zones
  • Each failure domain has a different port group and a different set of DNS servers, with DHCPv6 disabled on some and enabled on others
  • Verified that the right configuration was added to netplan on each node
  • Implemented controller tests to verify that the deployment zone doesn't become ready if the network port group doesn't exist

@@ -321,7 +321,6 @@ spec:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
- x-kubernetes-map-type: atomic
Member:

I think this should not have happened 🤔; it may be related to #1928.

Contributor (author):

Yeah, @srm09 pointed me to an issue on controller-gen, IIRC; it keeps changing :)

@@ -84,6 +84,16 @@ func (r vsphereDeploymentZoneReconciler) reconcileTopology(ctx *context.VSphereD
}
}

	for _, networkConfig := range topology.NetworkConfigurations {
		if networkConfig.NetworkName == "" {
Member:

Should we validate for NetworkName being set instead of silently ignoring the array entry?

Contributor (author):

My question actually is: should we make NetworkName required? Per https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/pull/1967/files#diff-2f616e5f5f40526609f3723e82cbcf01b16b9f240ea00fcdf3fc06b6aa6d164cR548 we only merge when it is not empty. That would allow a case where a failure domain has different nameservers but the same port group name.

OTOH, IMO there's no point in validating a failure domain network config if the port group is not configured, so marking NetworkName as required would probably be better.

WDYT?

Member:

If the other code, which overwrites values from the NetworkConfig onto the vmNetworkDeviceSpec, is OK with not replacing the NetworkName, we should drop the if here and keep the field non-mandatory.

If that is not a use case (and I assume it isn't, according to your comment above), then we should mark NetworkName as required, so we already ensure on the API side that it cannot be misconfigured (by not setting the NetworkName).

Happy to also hear thoughts from @yastij or others :-)

Contributor:

I think it is better to make networkName a key inside the network array: if multiple devices are configured in the VSphereMachineTemplate and the user forgets to specify the network name, the devices' config on the VSphereMachineTemplate might be modified accidentally.
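
A minimal sketch of what the suggestions above could look like at the API level, using standard controller-gen/kubebuilder markers; type and field names are the same assumed names as in the earlier sketch:

```go
package v1beta1

// Sketch: networkName is required and serves as the merge key of the
// array, so an entry can never silently fail to match a device.
type NetworkConfiguration struct {
	// +kubebuilder:validation:Required
	// +kubebuilder:validation:MinLength=1
	NetworkName string `json:"networkName"`

	// +optional
	Nameservers []string `json:"nameservers,omitempty"`
}

type Topology struct {
	// +optional
	// +listType=map
	// +listMapKey=networkName
	NetworkConfigurations []NetworkConfiguration `json:"networkConfigurations,omitempty"`
}
```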

Co-authored-by: Christian Schlotter <[email protected]>
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 11, 2023
@k8s-ci-robot (Contributor) commented

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@randomvariable (Member) commented

/assign @lubronzhan
for review on the networking side.

@rikatz, can you rebase this onto main? That will also fix the CRD generation.

@lubronzhan (Contributor) commented

I think it would be better to call out in the docs that the network devices specified in the FailureDomain are merged into, rather than overwriting, the devices specified inside the VSphereMachineTemplate.
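
A minimal sketch of that merge semantic (stand-in types and names, not this PR's actual code): fields explicitly set on the failure domain's configuration replace the corresponding device values, and everything left unset keeps the template's settings:

```go
package failuredomain

// Local stand-ins for the real API types; shapes are assumptions.
type NetworkDeviceSpec struct {
	NetworkName  string
	DHCP4, DHCP6 bool
	Nameservers  []string
}

type NetworkConfiguration struct {
	NetworkName  string
	DHCP4, DHCP6 *bool // nil means "keep the template's value"
	Nameservers  []string
}

// mergeNetworkConfig merges cfg into the device attached to the same
// port group: set fields win, unset fields fall through to the values
// from the VSphereMachineTemplate.
func mergeNetworkConfig(device *NetworkDeviceSpec, cfg NetworkConfiguration) {
	if cfg.NetworkName != device.NetworkName {
		return
	}
	if cfg.DHCP4 != nil {
		device.DHCP4 = *cfg.DHCP4
	}
	if cfg.DHCP6 != nil {
		device.DHCP6 = *cfg.DHCP6
	}
	if len(cfg.Nameservers) > 0 {
		device.Nameservers = cfg.Nameservers
	}
}
```

With pointer fields, an operator can, for example, disable DHCPv6 in one zone while leaving the template's setting untouched in all other zones.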

@k8s-ci-robot (Contributor) commented

@rikatz: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-cluster-api-provider-vsphere-verify-crds 4c1451a link false /test pull-cluster-api-provider-vsphere-verify-crds
pull-cluster-api-provider-vsphere-verify-gen 4c1451a link false /test pull-cluster-api-provider-vsphere-verify-gen
pull-cluster-api-provider-vsphere-test-main 4c1451a link true /test pull-cluster-api-provider-vsphere-test-main
pull-cluster-api-provider-vsphere-test-integration-main 4c1451a link true /test pull-cluster-api-provider-vsphere-test-integration-main
pull-cluster-api-provider-vsphere-e2e-main 4c1451a link true /test pull-cluster-api-provider-vsphere-e2e-main
pull-cluster-api-provider-vsphere-e2e-main-oldpreset 4c1451a link true /test pull-cluster-api-provider-vsphere-e2e-main-oldpreset

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


@k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2024
@sbueringer (Member) commented

/close

for now

@rikatz feel free to just reopen if there is continued interest in making this happen

@k8s-ci-robot (Contributor) commented

@sbueringer: Closed this PR.

In response to this:

/close

for now

@rikatz feel free to just reopen if there is continued interest in making this happen


Labels
  • cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
  • needs-rebase: Indicates a PR cannot be merged because it has merge conflicts with HEAD.
  • size/XL: Denotes a PR that changes 500-999 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Failure Domain network configuration should support similar fields of VMTemplate network config
7 participants