
Add support for multiple zones #1774

Open · wants to merge 1 commit into master

Conversation

donovanmuller
Contributor

@donovanmuller donovanmuller commented Nov 5, 2024

This PR adds the ability for K8gb to handle multiple edge and DNS zones.

We have a use case where we need to support multiple zones. There were some ideas on how to achieve this, namely running multiple K8gb deployments in different namespaces, each with its own edgeDNSZone/dnsZone, but that ultimately turned out to be quite difficult given the Helm chart, CRD and other issues. So we decided to add support for handling multiple edgeZone/zone pairs, taking inspiration from how the edgeDNSServers implementation was done.

The Helm Chart has been updated to support the following values as an example of supporting two zones:

...
k8gb:
...
  dnsZones:
    - edgeZone: "example.com"
      zone: "cloud.example.com"
      dnsZoneNegTTL: 300
    - edgeZone: "example.org"
      zone: "cloud.example.org"
      dnsZoneNegTTL: 300
...
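The list of edgeZone/zone pairs above maps naturally onto a slice of structs on the Go side. A minimal sketch, assuming a hypothetical `DNSZonePair` type (the PR's actual type names may differ):

```go
package main

import "fmt"

// DNSZonePair mirrors one entry under k8gb.dnsZones in the Helm values.
// Field names here are illustrative, not the PR's actual identifiers.
type DNSZonePair struct {
	EdgeZone      string // authoritative parent zone, e.g. "example.com"
	Zone          string // delegated zone managed by k8gb, e.g. "cloud.example.com"
	DNSZoneNegTTL int    // negative-response TTL in seconds
}

// Delegation describes how this pair is wired together.
func (z DNSZonePair) Delegation() string {
	return fmt.Sprintf("%s is delegated from %s (negTTL=%ds)", z.Zone, z.EdgeZone, z.DNSZoneNegTTL)
}

func main() {
	zones := []DNSZonePair{
		{EdgeZone: "example.com", Zone: "cloud.example.com", DNSZoneNegTTL: 300},
		{EdgeZone: "example.org", Zone: "cloud.example.org", DNSZoneNegTTL: 300},
	}
	for _, z := range zones {
		fmt.Println(z.Delegation())
	}
}
```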
HOW TO RUN CI

By default, all the checks will be run automatically. Furthermore, when changing website-related stuff, the preview will be generated by the netlify bot.

Heavy tests

Add the heavy-tests label on this PR if you want full-blown tests that include more than 2-cluster scenarios.

Debug tests

If the test suite is failing for you, you may want to try triggering Re-run all jobs (top right) with debug logging enabled. It will also make the print debug action more verbose.

Signed-off-by: Donovan Muller <[email protected]>
Collaborator

@abaguas abaguas left a comment


Thank you for the PR, this is indeed a nice feature.

Is the implementation complete?
External DNS would need an updated domain-filter:

- --domain-filter={{ .Values.k8gb.edgeDNSZone }}

Btw, shameless plug, if the e2e framework is slowing down your development then chainsaw would be a great alternative: #1758
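Following up on the domain-filter point above: with multiple pairs, the external-dns container would presumably need one `--domain-filter` flag per configured edge zone. A hedged sketch of how the Deployment template fragment might render this, assuming the `dnsZones` values layout proposed in this PR:

```yaml
# Hypothetical fragment of the external-dns container args in the chart;
# .Values.k8gb.dnsZones is assumed to follow the structure from this PR.
args:
  - --source=crd
{{- range .Values.k8gb.dnsZones }}
  - --domain-filter={{ .edgeZone }}
{{- end }}
```

external-dns accepts `--domain-filter` multiple times, so iterating over the pairs keeps the template in line with the existing single-zone `--domain-filter={{ .Values.k8gb.edgeDNSZone }}` behavior.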

if zone == (utils.DNSZone{}) {
Collaborator


Is this logic to trigger a requeue if no server matched necessary?
Let's suppose the configured dnsZone is "example.com". There is a GSLB resource with a server that has the hostname "cloud.example.com". Then, the users swaps the hostname to "cloud.other.com". With this implementation the GSLB will never stop advertising "cloud.example.com".
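The scenario being described can be illustrated with a simplified suffix-match lookup; this is a sketch of the idea under discussion, not the PR's actual code (identifiers are invented):

```go
package main

import (
	"fmt"
	"strings"
)

// matchZone returns the first configured zone that host falls under,
// or "" if no zone matches. A no-match result is what would trigger the
// requeue the comment above questions.
func matchZone(host string, zones []string) string {
	for _, zone := range zones {
		if host == zone || strings.HasSuffix(host, "."+zone) {
			return zone
		}
	}
	return ""
}

func main() {
	zones := []string{"cloud.example.com", "cloud.example.org"}
	fmt.Println(matchZone("app.cloud.example.com", zones)) // prints "cloud.example.com"
	fmt.Println(matchZone("cloud.other.com", zones))       // prints "" (no match)
}
```

If the hostname is later swapped to `cloud.other.com`, the lookup returns empty and the reconcile requeues, so the stale `cloud.example.com` record is never cleaned up.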

Contributor Author


I will test this out 👍

@@ -103,42 +104,6 @@ func (r *GslbReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.
Str("namespace", gslb.Namespace).
Interface("strategy", gslb.Spec.Strategy).
Msg("Resolved strategy")
// == Finalizer business ==
Collaborator


Why did you move the "Finalizer business" down?

As far as I understand, we try to keep it as high up as possible so the reconciliation terminates immediately if the GSLB is going to be deleted.

Contributor Author


Agreed, I didn't want to, but the finalizer needs to pass the zone, which is only available after the zone has been matched.

Collaborator


I see. Actually, if we store the zone in the status of the GSLB then the finalizer can use the status of the last reconciliation, which is already present on the GSLB resource.
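A hedged sketch of what storing the zone on the status could look like; the field name and shape are assumptions, not the PR's actual API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GslbStatus sketches (hypothetically) extending the GSLB status with the
// zone resolved during the last reconciliation, so the finalizer can read
// it from the resource instead of re-matching.
type GslbStatus struct {
	Zone string `json:"zone,omitempty"` // zone matched on the last reconcile
}

// marshalStatus renders the status as it would appear on the resource.
func marshalStatus(s GslbStatus) string {
	b, _ := json.Marshal(s)
	return string(b)
}

func main() {
	fmt.Println(marshalStatus(GslbStatus{Zone: "cloud.example.com"}))
	// prints {"zone":"cloud.example.com"}
}
```

With the zone persisted this way, the finalizer can run before zone matching again, which would allow moving the finalizer block back up.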

@@ -190,7 +222,7 @@ func (r *GslbReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.
Msg("Resolved LoadBalancer and Server configuration referenced by Ingress")

// == external-dns dnsendpoints CRs ==
-	dnsEndpoint, err := r.gslbDNSEndpoint(gslb)
+	dnsEndpoint, err := r.gslbDNSEndpoint(gslb, zone)
Collaborator


A GSLB may have servers with different zones.
What about storing the zone in the status of the GSLB, perhaps by extending the Server data structure?

Contributor Author


Good idea, will add.

@donovanmuller
Contributor Author

Is the implementation complete? External DNS would need an updated domain-filter:

- --domain-filter={{ .Values.k8gb.edgeDNSZone }}

Good catch! I wanted to raise the PR to get eyes on it, in case there were discussions around the overall design. I haven't had time to work through the integration tests.

I am following the Chainsaw work closely 👍

@abaguas
Collaborator

abaguas commented Nov 7, 2024

One more idea. Do we need to check in the K8GB controller which zone a domain belongs to?
We are already configuring CoreDNS, which will ignore DNSEndpoint resources for zones it cannot control.
The only thing we really need to do is to configure zone delegation for all zones.

This would dramatically simplify the implementation of this PR.
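The simplification relies on CoreDNS only answering for zones it is configured to serve. An illustrative Corefile shape, assuming one server block per delegated zone (zone names taken from the Helm example above; the plugin set inside each block is left to whatever the chart actually configures):

```
# Illustrative sketch only: CoreDNS serves the listed zones and ignores
# DNSEndpoint records for any zone outside them.
cloud.example.com:53 {
    errors
    log
    # zone-serving plugin configured by the chart goes here
}
cloud.example.org:53 {
    errors
    log
    # zone-serving plugin configured by the chart goes here
}
```

Under this approach the controller would only need to set up edge delegation for every configured zone, leaving per-record zone filtering to CoreDNS.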
