Prevent flapping of external DNS configuration #1766

Closed
abaguas wants to merge 2 commits

Conversation


@abaguas abaguas commented Oct 29, 2024

A flapping DNSEndpoint affects external DNS configuration, which introduces unwanted behavior during e2e tests.

How the controller uses externalDNS to configure zone delegation

K8GB uses a DNSEndpoint to configure zone delegation on the upstream DNS servers. This DNSEndpoint is picked up by ExternalDNS and looks as follows:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  annotations:
    k8gb.absa.oss/dnstype: extdns
  creationTimestamp: "2024-10-27T11:03:38Z"
  generation: 1
  name: k8gb-ns-extdns
  namespace: k8gb
  resourceVersion: "1608"
  uid: 2ff4476f-0efd-4de7-96e7-605ca2a8fc78
spec:
  endpoints:
  - dnsName: cloud.example.com
    recordTTL: 5
    recordType: NS
    targets:
    - gslb-ns-eu-cloud.example.com
    - gslb-ns-us-cloud.example.com
  - dnsName: gslb-ns-eu-cloud.example.com
    recordTTL: 5
    recordType: A
    targets:
    - 172.19.0.6
    - 172.19.0.7

This resource is independent of the GSLB resources, but it is still updated on every reconciliation loop, since that is the only chance the controller has to update resources. In the end-to-end tests we do not know the IP address on which coreDNS is exposed, so we abuse the GSLB reconciliation to fetch the IP addresses of the nodes in the cluster (see the update in controllers/providers/dns/external.go). However, this has negative consequences if the values differ between GSLB resources, which happens quite often.
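
For illustration, here is a minimal sketch of that workaround, assuming controller-runtime; the helper name is illustrative, not k8gb's exact code:

package dns

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// nodeTargets collects the internal IPs of all nodes; in the e2e setup these
// are assumed to be the addresses on which coreDNS is reachable.
func nodeTargets(ctx context.Context, c client.Client) ([]string, error) {
	var nodes corev1.NodeList
	if err := c.List(ctx, &nodes); err != nil {
		return nil, err
	}
	var targets []string
	for _, n := range nodes.Items {
		for _, addr := range n.Status.Addresses {
			if addr.Type == corev1.NodeInternalIP {
				targets = append(targets, addr.Address)
			}
		}
	}
	return targets, nil
}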

Flapping affects e2e tests

While trying out chainsaw I tried to increase the parallelism of the tests, since I would like to have all e2e tests running simultaneously. This would prevent the testing time from growing linearly with the number of strategies or ingress integrations.

Frequent DNSEndpoint updates

Unfortunately, having all GSLB resources trying to modify the DNSEndpoint with different values resulted in flaky tests. This DNSEndpoint is important for the tests since it contains the records necessary for cross-cluster communication; if it is not available, K8GB instances on different clusters cannot discover their peers, which leads to the following error:

2024-10-27T10:27:36Z WRN github.com/k8gb-io/k8gb/controllers/providers/assistant/gslb.go:255 > can't resolve FQDN using nameservers error="exchange error: all dns servers were tried and none of them were able to resolve, err: dial udp: lookup gslb-ns-us-cloud.example.com on 10.43.0.10:53: no such host" fqdn=localtargets-roundrobin-istio.cloud.example.com. nameservers=[{"Host":"gslb-ns-us-cloud.example.com","Port":1053},{"Host":"gslb-ns-us-cloud.example.com","Port":1053},{"Host":"gslb-ns-us-cloud.example.com","Port":53}]
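
To make the failure mode concrete, here is a minimal sketch of this kind of lookup using github.com/miekg/dns; names and structure are illustrative, not k8gb's exact code. Each delegated nameserver (given as host:port) is tried in turn, mirroring the "all dns servers were tried" error above:

package dns

import (
	"fmt"

	"github.com/miekg/dns"
)

// resolveWithNameservers queries each nameserver in turn for A records of
// fqdn and returns the first successful answer.
func resolveWithNameservers(fqdn string, nameservers []string) ([]string, error) {
	c := new(dns.Client)
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn(fqdn), dns.TypeA)
	for _, ns := range nameservers {
		r, _, err := c.Exchange(m, ns)
		if err != nil {
			continue // this server failed; try the next one
		}
		var targets []string
		for _, rr := range r.Answer {
			if a, ok := rr.(*dns.A); ok {
				targets = append(targets, a.A.String())
			}
		}
		return targets, nil
	}
	return nil, fmt.Errorf("all dns servers were tried and none of them were able to resolve")
}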

Example

  • A GSLB using Kubernetes Ingress is created. The cluster has not yet assigned an IP address -> a DNSEndpoint is created with empty targets, so discovery of other clusters is not yet possible
  • The same GSLB now has an IP address assigned -> the DNSEndpoint is updated with the target, so discovery is now possible
  • A new GSLB using Kubernetes Ingress is created. The cluster has not yet assigned an IP address -> the DNSEndpoint is updated; since there are no targets, discovery of other clusters is no longer possible

If the timing is unfortunate enough, cluster discovery may be unavailable every time a particular GSLB resource is reconciled, resulting in the advertisement of incorrect targets.

Solution

This PR proposes to fix the issue by not updating the DNSEndpoint if the list of targets derived from the GSLB resource is empty. This should only be relevant for testing, since all production use cases should expose coreDNS via a load balancer service.
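
A minimal sketch of the proposed guard, assuming the external-dns endpoint types; the provider struct and method names are illustrative, not the exact code in this PR:

package dns

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
	externaldns "sigs.k8s.io/external-dns/endpoint"
)

type externalDNSProvider struct {
	client client.Client
}

// saveDNSEndpoint leaves the existing extdns DNSEndpoint untouched when the
// reconciled GSLB yields no targets, so a not-yet-ready GSLB cannot wipe the
// records that other GSLBs (and peer clusters) depend on.
func (p *externalDNSProvider) saveDNSEndpoint(ctx context.Context, ep *externaldns.DNSEndpoint) error {
	total := 0
	for _, e := range ep.Spec.Endpoints {
		total += len(e.Targets)
	}
	if total == 0 {
		// flapping guard: skip the update entirely and keep the record set
		return nil
	}
	return p.client.Update(ctx, ep)
}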

DNSEndpoint deletion

Additionally, the deletion of a GSLB resource was leading to the deletion of the ExternalDNS resource, even if there were additional GSLB resources still in use. This also disrupted cross-cluster communication until the next GSLB resource was reconciled. This problem is fixed in the finalizer, which now deletes the ExternalDNS resource only when the last GSLB resource is deleted.
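
A sketch of the finalizer change, assuming k8gb's Gslb API types; the function shape is illustrative, and the endpoint name/namespace are taken from the example above:

package dns

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	externaldns "sigs.k8s.io/external-dns/endpoint"

	k8gbv1beta1 "github.com/k8gb-io/k8gb/api/v1beta1"
)

// finalizeExtDNSEndpoint deletes the shared zone-delegation DNSEndpoint only
// when the Gslb being finalized is the last one left in the cluster.
func finalizeExtDNSEndpoint(ctx context.Context, c client.Client) error {
	var gslbs k8gbv1beta1.GslbList
	if err := c.List(ctx, &gslbs); err != nil {
		return err
	}
	if len(gslbs.Items) > 1 {
		// other Gslb resources still rely on the delegation records
		return nil
	}
	ep := &externaldns.DNSEndpoint{
		ObjectMeta: metav1.ObjectMeta{Name: "k8gb-ns-extdns", Namespace: "k8gb"},
	}
	return client.IgnoreNotFound(c.Delete(ctx, ep))
}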

TTL flapping

Lastly, even though it didn't affect the e2e tests, I noticed that the TTL also flaps, since different GSLB resources may have different TTLs. To stabilize it we can add a new configuration option that sets a single TTL for the NS and glue records.
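
A sketch of how such an option could be read, assuming it is wired through the NS_RECORD_TTL environment variable added in the chart diff later in this conversation; the helper name and default are illustrative:

package dns

import (
	"os"
	"strconv"
)

// nsRecordTTL reads the TTL once from the environment so that every
// reconciliation writes the same value, regardless of per-GSLB TTLs.
func nsRecordTTL() int64 {
	const defaultTTL = 30 // assumed fallback; the chart value is authoritative
	v, ok := os.LookupEnv("NS_RECORD_TTL")
	if !ok {
		return defaultTTL
	}
	ttl, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		return defaultTTL
	}
	return ttl
}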

@abaguas abaguas changed the title Do not flap external DNS configuration Prevent flapping of external DNS configuration Oct 30, 2024

netlify bot commented Oct 30, 2024

Deploy Preview for k8gb-preview ready!

🔨 Latest commit: 8741476
🔍 Latest deploy log: https://app.netlify.com/sites/k8gb-preview/deploys/67221acaab52c60008af097b
😎 Deploy Preview: https://deploy-preview-1766--k8gb-preview.netlify.app

- value: {{ quote .Values.k8gb.reconcileRequeueSeconds}}
+ value: {{ quote .Values.k8gb.reconcileRequeueSeconds }}
+ - name: NS_RECORD_TTL
+   value: {{ quote .Values.k8gb.nsRecordTTL }}

Why quote? It is an int.

@abaguas abaguas marked this pull request as draft October 30, 2024 11:45

abaguas commented Oct 30, 2024

Superseded by #1767

@abaguas abaguas closed this Oct 30, 2024