Support daemonset to use nodelocal dns #173
Conversation
Here's an example Helm values file if you want to use it. Notable settings:
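For reference, a minimal values sketch for the DaemonSet mode might look like the following. The key names and the link-local VIP here are illustrative assumptions based on this PR's description, not the chart's confirmed schema; check the chart's `values.yaml` before use:

```yaml
# Hypothetical values.yaml sketch for running CoreDNS as a DaemonSet.
# Key names are assumptions drawn from this PR, not the final schema.
deployment:
  enabled: false        # disable the default Deployment mode
daemonset:
  enabled: true         # run one CoreDNS pod per node
service:
  clusterIP: 169.254.20.10   # link-local VIP conventionally used for node-local DNS
```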
Hi, the idea of using CoreDNS as a node-local DNS was thoroughly discussed in:
And rejected due to various limitations; see the threads in the linked issues. I don't see any new reason to add support for it now. The right solution is to use a proper node-local DNS cache.
First, I would like to separate the points of contention: whether CoreDNS should be deployed as a DaemonSet, and whether this project should support that deployment method. Kubernetes administrators want a local DNS service to act as a DNS cache, which corresponds exactly to the DaemonSet deployment mode. I believe CoreDNS is capable of providing local DNS service; see the source code of node-cache, which is essentially a streamlined version of CoreDNS with a reduced plugin set. I also summarized the differences between node-cache and CoreDNS in kubernetes/dns#594 (comment). Even if administrators prefer node-cache, or any DNS server based on the original CoreDNS, they can still use the CoreDNS Helm chart to manage the related manifests, since the configuration files are identical.
Lastly, I believe this will not add much burden for the Helm chart maintainers, since it just deploys CoreDNS to every node and follows the same configuration style as the existing chart.
Why is this pull request needed and what does it do?
This PR introduces a DaemonSet deployment mode, which allows CoreDNS to be deployed on every node. With subsequent configuration, all pods can use the DNS server on their local node instead of the cluster's internal DNS Service, thereby minimizing DNS latency.
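The "subsequent configuration" step can be done per pod via the standard Kubernetes `dnsConfig` field (or cluster-wide via the kubelet's `clusterDNS` setting). A sketch, assuming the node-local CoreDNS listens on the conventional link-local VIP `169.254.20.10` (the IP and search domains are illustrative, not mandated by this chart):

```yaml
# Example pod pointing its DNS at a node-local address instead of the
# cluster DNS Service. The nameserver IP is an illustrative assumption.
apiVersion: v1
kind: Pod
metadata:
  name: dns-client
spec:
  dnsPolicy: "None"            # ignore the kubelet-provided resolv.conf
  dnsConfig:
    nameservers:
      - 169.254.20.10          # node-local CoreDNS VIP
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    options:
      - name: ndots
        value: "5"
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
```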
To make it simple to redirect traffic to the local pod, I have embedded a Cilium local redirect policy in the Helm chart. Through this Custom Resource (CR), traffic to a virtual IP is automatically forwarded to the CoreDNS instance on the current node. If a cluster does not use Cilium, this feature can be skipped, and users can set up the redirection themselves.
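The CR in question is a `CiliumLocalRedirectPolicy`. A sketch of what such a policy looks like, assuming the same illustrative VIP and a `k8s-app: coredns` pod label (the exact frontend IP and selector labels in the chart may differ):

```yaml
# Hypothetical CiliumLocalRedirectPolicy redirecting DNS traffic sent to
# a virtual IP to the CoreDNS pod running on the same node. The VIP and
# pod labels are assumptions for illustration.
apiVersion: cilium.io/v2
kind: CiliumLocalRedirectPolicy
metadata:
  name: nodelocal-dns
  namespace: kube-system
spec:
  redirectFrontend:
    addressMatcher:
      ip: "169.254.20.10"      # virtual IP pods send DNS queries to
      toPorts:
        - port: "53"
          protocol: UDP
        - port: "53"
          protocol: TCP
  redirectBackend:
    localEndpointSelector:
      matchLabels:
        k8s-app: coredns       # selects the node-local CoreDNS pod
    toPorts:
      - port: "53"
        name: dns
        protocol: UDP
      - port: "53"
        name: dns-tcp
        protocol: TCP
```

Cilium resolves the backend to the matching pod on the node where the traffic originates, so no cluster-wide load balancing is involved.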
Additionally, in previous implementations, many resources were tied to `.Values.deployment.enabled`. I have removed these bindings to make those resources more independent and compatible with the DaemonSet. However, this change could potentially be breaking; if you need me to add related documentation, please let me know.

Which issues (if any) are related?
Fixes #86
Provides a solution for kubernetes/dns#594
Checklist:
Changes are automatically published when merged to `main`. They are not published on branches.

Note on DCO
If the DCO action in the integration test fails, one or more of your commits are not signed off. Please click on the Details link next to the DCO action for instructions on how to resolve this.