ExternalDNS is a Kubernetes add-on for automatically managing Domain Name System (DNS) records for Kubernetes services by using different DNS providers. By default, Kubernetes manages DNS records internally, but ExternalDNS takes this functionality a step further by delegating the management of DNS records to an external DNS provider such as IONOS. The IONOS webhook allows you to manage your IONOS domains inside your Kubernetes cluster with ExternalDNS.
To use ExternalDNS with IONOS, you need the API key or token of the IONOS account that manages your domains. For detailed technical instructions on how the IONOS webhook is deployed using the ExternalDNS Helm chart, see the deployment instructions below.
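Before creating the Kubernetes secret in the deployment steps, you can read the key into an environment variable so that it does not end up in your shell history. This is only a minimal sketch; the variable name IONOS_API_KEY used here is arbitrary and not required by the chart:
# read the API key without echoing it to the terminal
read -s -p "IONOS API key: " IONOS_API_KEY && export IONOS_API_KEY
# the secret from the deployment steps below can then be created without a literal key on the command line:
# kubectl create secret generic ionos-credentials --from-literal=api-key="$IONOS_API_KEY"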
The IONOS webhook is provided as a regular Open Container Initiative (OCI) image released in the GitHub Container Registry. It can be deployed in any way Kubernetes supports. The following example shows the deployment as a sidecar container in the ExternalDNS pod using the ExternalDNS Helm chart.
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
kubectl create secret generic ionos-credentials --from-literal=api-key='<EXAMPLE_PLEASE_REPLACE>'
# create the helm values file
cat <<EOF > external-dns-ionos-values.yaml
image:
  tag: v0.15.0
# -- ExternalDNS Log level.
logLevel: debug # reduce in production
# -- if true, _ExternalDNS_ will run in a namespaced scope (Role and Rolebinding will be namespaced too).
namespaced: false
# -- _Kubernetes_ resources to monitor for DNS entries.
sources:
  - ingress
  - service
  - crd
extraArgs:
  ## must override the webhook's default port 8888 with port 8080 because port 8080 is hard-coded in the helm chart
  - --webhook-provider-url=http://localhost:8080
provider:
  name: webhook
  webhook:
    image:
      repository: ghcr.io/ionos-cloud/external-dns-ionos-webhook
      tag: v0.6.1
    env:
      - name: LOG_LEVEL
        value: debug # reduce in production
      - name: IONOS_API_KEY
        valueFrom:
          secretKeyRef:
            name: ionos-credentials
            key: api-key
      - name: SERVER_HOST
        value: "0.0.0.0"
      - name: SERVER_PORT
        value: "8080"
      - name: IONOS_DEBUG
        value: "false" # set this to true if you want to see details of the HTTP requests
      - name: DRY_RUN
        value: "true" # set to false to apply changes
    livenessProbe:
      httpGet:
        path: /health
    readinessProbe:
      httpGet:
        path: /health
EOF
# install external-dns with helm
helm upgrade external-dns-ionos external-dns/external-dns --version 1.15.0 -f external-dns-ionos-values.yaml --install
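After the installation, you can check that the webhook runs as a sidecar next to ExternalDNS. The label selector and the container name "webhook" below follow the chart defaults for the release name used above and may differ in your setup:
# check that the pod is running with both containers (external-dns and the webhook sidecar)
kubectl get pods -l app.kubernetes.io/instance=external-dns-ionos
# inspect the webhook sidecar logs
kubectl logs -l app.kubernetes.io/instance=external-dns-ionos -c webhook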
Currently, the RBAC resources created for a namespaced deployment are not sufficient for ExternalDNS to work. To get ExternalDNS running in namespaced mode, you need to create the necessary ClusterRole and ClusterRoleBinding resources manually:
# don't forget to adjust the namespace for the service account in the rbac-for-namespaced.yaml file, if you are using a different namespace than 'default'
kubectl apply -f deployments/rbac-for-namespaced.yaml
You can then skip the RBAC resources in the Helm chart configuration, so in the Helm values file you set:
namespaced: true
rbac:
  create: false
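Purely as an illustration of what such cluster-scoped permissions look like, the following is a rough sketch of the read-only ClusterRole and ClusterRoleBinding that ExternalDNS typically needs. The file deployments/rbac-for-namespaced.yaml in the repository is authoritative; the object names, service account name, and namespace below are assumptions you have to adapt:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns-namespaced # illustrative name (assumption)
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-namespaced # illustrative name (assumption)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns-namespaced
subjects:
  - kind: ServiceAccount
    name: external-dns-ionos # service account of the Helm release above; adjust to your release
    namespace: default # adjust if you deploy into another namespace
EOF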
See the webhook documentation for all available configuration options of the IONOS webhook.
All official webhooks provided by IONOS are signed using Cosign. The Cosign public key can be found in the cosign.pub file.
Note: Due to the early development stage of the webhook, the image signature is not yet recorded in sigstore's transparency log, hence the --insecure-ignore-tlog flag below.
export RELEASE_VERSION=latest
cosign verify --insecure-ignore-tlog --key cosign.pub ghcr.io/ionos-cloud/external-dns-ionos-webhook:$RELEASE_VERSION
The basic development tasks are provided by make. Run make help to see the available targets.
The webhook can be deployed locally with a kind cluster. As a prerequisite, you need Docker, kind, kubectl, and Helm installed. Then add the required Helm repositories:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add mockserver https://www.mock-server.com
helm repo update
# setup the kind cluster and deploy external-dns with ionos webhook and a dns mockserver
./scripts/deploy_on_kind.sh
# check if the webhook is running
kubectl get pods -l app.kubernetes.io/name=external-dns -o wide
# trigger a DNS change, e.g. by annotating the ingress controller service
kubectl -n ingress-nginx annotate service ingress-nginx-controller "external-dns.alpha.kubernetes.io/internal-hostname=nginx.internal.example.org."
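# optionally follow the webhook sidecar logs to watch the record change arrive
# (label selector and container name "webhook" are chart defaults and may differ in your setup)
kubectl logs -l app.kubernetes.io/name=external-dns -c webhook -f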
# cleanup
./scripts/deploy_on_kind.sh clean
The acceptance tests are run against a kind cluster with ExternalDNS and the webhook deployed. The DNS mock server is used to verify the DNS changes. The following diagram shows the test setup:
flowchart LR
    subgraph local-machine
        T[<h3>acceptance-test with hurl</h3><ul><li>create HTTP requests</li><li>check HTTP responses</li></ul>] -- 1. create expectations --> M
        T -- 2. create annotations/ingress --> K
        T -- 3. verify expectations --> M
        subgraph k8s kind
            E("external-dns") -. checks .-> K[k8s resources]
            E -. apply record changes .-> M[dns-mockserver]
        end
    end
For running the acceptance tests locally, you need to install hurl. Then run:
scripts/acceptance-tests.sh
To check the test run execution, see the Hurl files. The test reports can be found in the ./build/reports/hurl directory.