Environmental Info:
K3s Version: v1.31.0+k3s1
Node(s) CPU architecture, OS, and Version: Linux charlie-d0 6.1.75-vendor-rk35xx #1 SMP Wed Aug 21 11:45:59 UTC 2024 aarch64 GNU/Linux
Cluster Configuration: 1 server, 9 agents
Describe the bug:
When I add the following lines to /etc/rancher/k3s/config.yaml on a node, the k3s-agent service exits with status code 1 right after starting:
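The config lines themselves are not reproduced here; judging from the --node-labels flag in the kubelet invocation logged below, they were presumably along these lines:
node-label:
  - "node-role.kubernetes.io/gha-runner=true"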
The output of journalctl -xeu k3s-agent reveals the following (shortened):
Sep 19 18:12:25 charlie-d0 k3s[16470]: time="2024-09-19T18:12:25Z" level=info msg="Running kubelet [...] --node-labels=node-role.kubernetes.io/gha-runner=true [...]"
Sep 19 18:12:26 charlie-d0 k3s[16470]: Error: failed to validate kubelet flags: unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/gha-runner]
Sep 19 18:12:26 charlie-d0 k3s[16470]: --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/os, node.kubernetes.io/instance-type, topology.kubernetes.io/region, topology.kubernetes.io/zone)
Sep 19 18:12:26 charlie-d0 k3s[16470]: time="2024-09-19T18:12:26Z" level=error msg="kubelet exited: failed to validate kubelet flags: unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/gha-runner]\n--node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/os, node.kubernetes.io/instance-type, topology.kubernetes.io/region, topology.kubernetes.io/zone)"
Sep 19 18:12:26 charlie-d0 systemd[1]: k3s-agent.service: Main process exited, code=exited, status=1/FAILURE
Steps To Reproduce:
See above.
Expected behavior:
The k3s-agent service should not exit after starting.
Actual behavior:
It exits with status code 1 after starting.
Additional context / logs:
n/a
All current versions of Kubernetes restrict nodes from registering with most labels under the kubernetes.io and k8s.io prefixes, specifically including the kubernetes.io/role label. If you attempt to start a node with a disallowed label, K3s will fail to start. As stated by the Kubernetes authors:
Nodes are not permitted to assert their own role labels. Node roles are typically used to identify privileged or control plane types of nodes, and allowing nodes to label themselves into that pool allows a compromised node to trivially attract workloads (like control plane daemonsets) that confer access to higher privilege credentials.
If you want to change node labels and taints after node registration, or add reserved labels, you should use kubectl. Refer to the official Kubernetes documentation for details on how to add taints and node labels.
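For example, a minimal sketch assuming cluster-admin access and the node name charlie-d0 taken from the log above (the taint is purely illustrative and not part of the original report):
# run from a machine with a cluster-admin kubeconfig
kubectl label node charlie-d0 node-role.kubernetes.io/gha-runner=true
# optional, illustrative only: keep other workloads off the runner node
kubectl taint node charlie-d0 gha-runner=true:NoSchedule
Alternatively, labels outside the restricted kubernetes.io and k8s.io namespaces (for example a custom prefix such as example.com/gha-runner=true) can still be set at registration time via node-label in config.yaml.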