10.43.0.1 inaccessible from workers pods #11517
Unanswered · DamianoSamperi asked this question in Q&A
Why are you running such an old Ubuntu + kernel on the workers? What does your server config look like? Have you done anything weird with the bind or advertise addresses?
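For reference, the settings being asked about normally live in `/etc/rancher/k3s/config.yaml` on the server (or are passed as `k3s server` flags). A minimal sketch, with purely illustrative addresses:

```yaml
# /etc/rancher/k3s/config.yaml on the server -- values below are hypothetical.
# bind-address controls where the apiserver listens; advertise-address is the IP
# published in the `kubernetes` service endpoints, i.e. what 10.43.0.1 forwards to.
bind-address: 0.0.0.0
advertise-address: 192.168.1.10   # must be reachable from the worker nodes
node-ip: 192.168.1.10
```

Running `kubectl get endpoints kubernetes` shows which address the 10.43.0.1 service IP currently forwards to, which is a quick way to spot a misconfigured advertise address.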
---
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Describe the bug:
Pods running on the worker nodes are unable to establish any communication with the K3s API server. The failure occurs both when resolving the DNS name of the Kubernetes API (kubernetes.default.svc.cluster.local) and when connecting directly to the API server's service IP (10.43.0.1). The problem appears to be specific to the worker nodes and prevents their pods from interacting with the K3s API server.
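The failing checks can be reproduced from a throwaway pod pinned to a worker node. A minimal sketch, assuming a worker named `worker-1` and a busybox image (both hypothetical):

```sh
# Start a test pod on a worker node (node name is an assumption)
kubectl run nettest --image=busybox:1.36 --restart=Never \
  --overrides='{"spec":{"nodeName":"worker-1"}}' -- sleep 3600

# 1. DNS resolution of the API server's service name (fails in the reported scenario)
kubectl exec nettest -- nslookup kubernetes.default.svc.cluster.local

# 2. Direct TCP connection to the service VIP (times out in the reported scenario)
kubectl exec nettest -- timeout 5 nc 10.43.0.1 443

# 3. For comparison: ping a pod scheduled on the master node, e.g. CoreDNS
#    (substitute the real pod IP from `kubectl get pods -n kube-system -o wide`)
kubectl exec nettest -- ping -c 3 <coredns-pod-ip>
```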
Steps To Reproduce:
- Configure the nodes with the `runtime:nvidia` setting (one possible manifest is sketched after this list).
- Deploy pods on the worker nodes.
- Attempt to communicate with the K3s API server from within the pods.
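As referenced in step 1, one way to select the NVIDIA runtime per pod is through a RuntimeClass; K3s registers an `nvidia` containerd handler when the NVIDIA container runtime is installed on the node. A hedged sketch, which may differ from the exact configuration used here (node and pod names are illustrative):

```yaml
# RuntimeClass that maps to the "nvidia" containerd runtime handler
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
---
# Test pod on a worker node that opts into the NVIDIA runtime
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  runtimeClassName: nvidia
  nodeSelector:
    kubernetes.io/hostname: worker-1   # assumption: a worker node named worker-1
  containers:
  - name: test
    image: busybox:1.36
    command: ["sleep", "3600"]
```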
Expected behavior:
Pods should be able to successfully reach the K3s API server.
Actual behavior:
The pods are unable to resolve the DNS name of the API server and fail to establish a connection.
Attempts to connect directly to the K3s API server's IP address (10.43.0.1) result in a connection timeout:
Even basic network tests, such as ping, fail to establish a connection to the API server's IP:
However, the pod is able to successfully communicate with other pods scheduled on the master node, such as the CoreDNS pod. For example, the following ping test to CoreDNS is successful:
Additional context / logs:
- The issue is isolated to the worker nodes: pods on these nodes cannot reach the K3s API server, but they can communicate with other pods (such as CoreDNS) running on the master node.
- No Network Policies are in place that would restrict communication between the pods and the API server.
- Here is some additional diagnostic information from the worker node and the pod (the commands used to collect it are sketched after this list):
iptables on the worker node:
Services:
Pods in the kube-system namespace:
/etc/resolv.conf inside the test pod:
Network Interfaces on the worker node:
Routing Table on the worker node:
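For completeness, the commands that would typically produce the diagnostics listed above (a sketch; interface names assume K3s's default flannel VXLAN backend):

```sh
# On the worker node
sudo iptables-save | grep -E 'KUBE-SVC|KUBE-SEP|10\.43\.0\.1'   # service NAT rules for the API VIP
ip addr show                                                    # expect flannel.1 and cni0 interfaces
ip route                                                        # expect 10.42.x.0/24 routes via flannel.1

# From a machine with kubectl access
kubectl get svc -A                       # services, including kubernetes (10.43.0.1)
kubectl get pods -n kube-system -o wide
kubectl get networkpolicy -A             # confirm nothing restricts pod-to-apiserver traffic

# Inside the test pod
cat /etc/resolv.conf                     # should point at the cluster DNS service (10.43.0.10 by default)
```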