
Configure Master and Worker Nodes to resolve DNS using the kube-dns service first, then the host #141

Open
sputnick opened this issue Oct 11, 2016 · 2 comments

Comments

@sputnick

I was testing Pod-based NFS for Persistent Volumes and ran into an issue on Kube-Solo (and Kube-Cluster likely has the same one). In version 0.9.6 you can successfully create a PersistentVolume that points to an NFS server via its cluster DNS service name, e.g. "nfs-server.default.svc.cluster.local", and then create a PersistentVolumeClaim against it. However, when you create a new Pod/RC that uses that Claim, Kubernetes fails to mount the volume via the NFS provider because the worker/master Nodes cannot resolve the Service's ClusterIP address from the DNS name. This is because /etc/resolv.conf on the nodes does not contain the kube-dns Service IP address, only the local host IP. I manually edited resolv.conf and Kubernetes was immediately able to start everything correctly.
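For reference, this is roughly the shape of the manifests involved (a minimal sketch; the names, capacity and export path here are illustrative, not the exact contents of the attached yamls):

```yaml
# Illustrative only -- names, sizes and paths are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Resolving this Service name is what fails on the node when
    # /etc/resolv.conf does not list the kube-dns Service IP.
    server: nfs-server.default.svc.cluster.local
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```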

So I guess this is an enhancement/bug request. It also seems to be a common issue for k8s users, so it's worth fixing; see kubernetes/kubernetes#8735.

I have attached my test case yamls.
fake-nfs.zip

@rimusz
Member

rimusz commented Oct 11, 2016

@AntonioMeireles is it possible to add the kube-dns Service IP address to /etc/resolv.conf?
What I mean is: is it possible to add it via cloud-config?
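Something along these lines might work via cloud-config (a rough, untested sketch; both the kube-dns Service IP and the host resolver IP below are placeholders that depend on the actual cluster setup):

```yaml
#cloud-config

# Sketch only -- assumes the kube-dns Service IP is 10.100.0.10 and the
# host resolver is 192.168.64.1; substitute whatever the cluster actually uses.
write_files:
  - path: /etc/resolv.conf
    permissions: "0644"
    owner: root
    content: |
      # kube-dns Service IP first, then the host's resolver
      nameserver 10.100.0.10
      nameserver 192.168.64.1
      search default.svc.cluster.local svc.cluster.local cluster.local
```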

@AntonioMeireles
Member

@sputnick thanks for reporting! Going to take a look.
