Allow creation of "Private Clusters" #903
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hi folks! Is it possible to take a look at this issue?
/remove-lifecycle stale
We would like the same thing in https://github.com/kubernetes-sigs/cluster-api-provider-openstack. IIUC there are two constraints here, and neither is provider-specific.
I don't believe this can be resolved here, but I'm very interested in a solution. cc @huxcrux
I believe this is resolved with #1222 and subsequent changes.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/kind feature
Describe the solution you'd like
Currently, CAPG is hard-wired to create GCE load-balancing components with a public IP address for apiserver access. The nodes themselves do not receive public addresses unless explicitly configured to, and the same should apply to the apiserver's endpoint(s). Being able to provision clusters whose access is limited to private IP connectivity would be beneficial, chiefly by reducing the control plane's exposure to the public internet.
Anything else you would like to add:
I'm not 100% sure whether private endpoints should be the default. It would be in line with how address management for nodes currently works, but it would also be a potentially breaking change.
The GKE-specific concept of private clusters is explained here. This feature request is scoped to cover both managed (GKE) and unmanaged (plain Cluster API) clusters.
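For illustration only, a cluster spec might opt into a private apiserver endpoint with a flag along these lines. The `privateEndpoint` field shown here is hypothetical and not part of the CAPG `GCPCluster` API; the project, region, and network values are placeholders:

```yaml
# Hypothetical sketch: the privateEndpoint field does not exist in CAPG;
# it only illustrates the requested behavior.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPCluster
metadata:
  name: private-cluster
spec:
  project: my-gcp-project      # placeholder GCP project ID
  region: us-central1          # placeholder region
  network:
    name: my-private-vpc       # placeholder VPC name
  # Requested behavior: provision the apiserver load balancer with an
  # internal (RFC 1918) address instead of a public one.
  privateEndpoint: true
```

Under such a flag, CAPG would create an internal load balancer for the apiserver, reachable only from within the VPC or over peering/VPN, mirroring how node addressing already defaults to private.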