
work across namespaces #4

Open
ibotty opened this issue Aug 18, 2016 · 6 comments

ibotty commented Aug 18, 2016

There can't be routes with the same hostname in different namespaces. That leaves two options for serving the well-known acme-challenge routes (the webserver).

  • Run the letsencrypt management pod with an integrated webserver in the same namespace as the routes to use (as implemented now; see the route sketch after this list), or
  • distribute the acme-challenges somehow, so that an external webserver pod can access them.
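
To make the first option concrete, a path-based route in the application's namespace could forward challenge requests for the app's hostname to the management pod's webserver. This is only a sketch; the service name, namespace and hostname are placeholders, not something the project prescribes:

```yaml
# Hypothetical path-based route in the application's namespace that sends
# ACME http-01 challenge requests for the hostname to the management pod's
# integrated webserver. Service and namespace names are illustrative only.
apiVersion: v1        # Origin 1.x-era API; route.openshift.io/v1 on newer releases
kind: Route
metadata:
  name: acme-challenge
  namespace: myapp              # same namespace as the application's routes
spec:
  host: www.example.com         # hostname the certificate is requested for
  path: /.well-known/acme-challenge/
  to:
    kind: Service
    name: letsencrypt           # service in front of the integrated webserver
```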

Integrated Webserver

It has one big downside: it can't work across multiple namespaces. You have to deploy one instance, one service account and one acme credential, and set the service account's permissions, for every namespace.

It only uses one pod with three containers per namespace though, and it is conceptually simple.

Separated Webserver

I can think of the following options.

  • local secrets and respawning one webserver container per namespace (if newly attached secrets were mounted automatically, respawning might not be necessary; there is a Kubernetes issue about that somewhere). The management pod would then have to attach the secret volume to the webserver pod, wait for the namespace's pod to become available, and remove the volume again. The problem is that there would be one container per namespace pointlessly running all the time. Maybe that can be mitigated with a timeout.
  • local secrets and spawning one webserver per acme-challenge (a sketch of such a pod follows this list). This results in many one-shot pods. Maybe, if letsencrypt's cron only needs one per namespace, that is feasible.
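
To illustrate the on-demand variant, here is a minimal sketch of what such a one-shot webserver pod could look like; the image, secret name and mount path are assumptions for illustration, not what this repository currently does:

```yaml
# Hypothetical one-shot pod: mounts the namespace-local secret holding the
# challenge token and serves it under /.well-known/acme-challenge/.
# The management pod would create it before validation and delete it afterwards.
apiVersion: v1
kind: Pod
metadata:
  name: acme-challenge-responder
spec:
  restartPolicy: Never
  containers:
  - name: webserver
    image: nginx:alpine               # any static file server would do
    ports:
    - containerPort: 80
    volumeMounts:
    - name: challenge
      mountPath: /usr/share/nginx/html/.well-known/acme-challenge
      readOnly: true
  volumes:
  - name: challenge
    secret:
      secretName: acme-challenge      # written by the management pod beforehand
```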

Both will slow down certificate deployments considerably (scheduling, pulling and starting the webserver pod).

I tend to favor a separated webserver and on-demand spawning. Opinions welcome!


ibotty commented Nov 4, 2016

I am thinking about the following plan.

  • Use kube-cert-manager for most parts.
  • Have one global controller that watches routes, sets certificates and keys on routes (sketched below), and manages the link to kube-cert-manager.
  • For each route to manage,

at least until PalmStoneGames/kube-cert-manager#20 is implemented.
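
As a rough sketch of the controller's input and output per route: the opt-in label below is purely illustrative (not the project's actual label), while the spec.tls fields are the standard OpenShift route API that the controller would fill in.

```yaml
# Sketch: a route opted in via a (made-up) label, whose spec.tls block the
# global controller populates with the certificate and key it obtains
# through kube-cert-manager.
apiVersion: v1
kind: Route
metadata:
  name: www
  namespace: myapp
  labels:
    letsencrypt-managed: "true"   # hypothetical opt-in label
spec:
  host: www.example.com
  to:
    kind: Service
    name: www
  tls:
    termination: edge
    certificate: |                # filled in by the controller
      -----BEGIN CERTIFICATE-----
      ...
    key: |                        # filled in by the controller
      -----BEGIN PRIVATE KEY-----
      ...
```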


loa commented Jan 27, 2017

Perhaps use the DNS challenge instead? This would remove the need to proxy the standard acme-challenge.


jam13 commented May 9, 2017

Have you considered using a modified router image to handle the /.well-known/acme-challenge path globally? It should be possible to use a custom haproxy-config.template to route challenge requests to a handler running in the default namespace, either for every route or limited to routes with a particular label.

AFAIK the template can actually be replaced without modifying the image at all:

https://docs.openshift.com/container-platform/3.3/install_config/router/customized_haproxy_router.html
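
The mechanism from those docs boils down to mounting a ConfigMap with the modified template into the router and pointing TEMPLATE_FILE at it. A rough sketch of the relevant parts of the router DeploymentConfig (the ConfigMap name is arbitrary):

```yaml
# Sketch based on the customized_haproxy_router docs linked above:
# the modified haproxy-config.template lives in a ConfigMap and is
# selected via the TEMPLATE_FILE environment variable.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: router
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: router
        env:
        - name: TEMPLATE_FILE
          value: /var/lib/haproxy/conf/custom/haproxy-config.template
        volumeMounts:
        - name: config-volume
          mountPath: /var/lib/haproxy/conf/custom
      volumes:
      - name: config-volume
        configMap:
          name: customrouter        # holds the modified haproxy-config.template
```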

I was considering having a hack at it, but thought I'd check in case you'd considered it already and discounted it for good reasons.


ibotty commented May 9, 2017 via email


webner commented May 11, 2017

The namespace ownership check can be disabled (at least as of OpenShift Origin 1.5).
Of course you should only do that if you trust all your users, or make sure they can't create routes on their own.

Simply set ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK=true on your router deployment, or create your router with the --disable-namespace-ownership-check flag.

https://docs.openshift.org/latest/architecture/core_concepts/routes.html#disable-namespace-ownership-check
openshift/origin@d2dd466
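
Expressed directly on the router deployment, that is just an environment variable (same effect as the flag):

```yaml
# Setting the documented ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK variable
# on the (default) router DeploymentConfig; equivalent to recreating the
# router with --disable-namespace-ownership-check.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: router
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: router
        env:
        - name: ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK
          value: "true"
```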


ibotty commented May 11, 2017 via email
