
Turn Server #12

Draft: nikhiljha wants to merge 3 commits into master

Conversation

@nikhiljha (Member) commented Apr 26, 2020

closes #10

Still WIP, I need to figure out the following...

  • How do we do multiple dockerfiles in one OCF repository? Or do we just pull the upstream image directly?
  • How do we do secret management?
  • What should the external IP for coturn be, and is it needed at all? The Synapse docs say only the internal IP matters, but the Dockerfile I'm looking at says the internal IP doesn't matter unless the external IP is set.
  • Should everything be namespaced to matrix? I don't even know what that does, but it sounds like a good idea to put matrix and coturn in the same namespace.
  • Mount an external config or keep the internal config like it has now? The config isn't that complex so I think it's fine as is.

BTW: Most of it was taken from https://github.com/ananace/matrix-synapse/tree/master/kubernetes, which has been updated almost immediately after each new matrix/riot release for a while now. Seems pretty reliable to me.

@nikhiljha added the enhancement label Apr 26, 2020

@cg505 (Member) left a comment

Good shit!

The big problem right now is that we don't actually have any way of routing UDP traffic into the cluster. For HTTP traffic, take a look at https://www.ocf.berkeley.edu/docs/staff/backend/kubernetes/#h2_getting-traffic-into-the-cluster. (This is slightly out of date; we use HAProxy instead of NGINX now.) HAProxy does not support UDP, so we need to come up with some other kind of reverse proxy for UDP.

We could try to create a NodePort that exposes the UDP service directly on the masters at the lb IP (169.229.226.79). This might be as simple as creating a NodePort with externalIPs [169.229.226.79], but idk if that would work since that is a virtual IP. (For reference, here is the NodePort used by the normal HTTP flow: https://github.com/ocf/puppet/blob/master/modules/ocf_kubernetes/templates/ingress/ingress_expose.yaml.erb#L13-L36.)
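
Something like the sketch below might work, though I haven't tried it (the Service name and the 3478 UDP port are guesses, not taken from this PR; the selector matches the coturn app label in the diff):

    # Sketch only: expose coturn's UDP port on the lb virtual IP via externalIPs.
    apiVersion: v1
    kind: Service
    metadata:
      name: coturn-udp          # hypothetical name
    spec:
      type: NodePort
      externalIPs:
      - 169.229.226.79
      ports:
      - name: turn-udp
        protocol: UDP
        port: 3478              # assumption: coturn's default TURN port
        targetPort: 3478
      selector:
        app: coturn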

This might be a good topic for discussion at the Kubernetes meeting on Wednesday! Thanks for spurring the conversation!

kubernetes/coturn.yml.erb
    targetPort: 3487
  selector:
    app: coturn
  type: LoadBalancer

Member:
I don't think LoadBalancer will do anything in our cluster.

kubernetes/coturn.yml.erb
apiVersion: v1
data:
  # see https://github.com/matrix-org/synapse/blob/master/docs/turn-howto.md#configuration
  turnserver.conf: |

Member:
imo we should build this into the container, like I mentioned in #11

Member Author:
Hmm... it's not fully static because it needs the INTERNAL_IP variable. I'll need to look into why it needs that in the first place.
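
If it only needs the pod's own IP, maybe the config can still be baked into the image and INTERNAL_IP supplied at runtime via the Downward API, roughly like this (untested sketch, container and variable names assumed):

    # Sketch: pass the pod's own IP into the container so a baked-in
    # turnserver.conf can still reference it at startup.
    containers:
    - name: coturn
      env:
      - name: INTERNAL_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP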

data:
  # `pwgen -s 64 1 | base64 -w0` to generate
  # identical to secret in homeserver.yaml
  auth-secret: U2pKbFNwRjU4d016TW9FM0piVzFxZUFXVTk1Nzhta0JlbzgxSGxwMk9jNFpsbnRZVVI0MXR4VkJpcVJ2a1I1RQo=

Member:
Obviously we need to move these out of here before merging. homeserver.yaml will probably be moved to use ocf/utils#146; here we can just use a standard templated secret (like https://github.com/ocf/grafana/blob/c5e6a1c9c429dc1c0e962cb90842b01b94b5a887/kubernetes/grafana.yml.erb#L107).
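
Roughly like this (untested sketch; the ERB variable name is just a placeholder):

    # Sketch of a templated secret; the ERB variable name is a placeholder.
    apiVersion: v1
    kind: Secret
    metadata:
      name: coturn
    type: Opaque
    stringData:
      auth-secret: <%= coturn_auth_secret %>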

kubernetes/coturn.yml.erb
  name: coturn
  namespace: matrix
spec:
  externalTrafficPolicy: Cluster

Member:
not sure this does anything outside of AWS

kubernetes/coturn.yml.erb
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: status.hostIP

Member:
seems like we would want the kubernetes pod IP, not the host IP (aka the IP of the worker). that would be status.podIP. that said, I don't really know what this is used for so that might be wrong

@cg505 (Member) commented Apr 26, 2020

  • How do we do multiple dockerfiles in one OCF repository? Or do we just pull the upstream image directly?

This is already being worked on in this repo in #7; look at how it's done there. Alternatively, look at ocfweb for a more stable repo that uses multiple Dockerfiles/images.

  • How do we do secret management?

We need docs for this... but basically this is changing pretty frequently. Currently, there are two paths: hostPath mounts (example), or templated secrets injected during CI (example).

Also, ocf/utils#146 is on the horizon which will allow for easier usage of templated injected secrets.

All the secrets themselves are stored in the puppet private share on lightning.
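
For reference, the hostPath flavor looks roughly like this in a pod spec (rough sketch; the path here is made up, copy the convention from an existing repo):

    # Rough sketch of the hostPath route; the path below is hypothetical.
    volumes:
    - name: coturn-secrets
      hostPath:
        path: /opt/ocf/secrets/coturn
        type: Directory
    containers:
    - name: coturn
      volumeMounts:
      - name: coturn-secrets
        mountPath: /etc/coturn/secrets
        readOnly: true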

  • Should everything be namespaced to matrix? I don't even know what that does, but it sounds like a good idea to put matrix and coturn in the same namespace.

idk if they need to be in the same namespace. ocf-kubernetes-deploy will deploy to the app-matrix namespace, so you don't need to specify the namespace. If it should be in another namespace, it should be in a different repo.

  • Mount an external config or keep the internal config like it has now? The config isn't that complex so I think it's fine as is.

imo build it into the Dockerfile. We want to put config in git as much as possible; we should definitely avoid mounting it from some external FS when possible. So, from best to worst:
build into Docker image > mount via ConfigMap or Secret > mount via external FS

@dkess (Member) commented Apr 26, 2020

If it's too hard to figure out how to get UDP into the cluster, you could opt to not use Kubernetes for this and instead make a VM.

Labels: blocked, enhancement (New feature or request)
Projects: None yet
Development: Successfully merging this pull request may close these issues: Add TURN
4 participants