Commit

Issues/2 (#26)
* minor updates

* how to use as a sidecar for kubernetes dashboard
bsctl authored Oct 4, 2020
1 parent 1f53e91 commit bbcddde
Showing 5 changed files with 362 additions and 2 deletions.
240 changes: 240 additions & 0 deletions deploy/sidecar-setup.yaml
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: ns-filter
        image: quay.io/clastix/capsule-ns-filter
        imagePullPolicy: Always
        command:
        - /capsule-ns-filter
        - --k8s-control-plane-url=https://kubernetes.default.svc
        - --capsule-user-group=capsule.clastix.io
        - --zap-devel
        - --zap-log-level=10
        - --enable-ssl=true
        - --ssl-cert-path=/opt/certs/tls.crt
        - --ssl-key-path=/opt/certs/tls.key
        volumeMounts:
        - name: ns-filter-certs
          mountPath: /opt/certs
        ports:
        - containerPort: 9001
          name: http
          protocol: TCP
        resources: {}
      - name: dashboard
        image: kubernetesui/dashboard:v2.0.4
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        - --namespace=cmp-system
        - --tls-cert-file=tls.crt
        - --tls-key-file=tls.key
        - --apiserver-host=https://localhost:9001
        - --kubeconfig=/opt/.kube/config
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        - mountPath: /opt/.kube
          name: kubeconfig
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 1001
          runAsGroup: 2001
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: ns-filter-certs
        secret:
          secretName: ns-filter-certs
      - name: tmp-volume
        emptyDir: {}
      - name: kubeconfig
        configMap:
          defaultMode: 420
          name: kubernetes-dashboard-kubeconfig
      serviceAccountName: kubernetes-dashboard
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-dashboard-kubeconfig
  namespace: kubernetes-dashboard
data:
  config: |
    kind: Config
    apiVersion: v1
    clusters:
    - cluster:
        insecure-skip-tls-verify: true
        server: https://localhost:9001
      name: localhost
    contexts:
    - context:
        cluster: localhost
        user: kubernetes-admin
      name: admin@localhost
    current-context: admin@localhost
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: dashboard.clastix.io
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
---
apiVersion: v1
data:
  tls.crt: REDACTED
  tls.key: REDACTED
kind: Secret
metadata:
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
data:
  tls.crt: REDACTED
  tls.key: REDACTED
kind: Secret
metadata:
  name: ns-filter-certs
  namespace: kubernetes-dashboard
type: Opaque
```
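
The `REDACTED` values in the two Secrets above are the base64-encoded contents of the certificate and key files, collapsed onto a single line. A minimal sketch of producing such a value (using stand-in content here; real certificate material would be used in practice):

```shell
# Each value under "data:" in a Secret is the base64 encoding of a file.
# A stand-in file is used for illustration.
printf 'stand-in certificate bytes\n' > tls.crt

b64=$(base64 < tls.crt | tr -d '\n')
echo "tls.crt: ${b64}"

# Decoding round-trips to the original file content:
echo "${b64}" | base64 -d
```

The same encoding applies to `tls.key`; alternatively, `kubectl create secret` performs the encoding for you.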
Binary file added docs/images/kubernetes-dashboard.png

Binary file added docs/images/lens.png
118 changes: 118 additions & 0 deletions docs/sidecar.md
# Running capsule-ns-filter as a sidecar container
The `capsule-ns-filter` can be deployed as a sidecar container for server-side Kubernetes dashboards. It intercepts the requests that the dashboard's server-side backend sends towards the Kubernetes API server and proxies them on its behalf.

```
capsule-ns-filter
+------------+
|:9001 +--------+
+------------+ v
+-----------+ | | +------------+
browser +------>+:443 +-------->+:8443 | |:6443 |
+-----------+ +------------+ +------------+
ingress-controller dashboard kube-apiserver
(ssl-passthrough) server-side backend
```

The server-side backend of the dashboard must allow specifying the URL of the Kubernetes API server. For example, the [sidecar-setup.yaml](../deploy/sidecar-setup.yaml) manifest shows a deployment with the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) and an ingress controller in ssl-passthrough mode.

Run the `capsule-ns-filter` container in the pod with SSL enabled, i.e. `--enable-ssl=true`, passing a valid certificate and key through a secret.

```yaml
...
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: ns-filter
        image: quay.io/clastix/capsule-ns-filter
        imagePullPolicy: IfNotPresent
        command:
        - /capsule-ns-filter
        - --k8s-control-plane-url=https://kubernetes.default.svc
        - --capsule-user-group=capsule.clastix.io
        - --zap-log-level=5
        - --enable-ssl=true
        - --ssl-cert-path=/opt/certs/tls.crt
        - --ssl-key-path=/opt/certs/tls.key
        volumeMounts:
        - name: ns-filter-certs
          mountPath: /opt/certs
        ports:
        - containerPort: 9001
          name: http
          protocol: TCP
...
```
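
The `tls.crt` and `tls.key` files mounted at `/opt/certs` come from the `ns-filter-certs` secret. As a minimal sketch, a self-signed pair could be generated with `openssl` (the CN below is an arbitrary placeholder; in production you would rather use a certificate issued by a trusted CA):

```shell
# Generate a self-signed certificate and key for the ns-filter sidecar.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -days 365 \
  -subj "/CN=capsule-ns-filter"

# Then store the pair in the secret mounted by the sidecar (needs a cluster):
# kubectl create secret tls ns-filter-certs \
#   --cert=tls.crt --key=tls.key -n kubernetes-dashboard
```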

In the same pod, run the Kubernetes Dashboard in _"out-of-cluster"_ mode with `--apiserver-host=https://localhost:9001`, so that all requests are sent to the `capsule-ns-filter` sidecar container:

```yaml
...
      - name: dashboard
        image: kubernetesui/dashboard:v2.0.4
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        - --namespace=cmp-system
        - --tls-cert-file=tls.crt
        - --tls-key-file=tls.key
        - --apiserver-host=https://localhost:9001
        - --kubeconfig=/opt/.kube/config
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        - mountPath: /opt/.kube
          name: kubeconfig
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
...
```

Make sure to pass the dashboard a valid `kubeconfig` file pointing to the `capsule-ns-filter` sidecar container rather than directly to the `kube-apiserver`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-dashboard-kubeconfig
  namespace: kubernetes-dashboard
data:
  config: |
    kind: Config
    apiVersion: v1
    clusters:
    - cluster:
        insecure-skip-tls-verify: true
        server: https://localhost:9001 # <- point to the capsule-ns-filter
      name: localhost
    contexts:
    - context:
        cluster: localhost
        user: kubernetes-admin # <- dashboard has cluster-admin permissions
      name: admin@localhost
    current-context: admin@localhost
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
```
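
If you prefer to build the ConfigMap from an existing local `kubeconfig` file rather than writing the manifest by hand, the file body just needs to be indented under the `config: |` key. A sketch without `kubectl` (the stand-in kubeconfig below mirrors the example above; with a cluster at hand, `kubectl create configmap --from-file` achieves the same):

```shell
# Stand-in kubeconfig pointing at the capsule-ns-filter sidecar.
cat > config <<'EOF'
kind: Config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:9001
  name: localhost
EOF

# Wrap the file into a ConfigMap manifest, indenting the body by 4 spaces
# so it nests under the "config: |" block scalar.
{
  cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-dashboard-kubeconfig
  namespace: kubernetes-dashboard
data:
  config: |
EOF
  sed 's/^/    /' config
} > kubeconfig-cm.yaml

grep -q 'server: https://localhost:9001' kubeconfig-cm.yaml && echo ok
```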

After starting the dashboard, log in as a Tenant Owner user (e.g. `alice`, depending on the authentication method in use) and check that you can only see the namespaces you own:

![Dashboard UI namespace page](images/kubernetes-dashboard.png)

6 changes: 4 additions & 2 deletions docs/standalone.md
# Running capsule-ns-filter for kubectl
The `capsule-ns-filter` can be deployed in standalone mode, i.e. running as a pod that bridges any Kubernetes client to the `kube-apiserver`. Use this mode to provide access to client-side command-line tools like `kubectl` or even client-side dashboards.

You can use an Ingress Controller to expose the `capsule-ns-filter` endpoint or, depending on your environment, expose it with either a `NodePort` or a `LoadBalancer` service. Alternatively, use `HostPort` or `HostNetwork` mode.

```
+-----------+ +-----------+ +-----------+
In case the OIDC provider uses a self-signed CA certificate, make sure to specify it with
The service account used for `capsule-ns-filter` needs to have `cluster-admin` permissions.

## Configuring client-only dashboards
If you're using a client-only dashboard, for example [Lens](https://k8slens.dev/), the `capsule-ns-filter` can be used as in the previous `kubectl` example, since Lens just needs a `kubeconfig` file. Assuming you use a `kubeconfig` file containing a valid OIDC token issued for the `alice` user, you can access the cluster with the Lens dashboard and see only the namespaces belonging to Alice's tenants:

![Lens dashboard](images/lens.png)

For web-based dashboards, like the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard), the `capsule-ns-filter` can be deployed as a [sidecar](sidecar.md) container on the backend side of the dashboard.
