Knot is a complete environment for doing actual work on Kubernetes. It includes a full set of web-based tools to help you unleash your productivity, without ever needing to use the command line. At its core, the Knot dashboard supplies the landing page for users, allowing them to launch notebooks and other services, design workflows, and specify execution parameters through a user-friendly interface. The dashboard manages users, wires up relevant storage to the appropriate paths inside running containers, securely provisions multiple services under one externally-accessible HTTPS endpoint while keeping them isolated in per-user Kubernetes namespaces, and provides an identity service for OAuth 2.0/OIDC-compatible applications.
The Knot installation includes JupyterHub, Argo Workflows, Harbor, and Grafana/Prometheus, all accessible through the dashboard. Behind the scenes, other popular tools are automatically installed to help with the integration, such as cert-manager, the NGINX Ingress Controller, Vouch Proxy, and the NFS CSI Driver. Knot also uses Helm charts internally for implementing service templates.
Check out the documentation (also available in the Knot dashboard under "Documentation" in the user menu), which includes installation instructions, a user guide, and technical notes on how Knot works internally. The Knot dashboard is written in Python using Django.
To deploy Knot you need a typical Kubernetes installation, plus Helm, the Helm diff plugin, and Helmfile. We develop, test, and run Knot on Kubernetes 1.27.x.
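As a quick sanity check before deploying, you can confirm that each prerequisite is installed and reachable (standard version commands, nothing Knot-specific):

```bash
# Check that the deployment prerequisites are available
kubectl version    # Kubernetes client and server versions
helm version       # Helm
helm plugin list   # should include the "diff" plugin
helmfile version   # Helmfile
```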
For quickly trying out Knot, apply the latest Knot `helmfile.yaml` with:

```bash
export KNOT_HOST=example.com
helmfile -f git::https://github.com/CARV-ICS-FORTH/knot.git@helmfile.yaml sync
```
The `KNOT_HOST` variable is necessary. By default, we use cert-manager to self-sign a wildcard certificate for the given host. You need to make sure that at the DNS level, both the domain name and its wildcard point to your server (i.e., both `example.com` and `*.example.com`). If you already know your external IP address, you can use a nip.io name (i.e., set `KNOT_HOST` to `<your IP address>.nip.io`).
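To verify the DNS setup, a quick check with standard tools (using the example host from above) is:

```bash
# Both the bare domain and any name under the wildcard should
# resolve to your server's external IP address.
dig +short example.com
dig +short anything.example.com
```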
For storage, Knot uses two persistent volume claims: one for internal state (shared by all services) and one for user files. You can use helmfile variables to set up Knot on top of existing PVCs, or skip the storage controller and directly use local storage (useful for single-server, bare-metal setups).
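If you plan to reuse existing PVCs, you can first list what is available (a plain kubectl query, nothing Knot-specific):

```bash
# List persistent volume claims across all namespaces
kubectl get pvc --all-namespaces
```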
Deployment options are discussed in the deployment chapter of the documentation. To customize deployment values like volume sizes and OAuth secrets, you can easily create a custom helmfile that extends the default (or override values directly, as sketched below).
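As a rough sketch of the override idea, helmfile also accepts state values on the command line. The key names below are hypothetical and only for illustration; the deployment chapter lists the variables Knot actually defines:

```bash
# Hypothetical value overrides at deploy time; the key names
# (stateVolumeSize, filesVolumeSize) are illustrative only.
helmfile -f git::https://github.com/CARV-ICS-FORTH/knot.git@helmfile.yaml \
    --state-values-set stateVolumeSize=10Gi,filesVolumeSize=100Gi \
    sync
```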
To develop Knot in a local Kubernetes environment, like the one provided by Docker Desktop for macOS (tested with versions >= 4.21.x, which use Kubernetes 1.27.x), first create and populate the Python virtual environment with:
```bash
make prepare-develop
```
Then install Knot in a special configuration, where all requests to the dashboard are forwarded locally:
```bash
make deploy-sync
```
Start the local server and async task worker with:
```bash
make develop
```
When done, point your browser to `https://<your IP address>.nip.io` and log in as "admin".
Container images for the Knot dashboard are available.
To build your own locally, run:

```bash
make container
```
To change the version, edit `VERSION`. Other variables, like the `kubectl` version and the container registry name, are set in the `Makefile`.
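Since these are ordinary Makefile variables, they can also be overridden per invocation instead of editing the file; the variable name below is a guess for illustration, so check the `Makefile` for the real ones:

```bash
# Hypothetical Makefile variable override at invocation time;
# REGISTRY_NAME is illustrative, not necessarily the actual name.
make container REGISTRY_NAME=registry.example.com
```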
To test the container in a local Kubernetes environment, run:
```bash
make test-sync
```
Then point your browser to `https://<your IP address>.nip.io` and log in.
To build and push the container image, run:
```bash
make container-push
```
We use `buildx` to build the Knot container for multiple architectures (`linux/amd64` and `linux/arm64`) automatically when a new version tag is pushed. This also triggers publishing the corresponding Knot dashboard Helm chart.
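For reference, a multi-architecture `buildx` invocation looks roughly like the following (a sketch with a placeholder image reference, assuming a Dockerfile at the repository root; the actual build is driven by the Makefile and CI):

```bash
# Sketch of a multi-architecture build and push with buildx;
# the image name and tag are placeholders.
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --tag registry.example.com/knot-dashboard:latest \
    --push .
```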
Knot (previously known as Karvdash) was realized in the context of project EVOLVE, which received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825061. This work is also partly supported by project EUPEX, which is funded from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 101033975. The JU receives support from the European Union's Horizon 2020 research and innovation programme and France, Germany, Italy, Greece, United Kingdom, the Czech Republic, and Croatia.