Kubernetes

Starting at version 1.5.0, Otoroshi provides native Kubernetes support. Multiple Otoroshi jobs (that are actually Kubernetes controllers) are provided in order to:

- sync Kubernetes secrets of type kubernetes.io/tls to Otoroshi certificates
- act as a standard ingress controller (supporting Ingress objects)
- provide Custom Resource Definitions (CRDs) to manage Otoroshi entities from Kubernetes and act as an ingress controller with its own resources

Installing otoroshi on your kubernetes cluster

Warning

You need to have cluster admin privileges to install Otoroshi and its service account, role mapping and CRDs on a Kubernetes cluster. We also advise you to create a dedicated namespace (you can name it otoroshi, for example) to install Otoroshi into.

If you want to deploy Otoroshi into your Kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.

You can also create a kustomization.yaml file with a remote base:

bases:
- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v1.5.0-beta.6

Then deploy it with kubectl apply -k ./overlays/myoverlay.
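
A typical overlay is just a directory containing its own kustomization.yaml. A minimal sketch (the overlay path and target namespace are arbitrary examples, to be adapted to your setup):

# ./overlays/myoverlay/kustomization.yaml
namespace: otoroshi
bases:
- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v1.5.0-beta.6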

You can also use Helm to deploy a simple Otoroshi cluster on your Kubernetes cluster:

helm repo add otoroshi https://maif.github.io/otoroshi/helm
helm install my-otoroshi otoroshi/otoroshi
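
If you created a dedicated namespace as advised above, the standard Helm flags let you target it, for example:

helm install my-otoroshi otoroshi/otoroshi --namespace otoroshi --create-namespace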

Below, you will find examples of deployment. Do not hesitate to adapt them to your needs. Those descriptors have value placeholders that you will need to replace with actual values like

env:
  - name: APP_STORAGE_ROOT
    value: otoroshi
  - name: APP_DOMAIN
    value: ${domain}

You will have to edit it to make it look like

env:
  - name: APP_STORAGE_ROOT
    value: otoroshi
  - name: APP_DOMAIN
    value: 'apis.my.domain'

If you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of Otoroshi

apiVersion: v1
kind: Secret
metadata:
  name: otoroshi-config
type: Opaque
stringData:
  oto.conf: |
    include "application.conf"
    app {
      storage = "redis"
      domain = "apis.my.domain"
    }

and mount it in the Otoroshi container

apiVersion: apps/v1
kind: Deployment
metadata:
  name: otoroshi-deployment
spec:
  selector:
    matchLabels:
      run: otoroshi-deployment
  template:
    metadata:
      labels:
        run: otoroshi-deployment
    spec:
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      containers:
      - image: maif/otoroshi:1.5.0-beta.6-jdk11
        imagePullPolicy: IfNotPresent
        name: otoroshi
        args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']
        ports:
        - containerPort: 8080
          name: "http"
          protocol: TCP
        - containerPort: 8443
          name: "https"
          protocol: TCP
        volumeMounts:
        - name: otoroshi-config
          mountPath: "/usr/app/otoroshi/conf"
          readOnly: true
      volumes:
      - name: otoroshi-config
        secret:
          secretName: otoroshi-config
  ...
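
As an alternative to writing the Secret manifest by hand, the same secret can be created directly from a local copy of the configuration file, a sketch assuming it is saved as ./oto.conf:

kubectl create secret generic otoroshi-config --from-file=oto.conf=./oto.conf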

You can also create several secrets, one for each placeholder, mount them to the Otoroshi container and then use their file paths as values

env:
  - name: APP_STORAGE_ROOT
    value: otoroshi
  - name: APP_DOMAIN
    value: 'file:///the/path/of/the/secret/file'

You can use the same trick in the configuration file itself.
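
For example, a value in oto.conf can point at a mounted secret file the same way (a sketch reusing the placeholder path from above):

include "application.conf"
app {
  storage = "redis"
  domain = "file:///the/path/of/the/secret/file"
}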

Note on bare metal kubernetes cluster installation

Note

Bare metal Kubernetes clusters don't come with support for external load balancers (services of type LoadBalancer). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the Kubernetes cluster. You can use projects like MetalLB that provide software LoadBalancer services for bare metal clusters, or you can use and customize the examples below.

Warning

We don't recommend running Otoroshi behind an existing ingress controller (or anything like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.

Common manifests

The following manifests are always needed. They create the Otoroshi CRDs, tokens, roles, etc. The Redis deployment is not mandatory, it's just an example. You can use your own existing setup.

rbac.yaml

---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: otoroshi-admin-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otoroshi-admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otoroshi-admin-user
subjects:
- kind: ServiceAccount
  name: otoroshi-admin-user
  namespace: $namespace
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: otoroshi-admin-user
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  - configmaps
  - deployments
  - namespaces
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - update
  - create
  - delete
- apiGroups:
  - extensions
  resources:
  - ingresses
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  - mutatingwebhookconfigurations
  verbs:
  - get
  - update
  - patch
- apiGroups:
  - proxy.otoroshi.io
  resources:
  - service-groups
  - service-descriptors
  - apikeys
  - certificates
  - global-configs
  - jwt-verifiers
  - auth-modules
  - scripts
  - tcp-services
  - data-exporters
  - admins
  - organizations
  - teams
  verbs:
  - get
  - list
  - watch

crds.yaml

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: service-groups.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  #validation:
  #  openAPIV3Schema:
  #    type: object
  #    properties:
  #      spec:
  #        type: object
  #        properties:
  #          name:
  #            type: string
  #          description:
  #            type: string
  #          metadata:
  #            type: object
  names:
    kind: ServiceGroup
    plural: service-groups
    singular: service-group
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: organizations.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  names:
    kind: Organization
    plural: organizations
    singular: organization
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: teams.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  names:
    kind: Team
    plural: teams
    singular: team
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: service-descriptors.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  #validation:
  #  openAPIV3Schema:
  #    type: object
  #    properties:
  #      spec:
  #        type: object
  #        properties:
  #          group:
  #            type: string
  #          name:
  #            type: string
  #          env:
  #            type: string
  #          domain:
  #            type: string
  #          subdomain:
  #            type: string
  #          root:
  #            type: string
  #          matchingRoot:
  #            type: string
  #          stripPath:
  #            type: boolean
  #          enabled:
  #            type: boolean
  #          userFacing:
  #            type: boolean
  #          privateApp:
  #            type: boolean
  #          forceHttps:
  #            type: boolean
  #          maintenanceMode:
  #            type: boolean
  #          buildMode:
  #            type: boolean
  #          strictlyPrivate:
  #            type: boolean
  #          sendOtoroshiHeadersBack:
  #            type: boolean
  #          readOnly:
  #            type: boolean
  #          xForwardedHeaders:
  #            type: boolean
  #          overrideHost:
  #            type: boolean
  #          allowHttp10:
  #            type: boolean
  #          logAnalyticsOnServer:
  #            type: boolean
  #          useAkkaHttpClient:
  #            type: boolean
  #          useNewWSClient:
  #            type: boolean
  #          tcpUdpTunneling:
  #            type: boolean
  #          detectApiKeySooner:
  #            type: boolean
  #          letsEncrypt:
  #            type: boolean
  #          enforceSecureCommunication:
  #            type: boolean
  #          sendInfoToken:
  #            type: boolean
  #          sendStateChallenge:
  #            type: boolean
  #          securityExcludedPatterns:
  #            type: array
  #          publicPatterns:
  #            type: array
  #          privatePatterns:
  #            type: array
  #          additionalHeaders:
  #            type: object
  #          additionalHeadersOut:
  #            type: object
  #          missingOnlyHeadersIn:
  #            type: object
  #          missingOnlyHeadersOut:
  #            type: object
  #          removeHeadersIn:
  #            type: array
  #          removeHeadersOut:
  #            type: array
  #          headersVerification:
  #            type: object
  #          matchingHeaders:
  #            type: object
  #          metadata:
  #            type: object
  #          hosts:
  #            type: array
  #          paths:
  #            type: array
  #          issueCert:
  #            type: boolean
  #          issueCertCA:
  #            type: string
  names:
    kind: ServiceDescriptor
    plural: service-descriptors
    singular: service-descriptor
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: apikeys.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  #validation:
  #  openAPIV3Schema:
  #    type: object
  #    properties:
  #      spec:
  #        type: object
  #        properties:
  #          daikokuToken:
  #            type: string
  #          secretName:
  #            type: string
  #          exportSecret:
  #            type: boolean
  #          clientId:
  #            type: string
  #          clientSecret:
  #            type: string
  #          clientName:
  #            type: string
  #          authorizedEntities:
  #            type: array
  #          group:
  #            type: string
  #          enabled:
  #            type: boolean
  #          readOnly:
  #            type: boolean
  #          allowClientIdOnly:
  #            type: boolean
  #          throttlingQuota:
  #            type: integer
  #            format: int64
  #          dailyQuota:
  #            type: integer
  #            format: int64
  #          monthlyQuota:
  #            type: integer
  #            format: int64
  #          constrainedServicesOnly:
  #            type: boolean
  #          validUntil:
  #            type: integer
  #            format: int64
  #          tags:
  #            type: array
  #          metadata:
  #            type: object
  names:
    kind: ApiKey
    plural: apikeys
    singular: apikey
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  #validation:
  #  openAPIV3Schema:
  #    type: object
  #    properties:
  #      spec:
  #        type: object
  #        required: ["csr"]
  #        properties:
  #          name:
  #            type: string
  #          description:
  #            type: string
  #          secretName:
  #            type: string
  #          exportSecret:
  #            type: boolean
  #          selfSigned:
  #            type: boolean
  #          ca:
  #            type: boolean
  #          autoRenew:
  #            type: boolean
  #          letsEncrypt:
  #            type: boolean
  #          client:
  #            type: boolean
  #          entityMetadata:
  #            type: object
  #          csr:
  #            type: object
  names:
    kind: Certificate
    plural: certificates
    singular: certificate
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: global-configs.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  names:
    kind: GlobalConfig
    plural: global-configs
    singular: global-config
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: jwt-verifiers.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  names:
    kind: JwtVerifier
    plural: jwt-verifiers
    singular: jwt-verifier
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: auth-modules.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  names:
    kind: AuthModule
    plural: auth-modules
    singular: auth-module
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: scripts.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  #validation:
  #  openAPIV3Schema:
  #    type: object
  #    properties:
  #      spec:
  #        type: object
  #        required: ["code", "type"]
  #        properties:
  #          name:
  #            type: string
  #          desc:
  #            type: string
  #          code:
  #            type: string
  #          type:
  #            type: string
  #          metadata:
  #            type: object
  names:
    kind: Script
    plural: scripts
    singular: script
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tcp-services.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  names:
    kind: TcpService
    plural: tcp-services
    singular: tcp-service
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: data-exporters.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  names:
    kind: DataExporter
    plural: data-exporters
    singular: data-exporter
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: admins.proxy.otoroshi.io
spec:
  group: proxy.otoroshi.io
  version: v1alpha1
  preserveUnknownFields: true
  names:
    kind: Admin
    plural: admins
    singular: admin
  scope: Namespaced

redis.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-leader-service
spec:
  ports:
  - port: 6379
    name: redis
  selector:
    run: redis-leader-deployment
---
apiVersion: v1
kind: Service
metadata:
  name: redis-follower-service
spec:
  ports:
  - port: 6379
    name: redis
  selector:
    run: redis-follower-deployment
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-leader-deployment
spec:
  selector:
    matchLabels:
      run: redis-leader-deployment
  serviceName: redis-leader-service
  replicas: 1
  template:
    metadata:
      labels:
        run: redis-leader-deployment
    spec:
      containers:
      - name: redis-leader-container
        image: redis
        imagePullPolicy: Always
        command: ["redis-server", "--appendonly", "yes"]
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - name: redis-leader-storage
          mountPath: /data
          readOnly: false
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
  volumeClaimTemplates:
  - metadata:
      name: redis-leader-storage
      labels:
        name: redis-leader-storage
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-follower-deployment
spec:
  selector:
    matchLabels:
      run: redis-follower-deployment
  serviceName: redis-follower-service
  replicas: 1
  template:
    metadata:
      labels:
        run: redis-follower-deployment
    spec:
      containers:
      - name: redis-follower-container
        image: redis
        imagePullPolicy: Always
        command: ["redis-server", "--appendonly", "yes", "--slaveof", "redis-leader-service", "6379"]
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - name: redis-follower-storage
          mountPath: /data
          readOnly: false
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
  volumeClaimTemplates:
  - metadata:
      name: redis-follower-storage
      labels:
        name: redis-follower-storage
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
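
Once adapted to your needs, these manifests can be applied like any other Kubernetes resources, for example in a dedicated otoroshi namespace:

kubectl apply -f rbac.yaml -f crds.yaml -f redis.yaml -n otoroshi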

Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster

Here we have 2 replicas connected to the same Redis instance. Nothing fancy. We use a service of type LoadBalancer to expose Otoroshi to the rest of the world. You have to setup your DNS to bind the Otoroshi domain names to the LoadBalancer external CNAME (see the example below).

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: otoroshi-deployment
spec:
  selector:
    matchLabels:
      run: otoroshi-deployment
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        run: otoroshi-deployment
    spec:
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      containers:
      - image: maif/otoroshi:1.5.0-beta.6-jdk11
        imagePullPolicy: IfNotPresent
        name: otoroshi
        ports:
        - containerPort: 8080
          name: "http"
          protocol: TCP
        - containerPort: 8443
          name: "https"
          protocol: TCP
        env:
        - name: APP_STORAGE_ROOT
          value: otoroshi
        - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
          value: ${password}
        - name: APP_DOMAIN
          value: ${domain}
        - name: APP_STORAGE
          value: lettuce
        - name: REDIS_URL
          value: ${redisUrl}
          # value: redis://redis-leader-service:6379/0
        - name: ADMIN_API_CLIENT_ID
          value: ${clientId}
        - name: ADMIN_API_CLIENT_SECRET
          value: ${clientSecret}
        - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
          value: otoroshi-api-service.${namespace}.svc.cluster.local
        - name: OTOROSHI_SECRET
          value: ${otoroshiSecret}
        - name: HEALTH_LIMIT
          value: "5000"
        - name: SSL_OUTSIDE_CLIENT_AUTH
          value: Want
        - name: HTTPS_WANT_CLIENT_AUTH
          value: "true"
        - name: OTOROSHI_INITIAL_CUSTOMIZATION
          value: >
            {
              "config":{
                "tlsSettings": {
                  "defaultDomain": "www.${domain}",
                  "randomIfNotFound": false
                },
                "scripts":{
                  "enabled":true,
                  "sinkRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
                  ],
                  "sinkConfig": {},
                  "jobRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
                  ],
                  "jobConfig":{
                    "KubernetesConfig": {
                      "trust": false,
                      "namespaces": [
                        "*"
                      ],
                      "labels": {},
                      "namespacesLabels": {},
                      "ingressClasses": [
                        "otoroshi"
                      ],
                      "defaultGroup": "default",
                      "ingresses": false,
                      "crds": true,
                      "coreDnsIntegration": false,
                      "coreDnsIntegrationDryRun": false,
                      "kubeLeader": false,
                      "restartDependantDeployments": false,
                      "watch": false,
                      "syncDaikokuApikeysOnly": false,
                      "kubeSystemNamespace": "kube-system",
                      "coreDnsConfigMapName": "coredns",
                      "coreDnsDeploymentName": "coredns",
                      "corednsPort": 53,
                      "otoroshiServiceName": "otoroshi-service",
                      "otoroshiNamespace": "${namespace}",
                      "clusterDomain": "cluster.local",
                      "syncIntervalSeconds": 60,
                      "coreDnsEnv": null,
                      "watchTimeoutSeconds": 60,
                      "watchGracePeriodSeconds": 5,
                      "mutatingWebhookName": "otoroshi-admission-webhook-injector",
                      "validatingWebhookName": "otoroshi-admission-webhook-validation",
                      "templates": {
                        "service-group": {},
                        "service-descriptor": {},
                        "apikeys": {},
                        "global-config": {},
                        "jwt-verifier": {},
                        "tcp-service": {},
                        "certificate": {},
                        "auth-module": {},
                        "script": {},
                        "organizations": {},
                        "teams": {},
                        "webhooks": {
                          "flags": {
                            "requestCert": true,
                            "originCheck": true,
                            "tokensCheck": true,
                            "displayEnv": false,
                            "tlsTrace": false
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
        - name: JAVA_OPTS
          value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          failureThreshold: 1
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        resources:
          # requests:
          #   cpu: "100m"
          #   memory: "50Mi"
          # limits:
          #   cpu: "4G"
          #   memory: "4Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-service
spec:
  selector:
    run: otoroshi-deployment
  ports:
  - port: 8080
    name: "http"
    targetPort: "http"
  - port: 8443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-external-service
spec:
  type: LoadBalancer
  selector:
    run: otoroshi-deployment
  ports:
  - port: 80
    name: "http"
    targetPort: "http"
  - port: 443
    name: "https"
    targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1alpha1
kind: Certificate
metadata:
  name: otoroshi-service-certificate
spec:
  description: certificate for otoroshi-service
  autoRenew: true
  csr:
    issuer: CN=Otoroshi Root
    hosts:
    - otoroshi-service
    - otoroshi-service.${namespace}.svc.cluster.local
    - otoroshi-api-service.${namespace}.svc.cluster.local
    - otoroshi.${domain}
    - otoroshi-api.${domain}
    - privateapps.${domain}
    key:
      algo: rsa
      size: 2048
    subject: uid=otoroshi-service-cert, O=Otoroshi
    client: false
    ca: false
    duration: 31536000000
    signatureAlg: SHA256WithRSAEncryption
    digestAlg: SHA-256

dns.example

otoroshi.your.otoroshi.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
otoroshi-api.your.otoroshi.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
privateapps.your.otoroshi.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
api1.another.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
api2.another.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
*.api.the.api.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
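
The generated CNAME of the LoadBalancer can be read back from the service status once the cloud provider has provisioned it, for example:

kubectl get svc otoroshi-external-service -n otoroshi -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'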

Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster

Here we have 2 replicas connected to the same Redis instance. Nothing fancy. The Otoroshi instances are exposed as nodePort so you'll have to add a load balancer in front of your Kubernetes nodes to route external (TCP) traffic to your Otoroshi instances. You have to setup your DNS to bind the Otoroshi domain names to your load balancer (see the example below).

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: otoroshi-deployment
spec:
  selector:
    matchLabels:
      run: otoroshi-deployment
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        run: otoroshi-deployment
    spec:
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      containers:
      - image: maif/otoroshi:1.5.0-beta.6-jdk11
        imagePullPolicy: IfNotPresent
        name: otoroshi
        ports:
        - containerPort: 8080
          name: "http"
          protocol: TCP
        - containerPort: 8443
          name: "https"
          protocol: TCP
        env:
        - name: APP_STORAGE_ROOT
          value: otoroshi
        - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
          value: ${password}
        - name: APP_DOMAIN
          value: ${domain}
        - name: APP_STORAGE
          value: lettuce
        - name: REDIS_URL
          value: ${redisUrl}
          # value: redis://redis-leader-service:6379/0
        - name: ADMIN_API_CLIENT_ID
          value: ${clientId}
        - name: ADMIN_API_CLIENT_SECRET
          value: ${clientSecret}
        - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
          value: otoroshi-api-service.${namespace}.svc.cluster.local
        - name: OTOROSHI_SECRET
          value: ${otoroshiSecret}
        - name: HEALTH_LIMIT
          value: "5000"
        - name: SSL_OUTSIDE_CLIENT_AUTH
          value: Want
        - name: HTTPS_WANT_CLIENT_AUTH
          value: "true"
        - name: OTOROSHI_INITIAL_CUSTOMIZATION
          value: >
            {
              "config":{
                "tlsSettings": {
                  "defaultDomain": "www.${domain}",
                  "randomIfNotFound": false
                },
                "scripts":{
                  "enabled":true,
                  "sinkRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
                  ],
                  "sinkConfig": {},
                  "jobRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
                  ],
                  "jobConfig":{
                    "KubernetesConfig": {
                      "trust": false,
                      "namespaces": [
                        "*"
                      ],
                      "labels": {},
                      "namespacesLabels": {},
                      "ingressClasses": [
                        "otoroshi"
                      ],
                      "defaultGroup": "default",
                      "ingresses": false,
                      "crds": true,
                      "coreDnsIntegration": false,
                      "coreDnsIntegrationDryRun": false,
                      "kubeLeader": false,
                      "restartDependantDeployments": false,
                      "watch": false,
                      "syncDaikokuApikeysOnly": false,
                      "kubeSystemNamespace": "kube-system",
                      "coreDnsConfigMapName": "coredns",
                      "coreDnsDeploymentName": "coredns",
                      "corednsPort": 53,
                      "otoroshiServiceName": "otoroshi-service",
                      "otoroshiNamespace": "${namespace}",
                      "clusterDomain": "cluster.local",
                      "syncIntervalSeconds": 60,
                      "coreDnsEnv": null,
                      "watchTimeoutSeconds": 60,
                      "watchGracePeriodSeconds": 5,
                      "mutatingWebhookName": "otoroshi-admission-webhook-injector",
                      "validatingWebhookName": "otoroshi-admission-webhook-validation",
                      "templates": {
                        "service-group": {},
                        "service-descriptor": {},
                        "apikeys": {},
                        "global-config": {},
                        "jwt-verifier": {},
                        "tcp-service": {},
                        "certificate": {},
                        "auth-module": {},
                        "script": {},
                        "organizations": {},
                        "teams": {},
                        "webhooks": {
                          "flags": {
                            "requestCert": true,
                            "originCheck": true,
                            "tokensCheck": true,
                            "displayEnv": false,
                            "tlsTrace": false
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
        - name: JAVA_OPTS
          value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          failureThreshold: 1
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        resources:
          # requests:
          #   cpu: "100m"
          #   memory: "50Mi"
          # limits:
          #   cpu: "4G"
          #   memory: "4Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-service
spec:
  selector:
    run: otoroshi-deployment
  ports:
  - port: 8080
    name: "http"
    targetPort: "http"
  - port: 8443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-external-service
spec:
  type: NodePort
  selector:
    run: otoroshi-deployment
  ports:
  - port: 80
    name: "http"
    targetPort: "http"
    nodePort: 31080
  - port: 443
    name: "https"
    targetPort: "https"
    nodePort: 31443
---
apiVersion: proxy.otoroshi.io/v1alpha1
kind: Certificate
metadata:
  name: otoroshi-service-certificate
spec:
  description: certificate for otoroshi-service
  autoRenew: true
  csr:
    issuer: CN=Otoroshi Root
    hosts:
    - otoroshi-service
    - otoroshi-service.${namespace}.svc.cluster.local
    - otoroshi-api-service.${namespace}.svc.cluster.local
    - otoroshi.${domain}
    - otoroshi-api.${domain}
    - privateapps.${domain}
    key:
      algo: rsa
      size: 2048
    subject: uid=otoroshi-service-cert, O=Otoroshi
    client: false
    ca: false
    duration: 31536000000
    signatureAlg: SHA256WithRSAEncryption
    digestAlg: SHA-256

haproxy.example

frontend front_nodes_http
    bind *:80
    mode tcp
    default_backend back_http_nodes
    timeout client 1m

frontend front_nodes_https
    bind *:443
    mode tcp
    default_backend back_https_nodes
    timeout client 1m

backend back_http_nodes
    mode tcp
    balance roundrobin
    server kubernetes-node1 10.2.2.40:31080
    server kubernetes-node2 10.2.2.41:31080
    server kubernetes-node3 10.2.2.42:31080
    timeout connect 10s
    timeout server 1m

backend back_https_nodes
    mode tcp
    balance roundrobin
    server kubernetes-node1 10.2.2.40:31443
    server kubernetes-node2 10.2.2.41:31443
    server kubernetes-node3 10.2.2.42:31443
    timeout connect 10s
    timeout server 1m

nginx.example

stream {

  upstream back_http_nodes {
    zone back_http_nodes 64k;
    server 10.2.2.40:31080 max_fails=1;
    server 10.2.2.41:31080 max_fails=1;
    server 10.2.2.42:31080 max_fails=1;
  }

  upstream back_https_nodes {
    zone back_https_nodes 64k;
    server 10.2.2.40:31443 max_fails=1;
    server 10.2.2.41:31443 max_fails=1;
    server 10.2.2.42:31443 max_fails=1;
  }

  server {
    listen 80;
    proxy_pass back_http_nodes;
    health_check;
  }

  server {
    listen 443;
    proxy_pass back_https_nodes;
    health_check;
  }

}

dns.example

# if your loadbalancer is at ip address 10.2.2.50

otoroshi.your.otoroshi.domain IN A 10.2.2.50
otoroshi-api.your.otoroshi.domain IN A 10.2.2.50
privateapps.your.otoroshi.domain IN A 10.2.2.50
api1.another.domain IN A 10.2.2.50
api2.another.domain IN A 10.2.2.50
*.api.the.api.domain IN A 10.2.2.50

Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster using a DaemonSet

Here we have one Otoroshi instance on each Kubernetes node (each node carrying the otoroshi-kind: instance label) with Redis persistence. The Otoroshi instances are exposed as hostPort so you'll have to add a load balancer in front of your Kubernetes nodes to route external (TCP) traffic to your Otoroshi instances. You have to setup your DNS to bind the Otoroshi domain names to your load balancer (see the example below).
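
Since the DaemonSet below selects nodes through the otoroshi-kind: instance label, the target nodes have to be labeled beforehand, for example (the node name is a placeholder):

kubectl label node kubernetes-node1 otoroshi-kind=instance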

deployment.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otoroshi-deployment
spec:
  selector:
    matchLabels:
      run: otoroshi-deployment
  template:
    metadata:
      labels:
        run: otoroshi-deployment
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: otoroshi-kind
                operator: In
                values:
                - instance
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      restartPolicy: Always
      hostNetwork: false
      containers:
      - image: maif/otoroshi:1.5.0-beta.6-jdk11
        imagePullPolicy: IfNotPresent
        name: otoroshi
        ports:
        - containerPort: 8080
          hostPort: 41080
          name: "http"
          protocol: TCP
        - containerPort: 8443
          hostPort: 41443
          name: "https"
          protocol: TCP
        env:
        - name: APP_STORAGE_ROOT
          value: otoroshi
        - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
          value: ${password}
        - name: APP_DOMAIN
          value: ${domain}
        - name: APP_STORAGE
          value: lettuce
        - name: REDIS_URL
          value: ${redisUrl}
          # value: redis://redis-leader-service:6379/0
        - name: ADMIN_API_CLIENT_ID
          value: ${clientId}
        - name: ADMIN_API_CLIENT_SECRET
          value: ${clientSecret}
        - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
          value: otoroshi-api-service.${namespace}.svc.cluster.local
        - name: OTOROSHI_SECRET
          value: ${otoroshiSecret}
        - name: HEALTH_LIMIT
          value: "5000"
        - name: SSL_OUTSIDE_CLIENT_AUTH
          value: Want
        - name: HTTPS_WANT_CLIENT_AUTH
          value: "true"
        - name: OTOROSHI_INITIAL_CUSTOMIZATION
          value: >
            {
              "config":{
                "tlsSettings": {
                  "defaultDomain": "www.${domain}",
                  "randomIfNotFound": false
                },
                "scripts":{
                  "enabled":true,
                  "sinkRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
                  ],
                  "sinkConfig": {},
                  "jobRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
                  ],
                  "jobConfig":{
                    "KubernetesConfig": {
                      "trust": false,
                      "namespaces": [
                        "*"
                      ],
                      "labels": {},
                      "namespacesLabels": {},
                      "ingressClasses": [
                        "otoroshi"
                      ],
                      "defaultGroup": "default",
                      "ingresses": false,
                      "crds": true,
                      "coreDnsIntegration": false,
                      "coreDnsIntegrationDryRun": false,
                      "kubeLeader": false,
                      "restartDependantDeployments": false,
                      "watch": false,
                      "syncDaikokuApikeysOnly": false,
                      "kubeSystemNamespace": "kube-system",
                      "coreDnsConfigMapName": "coredns",
                      "coreDnsDeploymentName": "coredns",
                      "corednsPort": 53,
                      "otoroshiServiceName": "otoroshi-service",
                      "otoroshiNamespace": "${namespace}",
                      "clusterDomain": "cluster.local",
                      "syncIntervalSeconds": 60,
                      "coreDnsEnv": null,
                      "watchTimeoutSeconds": 60,
                      "watchGracePeriodSeconds": 5,
                      "mutatingWebhookName": "otoroshi-admission-webhook-injector",
                      "validatingWebhookName": "otoroshi-admission-webhook-validation",
                      "templates": {
                        "service-group": {},
                        "service-descriptor": {},
                        "apikeys": {},
                        "global-config": {},
                        "jwt-verifier": {},
                        "tcp-service": {},
                        "certificate": {},
                        "auth-module": {},
                        "script": {},
                        "organizations": {},
                        "teams": {},
                        "webhooks": {
                          "flags": {
                            "requestCert": true,
                            "originCheck": true,
                            "tokensCheck": true,
                            "displayEnv": false,
                            "tlsTrace": false
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
        - name: JAVA_OPTS
          value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          failureThreshold: 1
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        resources:
          # requests:
          #   cpu: "100m"
          #   memory: "50Mi"
          # limits:
          #   cpu: "4G"
          #   memory: "4Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-service
spec:
  selector:
    run: otoroshi-deployment
  ports:
  - port: 8080
    name: "http"
    targetPort: "http"
  - port: 8443
    name: "https"
    targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1alpha1
kind: Certificate
metadata:
  name: otoroshi-service-certificate
spec:
  description: certificate for otoroshi-service
  autoRenew: true
  csr:
    issuer: CN=Otoroshi Root
    hosts:
    - otoroshi-service
    - otoroshi-service.${namespace}.svc.cluster.local
    - otoroshi-api-service.${namespace}.svc.cluster.local
    - otoroshi.${domain}
    - otoroshi-api.${domain}
    - privateapps.${domain}
    key:
      algo: rsa
      size: 2048
    subject: uid=otoroshi-service-cert, O=Otoroshi
    client: false
    ca: false
    duration: 31536000000
    signatureAlg: SHA256WithRSAEncryption
    digestAlg: SHA-256

haproxy.example

frontend front_nodes_http
    bind *:80
    mode tcp
    default_backend back_http_nodes
    timeout client 1m

frontend front_nodes_https
    bind *:443
    mode tcp
    default_backend back_https_nodes
    timeout client 1m

backend back_http_nodes
    mode tcp
    balance roundrobin
    server kubernetes-node1 10.2.2.40:41080
    server kubernetes-node2 10.2.2.41:41080
    server kubernetes-node3 10.2.2.42:41080
    timeout connect 10s
    timeout server 1m

backend back_https_nodes
    mode tcp
    balance roundrobin
    server kubernetes-node1 10.2.2.40:41443
    server kubernetes-node2 10.2.2.41:41443
    server kubernetes-node3 10.2.2.42:41443
    timeout connect 10s
    timeout server 1m

nginx.example

stream {

  upstream back_http_nodes {
    zone back_http_nodes 64k;
    server 10.2.2.40:41080 max_fails=1;
    server 10.2.2.41:41080 max_fails=1;
    server 10.2.2.42:41080 max_fails=1;
  }

  upstream back_https_nodes {
    zone back_https_nodes 64k;
    server 10.2.2.40:41443 max_fails=1;
    server 10.2.2.41:41443 max_fails=1;
    server 10.2.2.42:41443 max_fails=1;
  }

  server {
    listen 80;
    proxy_pass back_http_nodes;
    health_check;
  }

  server {
    listen 443;
    proxy_pass back_https_nodes;
    health_check;
  }

}

dns.example

# if your loadbalancer is at ip address 10.2.2.50

otoroshi.your.otoroshi.domain IN A 10.2.2.50
otoroshi-api.your.otoroshi.domain IN A 10.2.2.50
privateapps.your.otoroshi.domain IN A 10.2.2.50
api1.another.domain IN A 10.2.2.50
api2.another.domain IN A 10.2.2.50
*.api.the.api.domain IN A 10.2.2.50

Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster

Here we have 2 replicas of an Otoroshi leader connected to a Redis instance and 2 replicas of an Otoroshi worker connected to the leader. We use services of type LoadBalancer to expose the Otoroshi leader/worker to the rest of the world. You have to setup your DNS to bind the Otoroshi domain names to the LoadBalancer external CNAME (see the example below).

deployment.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otoroshi-leader-deployment
spec:
  selector:
    matchLabels:
      run: otoroshi-leader-deployment
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        run: otoroshi-leader-deployment
    spec:
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      restartPolicy: Always
      containers:
      - image: maif/otoroshi:1.5.0-beta.6-jdk11
        imagePullPolicy: IfNotPresent
        name: otoroshi-leader
        ports:
        - containerPort: 8080
          name: "http"
          protocol: TCP
        - containerPort: 8443
          name: "https"
          protocol: TCP
        env:
        - name: APP_STORAGE_ROOT
          value: otoroshi
        - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
          value: ${password}
        - name: APP_DOMAIN
          value: ${domain}
        - name: APP_STORAGE
          value: lettuce
        - name: REDIS_URL
          value: ${redisUrl}
          # value: redis://redis-leader-service:6379/0
        - name: CLUSTER_MODE
          value: Leader
        - name: CLUSTER_AUTO_UPDATE_STATE
          value: 'true'
        - name: CLUSTER_MTLS_ENABLED
          value: 'true'
        - name: CLUSTER_MTLS_LOOSE
          value: 'true'
        - name: CLUSTER_MTLS_TRUST_ALL
          value: 'true'
        - name: CLUSTER_LEADER_URL
          value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
        - name: CLUSTER_LEADER_HOST
          value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
        - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
          value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
        - name: ADMIN_API_CLIENT_ID
          value: ${clientId}
        - name: CLUSTER_LEADER_CLIENT_ID
          value: ${clientId}
        - name: ADMIN_API_CLIENT_SECRET
          value: ${clientSecret}
        - name: CLUSTER_LEADER_CLIENT_SECRET
          value: ${clientSecret}
        - name: OTOROSHI_SECRET
          value: ${otoroshiSecret}
        - name: HEALTH_LIMIT
          value: "5000"
        - name: SSL_OUTSIDE_CLIENT_AUTH
          value: Want
        - name: HTTPS_WANT_CLIENT_AUTH
          value: "true"
        - name: OTOROSHI_INITIAL_CUSTOMIZATION
          value: >
            {
              "config":{
                "tlsSettings": {
                  "defaultDomain": "www.${domain}",
                  "randomIfNotFound": false
                },
                "scripts":{
                  "enabled":true,
                  "sinkRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
                  ],
                  "sinkConfig": {},
                  "jobRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
                  ],
                  "jobConfig":{
                    "KubernetesConfig": {
                      "trust": false,
                      "namespaces": [
                        "*"
                      ],
                      "labels": {},
                      "namespacesLabels": {},
                      "ingressClasses": [
                        "otoroshi"
                      ],
                      "defaultGroup": "default",
                      "ingresses": false,
                      "crds": true,
                      "coreDnsIntegration": false,
                      "coreDnsIntegrationDryRun": false,
                      "kubeLeader": false,
                      "restartDependantDeployments": false,
                      "watch": false,
                      "syncDaikokuApikeysOnly": false,
                      "kubeSystemNamespace": "kube-system",
                      "coreDnsConfigMapName": "coredns",
                      "coreDnsDeploymentName": "coredns",
                      "corednsPort": 53,
                      "otoroshiServiceName": "otoroshi-worker-service",
                      "otoroshiNamespace": "${namespace}",
                      "clusterDomain": "cluster.local",
                      "syncIntervalSeconds": 60,
                      "coreDnsEnv": null,
                      "watchTimeoutSeconds": 60,
                      "watchGracePeriodSeconds": 5,
                      "mutatingWebhookName": "otoroshi-admission-webhook-injector",
                      "validatingWebhookName": "otoroshi-admission-webhook-validation",
                      "templates": {
                        "service-group": {},
                        "service-descriptor": {},
                        "apikeys": {},
                        "global-config": {},
                        "jwt-verifier": {},
                        "tcp-service": {},
                        "certificate": {},
                        "auth-module": {},
                        "script": {},
                        "organizations": {},
                        "teams": {},
                        "webhooks": {
                          "flags": {
                            "requestCert": true,
                            "originCheck": true,
                            "tokensCheck": true,
                            "displayEnv": false,
                            "tlsTrace": false
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
        - name: JAVA_OPTS
          value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          failureThreshold: 1
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otoroshi-worker-deployment
spec:
  selector:
    matchLabels:
      run: otoroshi-worker-deployment
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        run: otoroshi-worker-deployment
    spec:
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      restartPolicy: Always
      containers:
      - image: maif/otoroshi:1.5.0-beta.6-jdk11
        imagePullPolicy: IfNotPresent
        name: otoroshi-worker
        ports:
        - containerPort: 8080
          name: "http"
          protocol: TCP
        - containerPort: 8443
          name: "https"
          protocol: TCP
        env:
        - name: APP_STORAGE_ROOT
          value: otoroshi
        - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
          value: ${password}
        - name: APP_DOMAIN
          value: ${domain}
        - name: CLUSTER_MODE
          value: Worker
        - name: CLUSTER_AUTO_UPDATE_STATE
          value: 'true'
        - name: CLUSTER_MTLS_ENABLED
          value: 'true'
        - name: CLUSTER_MTLS_LOOSE
          value: 'true'
        - name: CLUSTER_MTLS_TRUST_ALL
          value: 'true'
        - name: CLUSTER_LEADER_URL
          value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
        - name: CLUSTER_LEADER_HOST
          value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
        - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
          value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
        - name: ADMIN_API_CLIENT_ID
          value: ${clientId}
        - name: CLUSTER_LEADER_CLIENT_ID
          value: ${clientId}
        - name: ADMIN_API_CLIENT_SECRET
          value: ${clientSecret}
        - name: CLUSTER_LEADER_CLIENT_SECRET
          value: ${clientSecret}
        - name: HEALTH_LIMIT
          value: "5000"
        - name: SSL_OUTSIDE_CLIENT_AUTH
          value: Want
        - name: HTTPS_WANT_CLIENT_AUTH
          value: "true"
        - name: JAVA_OPTS
          value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          failureThreshold: 1
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-leader-api-service
spec:
  selector:
    run: otoroshi-leader-deployment
  ports:
  - port: 8080
    name: "http"
    targetPort: "http"
  - port: 8443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-leader-service
spec:
  selector:
    run: otoroshi-leader-deployment
  ports:
  - port: 8080
    name: "http"
    targetPort: "http"
  - port: 8443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-worker-service
spec:
  selector:
    run: otoroshi-worker-deployment
  ports:
  - port: 8080
    name: "http"
    targetPort: "http"
  - port: 8443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-leader-external-service
spec:
  type: LoadBalancer
  selector:
    run: otoroshi-leader-deployment
  ports:
  - port: 80
    name: "http"
    targetPort: "http"
  - port: 443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-worker-external-service
spec:
  type: LoadBalancer
  selector:
    run: otoroshi-worker-deployment
  ports:
  - port: 80
    name: "http"
    targetPort: "http"
  - port: 443
    name: "https"
    targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1alpha1
kind: Certificate
metadata:
  name: otoroshi-service-certificate
spec:
  description: certificate for otoroshi-service
  autoRenew: true
  csr:
    issuer: CN=Otoroshi Root
    hosts:
    - otoroshi-service
    - otoroshi-service.${namespace}.svc.cluster.local
    - otoroshi-api-service.${namespace}.svc.cluster.local
    - otoroshi.${domain}
    - otoroshi-api.${domain}
    - privateapps.${domain}
    key:
      algo: rsa
      size: 2048
    subject: uid=otoroshi-service-cert, O=Otoroshi
    client: false
    ca: false
    duration: 31536000000
    signatureAlg: SHA256WithRSAEncryption
    digestAlg: SHA-256

dns.example

otoroshi.your.otoroshi.domain IN CNAME generated.cname.for.leader.of.your.cluster.loadbalancer
otoroshi-api.your.otoroshi.domain IN CNAME generated.cname.for.leader.of.your.cluster.loadbalancer
privateapps.your.otoroshi.domain IN CNAME generated.cname.for.leader.of.your.cluster.loadbalancer

api1.another.domain IN CNAME generated.cname.for.worker.of.your.cluster.loadbalancer
api2.another.domain IN CNAME generated.cname.for.worker.of.your.cluster.loadbalancer
*.api.the.api.domain IN CNAME generated.cname.for.worker.of.your.cluster.loadbalancer
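
Workers only sync their state from the leader, so they can be scaled independently with the usual Deployment mechanisms, for example:

kubectl scale deployment otoroshi-worker-deployment --replicas=4 -n otoroshi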

Deploy an otoroshi cluster on a bare metal kubernetes cluster

Here we have 2 replicas of the Otoroshi leader connected to the same Redis instance and 2 replicas of the Otoroshi worker. The Otoroshi instances are exposed as nodePort so you'll have to add a load balancer in front of your Kubernetes nodes to route external (TCP) traffic to your Otoroshi instances. You have to setup your DNS to bind the Otoroshi domain names to your load balancer (see the example below).

deployment.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otoroshi-leader-deployment
spec:
  selector:
    matchLabels:
      run: otoroshi-leader-deployment
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        run: otoroshi-leader-deployment
    spec:
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      restartPolicy: Always
      containers:
      - image: maif/otoroshi:1.5.0-beta.6-jdk11
        imagePullPolicy: IfNotPresent
        name: otoroshi-leader
        ports:
        - containerPort: 8080
          name: "http"
          protocol: TCP
        - containerPort: 8443
          name: "https"
          protocol: TCP
        env:
        - name: APP_STORAGE_ROOT
          value: otoroshi
        - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
          value: ${password}
        - name: APP_DOMAIN
          value: ${domain}
        - name: APP_STORAGE
          value: lettuce
        - name: REDIS_URL
          value: ${redisUrl}
          # value: redis://redis-leader-service:6379/0
        - name: CLUSTER_MODE
          value: Leader
        - name: CLUSTER_AUTO_UPDATE_STATE
          value: 'true'
        - name: CLUSTER_MTLS_ENABLED
          value: 'true'
        - name: CLUSTER_MTLS_LOOSE
          value: 'true'
        - name: CLUSTER_MTLS_TRUST_ALL
          value: 'true'
        - name: CLUSTER_LEADER_URL
          value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
        - name: CLUSTER_LEADER_HOST
          value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
        - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
          value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
        - name: ADMIN_API_CLIENT_ID
          value: ${clientId}
        - name: CLUSTER_LEADER_CLIENT_ID
          value: ${clientId}
        - name: ADMIN_API_CLIENT_SECRET
          value: ${clientSecret}
        - name: CLUSTER_LEADER_CLIENT_SECRET
          value: ${clientSecret}
        - name: OTOROSHI_SECRET
          value: ${otoroshiSecret}
        - name: HEALTH_LIMIT
          value: "5000"
        - name: SSL_OUTSIDE_CLIENT_AUTH
          value: Want
        - name: HTTPS_WANT_CLIENT_AUTH
          value: "true"
        - name: OTOROSHI_INITIAL_CUSTOMIZATION
          value: >
            {
              "config":{
                "tlsSettings": {
                  "defaultDomain": "www.${domain}",
                  "randomIfNotFound": false
                },
                "scripts":{
                  "enabled":true,
                  "sinkRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
                  ],
                  "sinkConfig": {},
                  "jobRefs":[
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
                  ],
                  "jobConfig":{
                    "KubernetesConfig": {
                      "trust": false,
                      "namespaces": [
                        "*"
                      ],
                      "labels": {},
                      "namespacesLabels": {},
                      "ingressClasses": [
                        "otoroshi"
                      ],
                      "defaultGroup": "default",
                      "ingresses": false,
                      "crds": true,
                      "coreDnsIntegration": false,
                      "coreDnsIntegrationDryRun": false,
                      "kubeLeader": false,
                      "restartDependantDeployments": false,
                      "watch": false,
                      "syncDaikokuApikeysOnly": false,
                      "kubeSystemNamespace": "kube-system",
                      "coreDnsConfigMapName": "coredns",
                      "coreDnsDeploymentName": "coredns",
                      "corednsPort": 53,
                      "otoroshiServiceName": "otoroshi-worker-service",
                      "otoroshiNamespace": "${namespace}",
                      "clusterDomain": "cluster.local",
                      "syncIntervalSeconds": 60,
                      "coreDnsEnv": null,
                      "watchTimeoutSeconds": 60,
                      "watchGracePeriodSeconds": 5,
                      "mutatingWebhookName": "otoroshi-admission-webhook-injector",
                      "validatingWebhookName": "otoroshi-admission-webhook-validation",
                      "templates": {
                        "service-group": {},
                        "service-descriptor": {},
                        "apikeys": {},
                        "global-config": {},
                        "jwt-verifier": {},
                        "tcp-service": {},
                        "certificate": {},
                        "auth-module": {},
                        "script": {},
                        "organizations": {},
                        "teams": {},
                        "webhooks": {
                          "flags": {
                            "requestCert": true,
                            "originCheck": true,
                            "tokensCheck": true,
                            "displayEnv": false,
                            "tlsTrace": false
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
        - name: JAVA_OPTS
          value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          failureThreshold: 1
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otoroshi-worker-deployment
spec:
  selector:
    matchLabels:
      run: otoroshi-worker-deployment
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        run: otoroshi-worker-deployment
    spec:
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      restartPolicy: Always
      containers:
      - image: maif/otoroshi:1.5.0-beta.6-jdk11
        imagePullPolicy: IfNotPresent
        name: otoroshi-worker
        ports:
        - containerPort: 8080
          name: "http"
          protocol: TCP
        - containerPort: 8443
          name: "https"
          protocol: TCP
        env:
        - name: APP_STORAGE_ROOT
          value: otoroshi
        - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
          value: ${password}
        - name: APP_DOMAIN
          value: ${domain}
        - name: CLUSTER_MODE
          value: Worker
        - name: CLUSTER_AUTO_UPDATE_STATE
          value: 'true'
        - name: CLUSTER_MTLS_ENABLED
          value: 'true'
        - name: CLUSTER_MTLS_LOOSE
          value: 'true'
        - name: CLUSTER_MTLS_TRUST_ALL
          value: 'true'
        - name: CLUSTER_LEADER_URL
          value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
        - name: CLUSTER_LEADER_HOST
          value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
        - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
          value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
        - name: ADMIN_API_CLIENT_ID
          value: ${clientId}
        - name: CLUSTER_LEADER_CLIENT_ID
          value: ${clientId}
        - name: ADMIN_API_CLIENT_SECRET
          value: ${clientSecret}
        - name: CLUSTER_LEADER_CLIENT_SECRET
          value: ${clientSecret}
        - name: HEALTH_LIMIT
          value: "5000"
        - name: SSL_OUTSIDE_CLIENT_AUTH
          value: Want
        - name: HTTPS_WANT_CLIENT_AUTH
          value: "true"
        - name: JAVA_OPTS
          value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          failureThreshold: 1
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-leader-api-service
spec:
  selector:
    run: otoroshi-leader-deployment
  ports:
  - port: 8080
    name: "http"
    targetPort: "http"
  - port: 8443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-leader-service
spec:
  type: NodePort
  selector:
    run: otoroshi-leader-deployment
  ports:
  - port: 8080
    nodePort: 31080
    name: "http"
    targetPort: "http"
  - port: 8443
    nodePort: 31443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-worker-service
spec:
  type: NodePort
  selector:
    run: otoroshi-worker-deployment
  ports:
  - port: 8080
    nodePort: 32080
    name: "http"
    targetPort: "http"
  - port: 8443
    nodePort: 32443
    name: "https"
    targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1alpha1
kind: Certificate
metadata:
  name: otoroshi-service-certificate
spec:
  description: certificate for otoroshi-service
  autoRenew: true
  csr:
    issuer: CN=Otoroshi Root
    hosts:
    - otoroshi-service
    - otoroshi-service.${namespace}.svc.cluster.local
    - otoroshi-api-service.${namespace}.svc.cluster.local
    - otoroshi.${domain}
    - otoroshi-api.${domain}
    - privateapps.${domain}
    key:
      algo: rsa
      size: 2048
    subject: uid=otoroshi-service-cert, O=Otoroshi
    client: false
    ca: false
    duration: 31536000000
    signatureAlg: SHA256WithRSAEncryption
    digestAlg: SHA-256

nginx.example

stream {

  upstream worker_http_nodes {
    zone worker_http_nodes 64k;
    server 10.2.2.40:32080 max_fails=1;
    server 10.2.2.41:32080 max_fails=1;
    server 10.2.2.42:32080 max_fails=1;
  }

  upstream worker_https_nodes {
    zone worker_https_nodes 64k;
    server 10.2.2.40:32443 max_fails=1;
    server 10.2.2.41:32443 max_fails=1;
    server 10.2.2.42:32443 max_fails=1;
  }

  upstream leader_http_nodes {
    zone leader_http_nodes 64k;
    server 10.2.2.40:31080 max_fails=1;
    server 10.2.2.41:31080 max_fails=1;
    server 10.2.2.42:31080 max_fails=1;
  }

  upstream leader_https_nodes {
    zone leader_https_nodes 64k;
    server 10.2.2.40:31443 max_fails=1;
    server 10.2.2.41:31443 max_fails=1;
    server 10.2.2.42:31443 max_fails=1;
  }

  server {
    listen 80;
    proxy_pass worker_http_nodes;
    health_check;
  }

  server {
    listen 443;
    proxy_pass worker_https_nodes;
    health_check;
  }

  server {
    listen 81;
    proxy_pass leader_http_nodes;
    health_check;
  }

  server {
    listen 444;
    proxy_pass leader_https_nodes;
    health_check;
  }

}

dns.example

# if your loadbalancer is at ip address 10.2.2.50

otoroshi.your.otoroshi.domain IN A 10.2.2.50
otoroshi-api.your.otoroshi.domain IN A 10.2.2.50
privateapps.your.otoroshi.domain IN A 10.2.2.50
api1.another.domain IN A 10.2.2.50
api2.another.domain IN A 10.2.2.50
*.api.the.api.domain IN A 10.2.2.50

Deploy an otoroshi cluster on a bare metal kubernetes cluster using DaemonSet

Here we have one Otoroshi leader instance on each Kubernetes node carrying the otoroshi-kind: leader label, all connected to the same Redis instance, and one Otoroshi worker instance on each Kubernetes node carrying the otoroshi-kind: worker label. The Otoroshi instances are exposed as hostPort so you'll have to add a load balancer in front of your Kubernetes nodes to route external (TCP) traffic to your Otoroshi instances. You have to setup your DNS to bind the Otoroshi domain names to your load balancer (see the example below).
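
Both DaemonSets below select nodes through labels, so the nodes must be labeled as leader or worker beforehand, for example (node names are placeholders):

kubectl label node kubernetes-node1 otoroshi-kind=leader
kubectl label node kubernetes-node2 otoroshi-kind=worker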

deployment.yaml

---
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
- name: otoroshi-leader-deployment
-spec:
- selector:
- matchLabels:
- run: otoroshi-leader-deployment
- template:
- metadata:
- labels:
- run: otoroshi-leader-deployment
- strategy:
- type: RollingUpdate
- rollingUpdate:
- maxUnavailable: 1
- maxSurge: 1
- spec:
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: otoroshi-kind
- operator: In
- values:
- - leader
- tolerations:
- - key: node-role.kubernetes.io/master
- effect: NoSchedule
- serviceAccountName: otoroshi-admin-user
- terminationGracePeriodSeconds: 60
- hostNetwork: false
- restartPolicy: Always
- containers:
- - image: maif/otoroshi:1.5.0-beta.6-jdk11
- imagePullPolicy: IfNotPresent
- name: otoroshi-leader
- ports:
- - containerPort: 8080
- hostPort: 41080
- name: "http"
- protocol: TCP
- - containerPort: 8443
- hostPort: 41443
- name: "https"
- protocol: TCP
- env:
- - name: APP_STORAGE_ROOT
- value: otoroshi
- - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
- value: ${password}
- - name: APP_DOMAIN
- value: ${domain}
- - name: APP_STORAGE
- value: lettuce
- - name: REDIS_URL
- value: ${redisUrl}
- # value: redis://redis-leader-service:6379/0
- - name: CLUSTER_MODE
- value: Leader
- - name: CLUSTER_AUTO_UPDATE_STATE
- value: 'true'
- - name: CLUSTER_MTLS_ENABLED
- value: 'true'
- - name: CLUSTER_MTLS_LOOSE
- value: 'true'
- - name: CLUSTER_MTLS_TRUST_ALL
- value: 'true'
- - name: CLUSTER_LEADER_URL
- value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
- - name: CLUSTER_LEADER_HOST
- value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
- value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- - name: ADMIN_API_CLIENT_ID
- value: ${clientId}
- - name: CLUSTER_LEADER_CLIENT_ID
- value: ${clientId}
- - name: ADMIN_API_CLIENT_SECRET
- value: ${clientSecret}
- - name: CLUSTER_LEADER_CLIENT_SECRET
- value: ${clientSecret}
- - name: OTOROSHI_SECRET
- value: ${otoroshiSecret}
- - name: HEALTH_LIMIT
- value: "5000"
- - name: SSL_OUTSIDE_CLIENT_AUTH
- value: Want
- - name: HTTPS_WANT_CLIENT_AUTH
- value: "true"
- - name: OTOROSHI_INITIAL_CUSTOMIZATION
- value: >
- {
- "config":{
- "tlsSettings": {
- "defaultDomain": "www.${domain}",
- "randomIfNotFound": false
- },
- "scripts":{
- "enabled":true,
- "sinkRefs":[
- "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
- "cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
- ],
- "sinkConfig": {},
- "jobRefs":[
- "cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
- ],
- "jobConfig":{
- "KubernetesConfig": {
- "trust": false,
- "namespaces": [
- "*"
- ],
- "labels": {},
- "namespacesLabels": {},
- "ingressClasses": [
- "otoroshi"
- ],
- "defaultGroup": "default",
- "ingresses": false,
- "crds": true,
- "coreDnsIntegration": false,
- "coreDnsIntegrationDryRun": false,
- "kubeLeader": false,
- "restartDependantDeployments": false,
- "watch": false,
- "syncDaikokuApikeysOnly": false,
- "kubeSystemNamespace": "kube-system",
- "coreDnsConfigMapName": "coredns",
- "coreDnsDeploymentName": "coredns",
- "corednsPort": 53,
- "otoroshiServiceName": "otoroshi-worker-service",
- "otoroshiNamespace": "${namespace}",
- "clusterDomain": "cluster.local",
- "syncIntervalSeconds": 60,
- "coreDnsEnv": null,
- "watchTimeoutSeconds": 60,
- "watchGracePeriodSeconds": 5,
- "mutatingWebhookName": "otoroshi-admission-webhook-injector",
- "validatingWebhookName": "otoroshi-admission-webhook-validation",
- "templates": {
- "service-group": {},
- "service-descriptor": {},
- "apikeys": {},
- "global-config": {},
- "jwt-verifier": {},
- "tcp-service": {},
- "certificate": {},
- "auth-module": {},
- "script": {},
- "organizations": {},
- "teams": {},
- "webhooks": {
- "flags": {
- "requestCert": true,
- "originCheck": true,
- "tokensCheck": true,
- "displayEnv": false,
- "tlsTrace": false
- }
- }
- }
- }
- }
- }
- }
- }
- - name: JAVA_OPTS
- value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
- readinessProbe:
- httpGet:
- path: /ready
- port: 8080
- failureThreshold: 1
- initialDelaySeconds: 60
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 2
- livenessProbe:
- httpGet:
- path: /live
- port: 8080
- failureThreshold: 3
- initialDelaySeconds: 60
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 2
----
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
- name: otoroshi-worker-deployment
-spec:
- selector:
-   matchLabels:
-     run: otoroshi-worker-deployment
- updateStrategy:
-   type: RollingUpdate
-   rollingUpdate:
-     maxUnavailable: 1
- template:
-   metadata:
-     labels:
-       run: otoroshi-worker-deployment
-   spec:
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: otoroshi-kind
- operator: In
- values:
- - worker
- tolerations:
- - key: node-role.kubernetes.io/master
- effect: NoSchedule
- serviceAccountName: otoroshi-admin-user
- terminationGracePeriodSeconds: 60
- hostNetwork: false
- restartPolicy: Always
- containers:
-     - image: maif/otoroshi:1.5.0-beta.6-jdk11
- imagePullPolicy: IfNotPresent
- name: otoroshi-worker
- ports:
- - containerPort: 8080
- hostPort: 42080
- name: "http"
- protocol: TCP
- - containerPort: 8443
- hostPort: 42443
- name: "https"
- protocol: TCP
- env:
- - name: APP_STORAGE_ROOT
- value: otoroshi
- - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
- value: ${password}
- - name: APP_DOMAIN
- value: ${domain}
- - name: CLUSTER_MODE
- value: Worker
- - name: CLUSTER_AUTO_UPDATE_STATE
- value: 'true'
- - name: CLUSTER_MTLS_ENABLED
- value: 'true'
- - name: CLUSTER_MTLS_LOOSE
- value: 'true'
- - name: CLUSTER_MTLS_TRUST_ALL
- value: 'true'
- - name: CLUSTER_LEADER_URL
- value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
- - name: CLUSTER_LEADER_HOST
- value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
- value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- - name: ADMIN_API_CLIENT_ID
- value: ${clientId}
- - name: CLUSTER_LEADER_CLIENT_ID
- value: ${clientId}
- - name: ADMIN_API_CLIENT_SECRET
- value: ${clientSecret}
- - name: CLUSTER_LEADER_CLIENT_SECRET
- value: ${clientSecret}
- - name: HEALTH_LIMIT
- value: "5000"
- - name: SSL_OUTSIDE_CLIENT_AUTH
- value: Want
- - name: HTTPS_WANT_CLIENT_AUTH
- value: "true"
- - name: JAVA_OPTS
- value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
- readinessProbe:
- httpGet:
- path: /ready
- port: 8080
- failureThreshold: 1
- initialDelaySeconds: 60
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 2
- livenessProbe:
- httpGet:
- path: /live
- port: 8080
- failureThreshold: 3
- initialDelaySeconds: 60
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 2
----
-apiVersion: v1
-kind: Service
-metadata:
- name: otoroshi-leader-api-service
-spec:
- selector:
- run: otoroshi-leader-deployment
- ports:
- - port: 8080
- name: "http"
- targetPort: "http"
- - port: 8443
- name: "https"
- targetPort: "https"
----
-apiVersion: v1
-kind: Service
-metadata:
- name: otoroshi-leader-service
-spec:
- selector:
- run: otoroshi-leader-deployment
- ports:
- - port: 8080
- name: "http"
- targetPort: "http"
- - port: 8443
- name: "https"
- targetPort: "https"
----
-apiVersion: v1
-kind: Service
-metadata:
- name: otoroshi-worker-service
-spec:
- selector:
- run: otoroshi-worker-deployment
- ports:
- - port: 8080
- name: "http"
- targetPort: "http"
- - port: 8443
- name: "https"
- targetPort: "https"
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: otoroshi-service-certificate
-spec:
- description: certificate for otoroshi-service
- autoRenew: true
- csr:
- issuer: CN=Otoroshi Root
- hosts:
- - otoroshi-service
- - otoroshi-service.${namespace}.svc.cluster.local
- - otoroshi-api-service.${namespace}.svc.cluster.local
- - otoroshi.${domain}
- - otoroshi-api.${domain}
- - privateapps.${domain}
- key:
- algo: rsa
- size: 2048
- subject: uid=otoroshi-service-cert, O=Otoroshi
- client: false
- ca: false
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
- - nginx.example
-
stream {
-
- upstream worker_http_nodes {
- zone worker_http_nodes 64k;
- server 10.2.2.40:42080 max_fails=1;
- server 10.2.2.41:42080 max_fails=1;
- server 10.2.2.42:42080 max_fails=1;
- }
-
- upstream worker_https_nodes {
- zone worker_https_nodes 64k;
- server 10.2.2.40:42443 max_fails=1;
- server 10.2.2.41:42443 max_fails=1;
- server 10.2.2.42:42443 max_fails=1;
- }
-
- upstream leader_http_nodes {
- zone leader_http_nodes 64k;
- server 10.2.2.40:41080 max_fails=1;
- server 10.2.2.41:41080 max_fails=1;
- server 10.2.2.42:41080 max_fails=1;
- }
-
- upstream leader_https_nodes {
- zone leader_https_nodes 64k;
- server 10.2.2.40:41443 max_fails=1;
- server 10.2.2.41:41443 max_fails=1;
- server 10.2.2.42:41443 max_fails=1;
- }
-
- server {
- listen 80;
- proxy_pass worker_http_nodes;
- health_check;
- }
-
- server {
- listen 443;
- proxy_pass worker_https_nodes;
- health_check;
- }
-
- server {
- listen 81;
- proxy_pass leader_http_nodes;
- health_check;
- }
-
- server {
- listen 444;
- proxy_pass leader_https_nodes;
- health_check;
- }
-
-}
- - dns.example
-
# if your loadbalancer is at ip address 10.2.2.50
-
-otoroshi.your.otoroshi.domain IN A 10.2.2.50
-otoroshi-api.your.otoroshi.domain IN A 10.2.2.50
-privateapps.your.otoroshi.domain IN A 10.2.2.50
-api1.another.domain IN A 10.2.2.50
-api2.another.domain IN A 10.2.2.50
-*.api.the.api.domain IN A 10.2.2.50
-
-
Using Otoroshi as an Ingress Controller
-
If you want to use Otoroshi as an Ingress Controller, just go to the danger zone, and in Global scripts
add the job named Kubernetes Ingress Controller
.
-
Then add the following configuration for the job (with your own tweaks of course)
-
{
- "KubernetesConfig": {
- "enabled": true,
- "endpoint": "https://127.0.0.1:6443",
- "token": "eyJhbGciOiJSUzI....F463SrpOehQRaQ",
- "namespaces": [
- "*"
- ]
- }
-}
-
-
the configuration can have the following values
-
{
- "KubernetesConfig": {
- "endpoint": "https://127.0.0.1:6443", // the endpoint to talk to the kubernetes api, optional
- "token": "xxxx", // the bearer token to talk to the kubernetes api, optional
- "userPassword": "user:password", // the user password tuple to talk to the kubernetes api, optional
- "caCert": "/etc/ca.cert", // the ca cert file path to talk to the kubernetes api, optional
- "trust": false, // trust any cert to talk to the kubernetes api, optional
- "namespaces": ["*"], // the watched namespaces
- "labels": ["label"], // the watched namespaces
- "ingressClasses": ["otoroshi"], // the watched kubernetes.io/ingress.class annotations, can be *
- "defaultGroup": "default", // the group to put services in otoroshi
- "ingresses": true, // sync ingresses
- "crds": false, // sync crds
- "kubeLeader": false, // delegate leader election to kubernetes, to know where the sync job should run
- "restartDependantDeployments": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments
- "templates": { // template for entities that will be merged with kubernetes entities. can be "default" to use otoroshi default templates
- "service-group": {},
- "service-descriptor": {},
- "apikeys": {},
- "global-config": {},
- "jwt-verifier": {},
- "tcp-service": {},
- "certificate": {},
- "auth-module": {},
- "data-exporter": {},
- "script": {},
- "organization": {},
- "team": {},
- "data-exporter": {}
- }
- }
-}
-
-
If endpoint
is not defined, Otoroshi will try to get it from $KUBERNETES_SERVICE_HOST
and $KUBERNETES_SERVICE_PORT
. If token
is not defined, Otoroshi will try to get it from the file at /var/run/secrets/kubernetes.io/serviceaccount/token
. If caCert
is not defined, Otoroshi will try to get it from the file at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
. If $KUBECONFIG
is defined, endpoint
, token
and caCert
will be read from the current context of the file referenced by it.
-
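For instance, when otoroshi runs inside the cluster with the otoroshi-admin-user service account, a minimal configuration can rely entirely on those defaults. A sketch, with everything else left implicit:
-
{
-  "KubernetesConfig": {
-    "enabled": true,
-    "ingresses": true,
-    "namespaces": ["*"] // endpoint, token and caCert are resolved from the service account
-  }
-}
-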
Now you can deploy your first service ;)
-
Deploy an ingress route
-
now let’s say you want to deploy an http service and route it to the outside world through otoroshi
-
---
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: http-app-deployment
-spec:
- selector:
- matchLabels:
- run: http-app-deployment
- replicas: 1
- template:
- metadata:
- labels:
- run: http-app-deployment
- spec:
- containers:
- - image: kennethreitz/httpbin
- imagePullPolicy: IfNotPresent
-       name: http-app
- ports:
- - containerPort: 80
- name: "http"
----
-apiVersion: v1
-kind: Service
-metadata:
- name: http-app-service
-spec:
- ports:
- - port: 8080
- targetPort: http
- name: http
- selector:
- run: http-app-deployment
----
-apiVersion: networking.k8s.io/v1beta1
-kind: Ingress
-metadata:
- name: http-app-ingress
- annotations:
- kubernetes.io/ingress.class: otoroshi
-spec:
- tls:
- - hosts:
- - httpapp.foo.bar
- secretName: http-app-cert
- rules:
- - host: httpapp.foo.bar
- http:
- paths:
- - path: /
- backend:
- serviceName: http-app-service
- servicePort: 8080
-
-
once deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with
-
curl -X GET https://httpapp.foo.bar/get
-
-
Support for Ingress Classes
-
Since Kubernetes 1.18, you can use IngressClass
type of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest kind
. To use it, configure the Ingress job to match your controller
-
{
- "KubernetesConfig": {
- ...
- "ingressClasses": ["otoroshi.io/ingress-controller"],
- ...
- }
-}
-
-
then you have to deploy an IngressClass
to declare Otoroshi as an ingress controller
-
apiVersion: "networking.k8s.io/v1beta1"
-kind: "IngressClass"
-metadata:
- name: "otoroshi-ingress-controller"
-spec:
- controller: "otoroshi.io/ingress-controller"
- parameters:
-   apiGroup: "proxy.otoroshi.io"
- kind: "IngressParameters"
- name: "otoroshi-ingress-controller"
-
-
and use it in your Ingress
-
apiVersion: networking.k8s.io/v1beta1
-kind: Ingress
-metadata:
- name: http-app-ingress
-spec:
- ingressClassName: otoroshi-ingress-controller
- tls:
- - hosts:
- - httpapp.foo.bar
- secretName: http-app-cert
- rules:
- - host: httpapp.foo.bar
- http:
- paths:
- - path: /
- backend:
- serviceName: http-app-service
- servicePort: 8080
-
-
Use multiple ingress controllers
-
It is of course possible to use multiple ingress controllers at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation kubernetes.io/ingress.class
. By default, otoroshi reacts to the class otoroshi
, but you can make it the default ingress controller with the following config
-
{
- "KubernetesConfig": {
- ...
- "ingressClass": "*",
- ...
- }
-}
-
-
Supported annotations
-
if you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customisation, just go to the CRDs part. The following annotations are supported:
-
- ingress.otoroshi.io/groups
- ingress.otoroshi.io/group
- ingress.otoroshi.io/groupId
- ingress.otoroshi.io/name
- ingress.otoroshi.io/targetsLoadBalancing
- ingress.otoroshi.io/stripPath
- ingress.otoroshi.io/enabled
- ingress.otoroshi.io/userFacing
- ingress.otoroshi.io/privateApp
- ingress.otoroshi.io/forceHttps
- ingress.otoroshi.io/maintenanceMode
- ingress.otoroshi.io/buildMode
- ingress.otoroshi.io/strictlyPrivate
- ingress.otoroshi.io/sendOtoroshiHeadersBack
- ingress.otoroshi.io/readOnly
- ingress.otoroshi.io/xForwardedHeaders
- ingress.otoroshi.io/overrideHost
- ingress.otoroshi.io/allowHttp10
- ingress.otoroshi.io/logAnalyticsOnServer
- ingress.otoroshi.io/useAkkaHttpClient
- ingress.otoroshi.io/useNewWSClient
- ingress.otoroshi.io/tcpUdpTunneling
- ingress.otoroshi.io/detectApiKeySooner
- ingress.otoroshi.io/letsEncrypt
- ingress.otoroshi.io/publicPatterns
- ingress.otoroshi.io/privatePatterns
- ingress.otoroshi.io/additionalHeaders
- ingress.otoroshi.io/additionalHeadersOut
- ingress.otoroshi.io/missingOnlyHeadersIn
- ingress.otoroshi.io/missingOnlyHeadersOut
- ingress.otoroshi.io/removeHeadersIn
- ingress.otoroshi.io/removeHeadersOut
- ingress.otoroshi.io/headersVerification
- ingress.otoroshi.io/matchingHeaders
- ingress.otoroshi.io/ipFiltering.whitelist
- ingress.otoroshi.io/ipFiltering.blacklist
- ingress.otoroshi.io/api.exposeApi
- ingress.otoroshi.io/api.openApiDescriptorUrl
- ingress.otoroshi.io/healthCheck.enabled
- ingress.otoroshi.io/healthCheck.url
- ingress.otoroshi.io/jwtVerifier.ids
- ingress.otoroshi.io/jwtVerifier.enabled
- ingress.otoroshi.io/jwtVerifier.excludedPatterns
- ingress.otoroshi.io/authConfigRef
- ingress.otoroshi.io/redirection.enabled
- ingress.otoroshi.io/redirection.code
- ingress.otoroshi.io/redirection.to
- ingress.otoroshi.io/clientValidatorRef
- ingress.otoroshi.io/transformerRefs
- ingress.otoroshi.io/transformerConfig
- ingress.otoroshi.io/accessValidator.enabled
- ingress.otoroshi.io/accessValidator.excludedPatterns
- ingress.otoroshi.io/accessValidator.refs
- ingress.otoroshi.io/accessValidator.config
- ingress.otoroshi.io/preRouting.enabled
- ingress.otoroshi.io/preRouting.excludedPatterns
- ingress.otoroshi.io/preRouting.refs
- ingress.otoroshi.io/preRouting.config
- ingress.otoroshi.io/issueCert
- ingress.otoroshi.io/issueCertCA
- ingress.otoroshi.io/gzip.enabled
- ingress.otoroshi.io/gzip.excludedPatterns
- ingress.otoroshi.io/gzip.whiteList
- ingress.otoroshi.io/gzip.blackList
- ingress.otoroshi.io/gzip.bufferSize
- ingress.otoroshi.io/gzip.chunkedThreshold
- ingress.otoroshi.io/gzip.compressionLevel
- ingress.otoroshi.io/cors.enabled
- ingress.otoroshi.io/cors.allowOrigin
- ingress.otoroshi.io/cors.exposeHeaders
- ingress.otoroshi.io/cors.allowHeaders
- ingress.otoroshi.io/cors.allowMethods
- ingress.otoroshi.io/cors.excludedPatterns
- ingress.otoroshi.io/cors.maxAge
- ingress.otoroshi.io/cors.allowCredentials
- ingress.otoroshi.io/clientConfig.useCircuitBreaker
- ingress.otoroshi.io/clientConfig.retries
- ingress.otoroshi.io/clientConfig.maxErrors
- ingress.otoroshi.io/clientConfig.retryInitialDelay
- ingress.otoroshi.io/clientConfig.backoffFactor
- ingress.otoroshi.io/clientConfig.connectionTimeout
- ingress.otoroshi.io/clientConfig.idleTimeout
- ingress.otoroshi.io/clientConfig.callAndStreamTimeout
- ingress.otoroshi.io/clientConfig.callTimeout
- ingress.otoroshi.io/clientConfig.globalTimeout
- ingress.otoroshi.io/clientConfig.sampleInterval
- ingress.otoroshi.io/enforceSecureCommunication
- ingress.otoroshi.io/sendInfoToken
- ingress.otoroshi.io/sendStateChallenge
- ingress.otoroshi.io/secComHeaders.claimRequestName
- ingress.otoroshi.io/secComHeaders.stateRequestName
- ingress.otoroshi.io/secComHeaders.stateResponseName
- ingress.otoroshi.io/secComTtl
- ingress.otoroshi.io/secComVersion
- ingress.otoroshi.io/secComInfoTokenVersion
- ingress.otoroshi.io/secComExcludedPatterns
- ingress.otoroshi.io/secComSettings.size
- ingress.otoroshi.io/secComSettings.secret
- ingress.otoroshi.io/secComSettings.base64
- ingress.otoroshi.io/secComUseSameAlgo
- ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size
- ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret
- ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64
- ingress.otoroshi.io/secComAlgoChallengeBackToOto.size
- ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret
- ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64
- ingress.otoroshi.io/secComAlgoInfoToken.size
- ingress.otoroshi.io/secComAlgoInfoToken.secret
- ingress.otoroshi.io/secComAlgoInfoToken.base64
- ingress.otoroshi.io/securityExcludedPatterns
-
-
for more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html
-
with the previous example, the ingress does not define any apikey, so the route is public. If you want to enable apikeys on it, you can deploy the following descriptor
-
apiVersion: networking.k8s.io/v1beta1
-kind: Ingress
-metadata:
- name: http-app-ingress
- annotations:
- kubernetes.io/ingress.class: otoroshi
- ingress.otoroshi.io/group: http-app-group
- ingress.otoroshi.io/forceHttps: 'true'
- ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'
- ingress.otoroshi.io/overrideHost: 'true'
- ingress.otoroshi.io/allowHttp10: 'false'
- ingress.otoroshi.io/publicPatterns: ''
-spec:
- tls:
- - hosts:
- - httpapp.foo.bar
- secretName: http-app-cert
- rules:
- - host: httpapp.foo.bar
- http:
- paths:
- - path: /
- backend:
- serviceName: http-app-service
- servicePort: 8080
-
-
now you can use an existing apikey in the http-app-group
to access your app
-
curl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1
-
-
Use Otoroshi CRDs for a better/full integration
-
Otoroshi provides some Custom Resource Definitions in order to manage Otoroshi-related entities from kubernetes
-
- service-groups
- service-descriptors
- apikeys
- certificates
- global-configs
- jwt-verifiers
- auth-modules
- scripts
- tcp-services
- data-exporters
- admins
- teams
- organizations
-
-
using CRDs, you will be able to deploy and manage those entities from kubectl or the kubernetes api, like
-
sudo kubectl get apikeys --all-namespaces
-sudo kubectl get service-descriptors --all-namespaces
-curl -X GET \
- -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \
- -H 'Accept: application/json' -k \
- https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1alpha1/apikeys | jq
-
-
You can see this as better Ingress
resources. Just as any Ingress
resource can declare which controller it uses (through the kubernetes.io/ingress.class
annotation), you can choose another kind of resource instead of Ingress
. With Otoroshi CRDs you can even define resources like Certificate
, Apikey
, AuthModules
, JwtVerifier
, etc. It will help you to use all the power of Otoroshi while using the deployment model of kubernetes.
Warning
-
when using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the deployed descriptors will override the one in the Otoroshi datastore each time it’s synced. So be careful if you use the Otoroshi UI or the API: some configuration changes may be overridden by the CRD sync job.
-
Resources examples
-
- - group.yaml
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ServiceGroup
-metadata:
- name: http-app-group
- annotations:
- io.otoroshi/id: http-app-group
-spec:
- description: a group to hold services about the http-app
- - apikey.yaml
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ApiKey
-metadata:
- name: http-app-2-apikey-1
-# this apikey can be used to access another app in a different group
-spec:
-  # a secret named secret-2 will be created by otoroshi and can be used by containers
- exportSecret: true
- secretName: secret-2
- authorizedEntities:
-  - group_http-app-2-group
- metadata:
- foo: bar
- rotation: # not mandatory
- enabled: true
- rotationEvery: 720 # hours
- gracePeriod: 168 # hours
- - service-descriptor.yaml
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ServiceDescriptor
-metadata:
- name: http-app-service-descriptor
-spec:
- description: the service descriptor for the http app
- groups:
- - http-app-group
- forceHttps: true
- hosts:
- - httpapp.foo.bar
- matchingRoot: /
- targets:
- - url: 'https://http-app-service:8443'
-  # you can also use serviceName and servicePort to use pod ip addresses. Can be used alone or in combination with url
- # serviceName: http-app-service
- # servicePort: https
- mtlsConfig: # not mandatory
- # use mtls to contact the backend
- mtls: true
- certs:
- # reference the DN for the client cert
- - UID=httpapp-client, O=OtoroshiApps
- trustedCerts:
- # reference the DN for the CA cert
- - CN=Otoroshi Root
- sendOtoroshiHeadersBack: true
- xForwardedHeaders: true
- overrideHost: true
- allowHttp10: false
- publicPatterns:
- - /health
- additionalHeaders:
- x-foo: bar
- - certificate.yaml
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: http-app-certificate-client
-spec:
- description: certificate for the http-app
- autoRenew: true
- csr:
- issuer: CN=Otoroshi Root
- key:
- algo: rsa
- size: 2048
- subject: UID=httpapp-client, O=OtoroshiApps
- client: false
- ca: false
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
- - jwt.yaml
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: JwtVerifier
-metadata:
- name: http-app-verifier
- annotations:
- io.otoroshi/id: http-app-verifier
-spec:
- desc: verify that the jwt token in header jwt is ok
- strict: true
- source:
- type: InHeader
- name: jwt
- remove: ''
- algoSettings:
- type: HSAlgoSettings
- size: 512
- secret: secret
- strategy:
- type: PassThrough
- verificationSettings:
- fields:
- foo: bar
- arrayFields: {}
- - auth.yaml
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: AuthModule
-metadata:
- name: http-app-auth
- annotations:
- io.otoroshi/id: http-app-auth
-spec:
- type: oauth2
- desc: Keycloak mTLS
- sessionMaxAge: 86400
- clientId: otoroshi
- clientSecret: ''
- authorizeUrl: 'https://keycloak.foo.bar/auth/realms/master/protocol/openid-connect/auth'
- tokenUrl: 'https://keycloak.foo.bar/auth/realms/master/protocol/openid-connect/token'
- userInfoUrl: 'https://keycloak.foo.bar/auth/realms/master/protocol/openid-connect/userinfo'
- introspectionUrl: 'https://keycloak.foo.bar/auth/realms/master/protocol/openid-connect/token/introspect'
- loginUrl: 'https://keycloak.foo.bar/auth/realms/master/protocol/openid-connect/auth'
- logoutUrl: 'https://keycloak.foo.bar/auth/realms/master/protocol/openid-connect/logout'
- scope: openid address email microprofile-jwt offline_access phone profile roles web-origins
- claims: ''
- useCookie: false
- useJson: false
- readProfileFromToken: false
- accessTokenField: access_token
- jwtVerifier:
- type: JWKSAlgoSettings
- url: 'http://keycloak.foo.bar/auth/realms/master/protocol/openid-connect/certs'
- timeout: 2000
- headers: {}
- ttl: 3600000
- kty: RSA
- proxy:
- mtlsConfig:
- certs: []
- trustedCerts: []
- mtls: false
- loose: false
- trustAll: false
- nameField: email
- emailField: email
- apiKeyMetaField: apkMeta
- apiKeyTagsField: apkTags
- otoroshiDataField: app_metadata|otoroshi_data
- callbackUrl: 'https://privateapps.oto.tools/privateapps/generic/callback'
- oidConfig: 'http://keycloak.foo.bar/auth/realms/master/.well-known/openid-configuration'
- mtlsConfig:
- certs:
- - UID=httpapp-client, O=OtoroshiApps
- trustedCerts:
- - UID=httpapp-client, O=OtoroshiApps
- mtls: true
- loose: false
- trustAll: false
- proxy:
- extraMetadata: {}
- refreshTokens: false
- - organization.yaml
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Tenant
-metadata:
- name: default-organization
-spec:
- id: default
- name: Default organization
- description: Default organization created for any otoroshi instance
- metadata: {}
- - team.yaml
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Team
-metadata:
- name: default-team
-spec:
- id: default
- tenant: default
- name: Default team
- description: Default team created for any otoroshi instance
- metadata: {}
-
-
Configuration
-
To configure it, just go to the danger zone, and in Global scripts
add the job named Kubernetes Otoroshi CRDs Controller
. Then add the following configuration for the job (with your own tweaks of course)
-
{
- "KubernetesConfig": {
- "enabled": true,
- "crds": true,
- "endpoint": "https://127.0.0.1:6443",
- "token": "eyJhbGciOiJSUzI....F463SrpOehQRaQ",
- "namespaces": [
- "*"
- ]
- }
-}
-
-
the configuration can have the following values
-
{
- "KubernetesConfig": {
- "endpoint": "https://127.0.0.1:6443", // the endpoint to talk to the kubernetes api, optional
- "token": "xxxx", // the bearer token to talk to the kubernetes api, optional
- "userPassword": "user:password", // the user password tuple to talk to the kubernetes api, optional
- "caCert": "/etc/ca.cert", // the ca cert file path to talk to the kubernetes api, optional
- "trust": false, // trust any cert to talk to the kubernetes api, optional
- "namespaces": ["*"], // the watched namespaces
- "labels": ["label"], // the watched namespaces
- "ingressClasses": ["otoroshi"], // the watched kubernetes.io/ingress.class annotations, can be *
- "defaultGroup": "default", // the group to put services in otoroshi
- "ingresses": false, // sync ingresses
- "crds": true, // sync crds
- "kubeLeader": false, // delegate leader election to kubernetes, to know where the sync job should run
- "restartDependantDeployments": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments
- "templates": { // template for entities that will be merged with kubernetes entities. can be "default" to use otoroshi default templates
- "service-group": {},
- "service-descriptor": {},
- "apikeys": {},
- "global-config": {},
- "jwt-verifier": {},
- "tcp-service": {},
- "certificate": {},
- "auth-module": {},
- "data-exporter": {},
- "script": {},
- "organization": {},
- "team": {},
- "data-exporter": {}
- }
- }
-}
-
-
If endpoint
is not defined, Otoroshi will try to get it from $KUBERNETES_SERVICE_HOST
and $KUBERNETES_SERVICE_PORT
. If token
is not defined, Otoroshi will try to get it from the file at /var/run/secrets/kubernetes.io/serviceaccount/token
. If caCert
is not defined, Otoroshi will try to get it from the file at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
. If $KUBECONFIG
is defined, endpoint
, token
and caCert
will be read from the current context of the file referenced by it.
-
you can find a more complete example of the configuration object here
-
Note about apikeys
and certificates
resources
-
Apikeys and Certificates are a little bit different from the other resources. They can be defined without their secret part, but with an export setting, so otoroshi will generate the secret parts and export the apikey or the certificate as a kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)
-
In those resources you can define
-
exportSecret: true
-secretName: the-secret-name
-
-
and omit clientSecret
for apikeys, or publicKey
and privateKey
for certificates. For certificates, you will have to provide a csr
so the certificate can be generated
-
csr:
- issuer: CN=Otoroshi Root
- hosts:
- - httpapp.foo.bar
- - httpapps.foo.bar
- key:
- algo: rsa
- size: 2048
- subject: UID=httpapp-front, O=OtoroshiApps
- client: false
- ca: false
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
-
-
when apikeys are exported as kubernetes secrets, they will have the type otoroshi.io/apikey-secret
with values clientId
and clientSecret
-
apiVersion: v1
-kind: Secret
-metadata:
- name: apikey-1
-type: otoroshi.io/apikey-secret
-data:
- clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==
- clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==
-
-
when certificates are exported as kubernetes secrets, they will have the type kubernetes.io/tls
with the standard values tls.crt
(the full cert chain) and tls.key
(the private key). For more convenience, they will also have a cert.crt
value containing the actual certificate without the ca chain and ca-chain.crt
containing the ca chain without the certificate.
-
apiVersion: v1
-kind: Secret
-metadata:
- name: certificate-1
-type: kubernetes.io/tls
-data:
- tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==
- tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==
- cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==
- ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==
-
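Since those are standard kubernetes.io/tls
secrets, you can inspect an exported certificate with the usual tooling, for example with the certificate-1 secret above:
-
# decode the full chain and print the subject and validity dates of the leaf certificate
-kubectl get secret certificate-1 -o jsonpath="{.data.tls\.crt}" | base64 --decode | openssl x509 -noout -subject -dates
-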
-
Full CRD example
-
then you can deploy the previous example with a better configuration level, using mtls, apikeys, etc.
-
Let’s say the app looks like:
-
const fs = require('fs');
-const https = require('https');
-
-// here we read the apikey to access http-app-2 from files mounted from secrets
-const clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')
-const clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')
-
-const backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')
-const backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')
-const backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')
-
-const clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')
-const clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')
-const clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')
-
-function callApi2() {
- return new Promise((success, failure) => {
- const options = {
- // using the implicit internal name (*.global.otoroshi.mesh) of the other service descriptor passing through otoroshi
- hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh',
-      port: 8443,
- path: '/',
- method: 'GET',
- headers: {
- 'Accept': 'application/json',
- 'Otoroshi-Client-Id': clientId,
- 'Otoroshi-Client-Secret': clientSecret,
- },
- cert: clientCert,
- key: clientKey,
- ca: clientCa
- };
- let data = '';
- const req = https.request(options, (res) => {
- res.on('data', (d) => {
- data = data + d.toString('utf8');
- });
- res.on('end', () => {
- success({ body: JSON.parse(data), res });
- });
- res.on('error', (e) => {
- failure(e);
- });
- });
- req.end();
- })
-}
-
-const options = {
- key: backendKey,
- cert: backendCert,
- ca: backendCa,
- // we want mtls behavior
- requestCert: true,
- rejectUnauthorized: true
-};
-https.createServer(options, (req, res) => {
- res.writeHead(200, {'Content-Type': 'application/json'});
- callApi2().then(resp => {
-    res.write(JSON.stringify({ message: `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body }));
- });
-}).listen(443);
-
-
then, the descriptors will be:
-
---
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: http-app-deployment
-spec:
- selector:
- matchLabels:
- run: http-app-deployment
- replicas: 1
- template:
- metadata:
- labels:
- run: http-app-deployment
- spec:
- containers:
- - image: foo/http-app
- imagePullPolicy: IfNotPresent
-       name: http-app
- ports:
- - containerPort: 443
- name: "https"
- volumeMounts:
- - name: apikey-volume
- # here you will be able to read apikey from files
- # - /var/run/secrets/kubernetes.io/apikeys/clientId
- # - /var/run/secrets/kubernetes.io/apikeys/clientSecret
- mountPath: "/var/run/secrets/kubernetes.io/apikeys"
- readOnly: true
- - name: backend-cert-volume
- # here you will be able to read app cert from files
- # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt
- # - /var/run/secrets/kubernetes.io/certs/backend/tls.key
- mountPath: "/var/run/secrets/kubernetes.io/certs/backend"
- readOnly: true
- - name: client-cert-volume
- # here you will be able to read app cert from files
- # - /var/run/secrets/kubernetes.io/certs/client/tls.crt
- # - /var/run/secrets/kubernetes.io/certs/client/tls.key
- mountPath: "/var/run/secrets/kubernetes.io/certs/client"
- readOnly: true
- volumes:
- - name: apikey-volume
- secret:
- # here we reference the secret name from apikey http-app-2-apikey-1
- secretName: secret-2
- - name: backend-cert-volume
- secret:
- # here we reference the secret name from cert http-app-certificate-backend
- secretName: http-app-certificate-backend-secret
- - name: client-cert-volume
- secret:
- # here we reference the secret name from cert http-app-certificate-client
- secretName: http-app-certificate-client-secret
----
-apiVersion: v1
-kind: Service
-metadata:
- name: http-app-service
-spec:
- ports:
- - port: 8443
- targetPort: https
- name: https
- selector:
- run: http-app-deployment
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ServiceGroup
-metadata:
- name: http-app-group
- annotations:
- otoroshi.io/id: http-app-group
-spec:
- description: a group to hold services about the http-app
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ApiKey
-metadata:
- name: http-app-apikey-1
-# this apikey can be used to access the app
-spec:
- # a secret name secret-1 will be created by otoroshi and can be used by containers
- exportSecret: true
- secretName: secret-1
- authorizedEntities:
- - group_http-app-group
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ApiKey
-metadata:
- name: http-app-2-apikey-1
-# this apikey can be used to access another app in a different group
-spec:
- # a secret name secret-1 will be created by otoroshi and can be used by containers
- exportSecret: true
- secretName: secret-2
- authorizedEntities:
- - group_http-app-2-group
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: http-app-certificate-frontend
-spec:
-  description: certificate for the http-app on otoroshi frontend
- autoRenew: true
- csr:
- issuer: CN=Otoroshi Root
- hosts:
- - httpapp.foo.bar
- key:
- algo: rsa
- size: 2048
- subject: UID=httpapp-front, O=OtoroshiApps
- client: false
- ca: false
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: http-app-certificate-backend
-spec:
- description: certificate for the http-app deployed on pods
- autoRenew: true
- # a secret name http-app-certificate-backend-secret will be created by otoroshi and can be used by containers
- exportSecret: true
- secretName: http-app-certificate-backend-secret
- csr:
- issuer: CN=Otoroshi Root
- hosts:
- - http-app-service
- key:
- algo: rsa
- size: 2048
- subject: UID=httpapp-back, O=OtoroshiApps
- client: false
- ca: false
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: http-app-certificate-client
-spec:
- description: certificate for the http-app
- autoRenew: true
- secretName: http-app-certificate-client-secret
- csr:
- issuer: CN=Otoroshi Root
- key:
- algo: rsa
- size: 2048
- subject: UID=httpapp-client, O=OtoroshiApps
- client: false
- ca: false
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ServiceDescriptor
-metadata:
- name: http-app-service-descriptor
-spec:
- description: the service descriptor for the http app
- groups:
- - http-app-group
- forceHttps: true
- hosts:
-  - httpapp.foo.bar # hostname exposed outside of the kubernetes cluster
- # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster
- matchingRoot: /
- targets:
- - url: https://http-app-service:8443
- # alternatively, you can use serviceName and servicePort to use pods ip addresses
- # serviceName: http-app-service
- # servicePort: https
- mtlsConfig:
- # use mtls to contact the backend
- mtls: true
- certs:
- # reference the DN for the client cert
- - UID=httpapp-client, O=OtoroshiApps
- trustedCerts:
- # reference the DN for the CA cert
- - CN=Otoroshi Root
- sendOtoroshiHeadersBack: true
- xForwardedHeaders: true
- overrideHost: true
- allowHttp10: false
- publicPatterns:
- - /health
- additionalHeaders:
- x-foo: bar
-# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc ... for more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html
-
-
now with this descriptor deployed, you can access your app with a command like
-
CLIENT_ID=`kubectl get secret secret-1 -o jsonpath="{.data.clientId}" | base64 --decode`
-CLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath="{.data.clientSecret}" | base64 --decode`
-curl -X GET https://httpapp.foo.bar/get -u "$CLIENT_ID:$CLIENT_SECRET"
-
-
Expose Otoroshi to the outside world
-
If you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: LoadBalancer
). In your DNS settings, you’ll need to point any name that otoroshi should route to the loadbalancer endpoint (CNAME or ip addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they provide external loadbalancers out of the box. However, a bare metal kubernetes cluster doesn’t come with support for external loadbalancers (service of type LoadBalancer
). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like MetalLB that provide software LoadBalancer
services to bare metal clusters, or you can use and customize the examples in the installation section.
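For instance, with an older MetalLB release configured in layer 2 mode, a minimal address pool could look like the sketch below (the address range is a placeholder, and recent MetalLB versions declare pools through CRDs instead of this legacy ConfigMap):
-
apiVersion: v1
-kind: ConfigMap
-metadata:
-  namespace: metallb-system
-  name: config
-data:
-  config: |
-    address-pools:
-    - name: default
-      protocol: layer2
-      addresses:
-      # the ip range handed out to LoadBalancer services (placeholder)
-      - 10.2.2.50-10.2.2.60
-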
Warning
-
We don’t recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.
-
Access a service from inside the k8s cluster
-
Using host header overriding
-
You can access any service referenced in otoroshi, through otoroshi, from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain, like:
-
CLIENT_ID="xxx"
-CLIENT_SECRET="xxx"
-curl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u "$CLIENT_ID:$CLIENT_SECRET"
-
-
Using dedicated services
-
it’s also possible to define services that target the otoroshi deployment (or the otoroshi worker deployment) and use them as valid hosts in otoroshi services
-
apiVersion: v1
-kind: Service
-metadata:
- name: my-awesome-service
-spec:
- selector:
- # run: otoroshi-deployment
- # or in cluster mode
- run: otoroshi-worker-deployment
- ports:
- - port: 8080
- name: "http"
- targetPort: "http"
- - port: 8443
- name: "https"
- targetPort: "https"
-
-
and access it like
-
CLIENT_ID="xxx"
-CLIENT_SECRET="xxx"
-curl -X GET https://my-awesome-service.my-namespace.svc.cluster.local:8443/get -u "$CLIENT_ID:$CLIENT_SECRET"
-
-
Using coredns integration
-
You can also enable the coredns integration to simplify the flow. You can use the following keys in the plugin config:
-
{
- "KubernetesConfig": {
- ...
- "coreDnsIntegration": true, // enable coredns integration for intra cluster calls
- "kubeSystemNamespace": "kube-system", // the namespace where coredns is deployed
- "corednsConfigMap": "coredns", // the name of the coredns configmap
- "otoroshiServiceName": "otoroshi-service", // the name of the otoroshi service, could be otoroshi-workers-service
- "otoroshiNamespace": "otoroshi", // the namespace where otoroshi is deployed
- "clusterDomain": "cluster.local", // the domain for cluster services
- ...
- }
-}
-
-
otoroshi will patch the coredns config at startup, then you can call your services like
-
CLIENT_ID="xxx"
-CLIENT_SECRET="xxx"
-curl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u "$CLIENT_ID:$CLIENT_SECRET"
-
-
By default, all services created from CRD service descriptors are exposed as ${service-name}.${service-namespace}.otoroshi.mesh
or ${service-name}.${service-namespace}.svc.otoroshi.local
-
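You can check that the rewrite is in place from any pod, reusing the names of the example above:
-
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup my-awesome-service.my-awesome-service-namespace.otoroshi.mesh
-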
Using coredns with manual patching
-
you can also patch the coredns config manually
-
kubectl edit configmaps coredns -n kube-system # or your own custom config map
-
-
and change the Corefile
data to add the following snippet at the end of the file
-
otoroshi.mesh:53 {
- errors
- health
- ready
- kubernetes cluster.local in-addr.arpa ip6.arpa {
- pods insecure
- fallthrough in-addr.arpa ip6.arpa
- }
- rewrite name regex (.*)\.otoroshi\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local
- forward . /etc/resolv.conf
- cache 30
- loop
- reload
- loadbalance
-}
-
-
you can also define a simpler rewrite if it suits your use case better
-
rewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local
-
-
do not hesitate to change otoroshi-worker-service.otoroshi
according to your own setup. If otoroshi is not in cluster mode, change it to otoroshi-service.otoroshi
. If otoroshi is not deployed in the otoroshi
namespace, change it to otoroshi-service.the-namespace
, etc.
-
By default, all services created from CRD service descriptors are exposed as ${service-name}.${service-namespace}.otoroshi.mesh
-
then you can call your service like
-
CLIENT_ID="xxx"
-CLIENT_SECRET="xxx"
-
-curl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u "$CLIENT_ID:$CLIENT_SECRET"
-
-
Using old kube-dns system
-
if you’re stuck with an old version of kubernetes that uses kube-dns, which is not supported by otoroshi, you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system.
-
Here is an example of coredns deployment with otoroshi domain config
-
- - coredns.yaml
-
---
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: otoroshi-dns
- labels:
- app: otoroshi
- component: coredns
-data:
- Corefile: |
- otoroshi.mesh:5353 {
- errors
- health
- ready
- kubernetes cluster.local in-addr.arpa ip6.arpa {
- pods insecure
- fallthrough in-addr.arpa ip6.arpa
- }
- rewrite name regex (.*)\.otoroshi\.mesh otoroshi-service.otoroshi.svc.cluster.local
- forward . /etc/resolv.conf
- cache 30
- loop
- reload
- loadbalance
- }
- .:5353 {
- errors
- health
- kubernetes cluster.local in-addr.arpa ip6.arpa {
- pods insecure
- fallthrough in-addr.arpa ip6.arpa
- }
- forward . /etc/resolv.conf
- cache 30
- loop
- reload
- loadbalance
- }
----
-apiVersion: v1
-kind: Service
-metadata:
- name: otoroshi-dns
- labels:
- app: otoroshi
- component: coredns
-spec:
- # clusterIP: 1.1.1.1
- selector:
- app: otoroshi
- component: coredns
- type: ClusterIP
- ports:
- - name: dns
- port: 5353
- protocol: UDP
- - name: dns-tcp
- port: 5353
- protocol: TCP
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: otoroshi-dns
- labels:
- app: otoroshi
- component: coredns
-spec:
- replicas: 2
- strategy:
- type: RollingUpdate
- rollingUpdate:
- maxUnavailable: 1
- selector:
- matchLabels:
- app: otoroshi
- component: coredns
- template:
- metadata:
- labels:
- app: otoroshi
- component: coredns
- spec:
- serviceAccountName: otoroshi-admin-user
- affinity:
- podAntiAffinity:
- preferredDuringSchedulingIgnoredDuringExecution:
- - weight: 100
- podAffinityTerm:
- labelSelector:
- matchExpressions:
- - key: app
- operator: In
- values:
- - otoroshi
- - key: component
- operator: In
- values:
- - coredns
- topologyKey: "kubernetes.io/hostname"
- tolerations:
- - key: "CriticalAddonsOnly"
- operator: "Exists"
- containers:
- - name: coredns
- image: coredns/coredns:1.8.0
- imagePullPolicy: IfNotPresent
- resources:
- limits:
- memory: 170Mi
- requests:
- cpu: 100m
- memory: 70Mi
- args: [ "-conf", "/etc/coredns/Corefile" ]
- volumeMounts:
- - name: config-volume
- mountPath: /etc/coredns
- readOnly: true
- ports:
- - containerPort: 5353
- name: dns
- protocol: UDP
- - containerPort: 5353
- name: dns-tcp
- protocol: TCP
- securityContext:
- allowPrivilegeEscalation: false
- capabilities:
- add:
- - NET_BIND_SERVICE
- drop:
- - all
- readOnlyRootFilesystem: true
- livenessProbe:
- httpGet:
- path: /health
- port: 8080
- scheme: HTTP
- initialDelaySeconds: 30
- timeoutSeconds: 5
- successThreshold: 1
- failureThreshold: 5
- dnsPolicy: Default
- volumes:
- - name: config-volume
- configMap:
- name: otoroshi-dns
- items:
- - key: Corefile
- path: Corefile
-
-
-
then you can enable the kube-dns integration in the otoroshi kubernetes job
-
{
- "KubernetesConfig": {
- ...
- "kubeDnsOperatorIntegration": true, // enable kube-dns integration for intra cluster calls
- "kubeDnsOperatorCoreDnsNamespace": "otoroshi", // namespace where coredns is installed
- "kubeDnsOperatorCoreDnsName": "otoroshi-dns", // name of the coredns service
- "kubeDnsOperatorCoreDnsPort": 5353, // port of the coredns service
- ...
- }
-}
-
-
Using Openshift DNS operator
-
The Openshift DNS operator does not allow much customization of the DNS configuration, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator.
-
Here is an example of coredns deployment with otoroshi domain config
-
- - coredns.yaml
-
---
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: otoroshi-dns
- labels:
- app: otoroshi
- component: coredns
-data:
- Corefile: |
- otoroshi.mesh:5353 {
- errors
- health
- ready
- kubernetes cluster.local in-addr.arpa ip6.arpa {
- pods insecure
- fallthrough in-addr.arpa ip6.arpa
- }
- rewrite name regex (.*)\.otoroshi\.mesh otoroshi-service.otoroshi.svc.cluster.local
- forward . /etc/resolv.conf
- cache 30
- loop
- reload
- loadbalance
- }
- .:5353 {
- errors
- health
- kubernetes cluster.local in-addr.arpa ip6.arpa {
- pods insecure
- fallthrough in-addr.arpa ip6.arpa
- }
- forward . /etc/resolv.conf
- cache 30
- loop
- reload
- loadbalance
- }
----
-apiVersion: v1
-kind: Service
-metadata:
- name: otoroshi-dns
- labels:
- app: otoroshi
- component: coredns
-spec:
- # clusterIP: 1.1.1.1
- selector:
- app: otoroshi
- component: coredns
- type: ClusterIP
- ports:
- - name: dns
- port: 5353
- protocol: UDP
- - name: dns-tcp
- port: 5353
- protocol: TCP
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: otoroshi-dns
- labels:
- app: otoroshi
- component: coredns
-spec:
- replicas: 2
- strategy:
- type: RollingUpdate
- rollingUpdate:
- maxUnavailable: 1
- selector:
- matchLabels:
- app: otoroshi
- component: coredns
- template:
- metadata:
- labels:
- app: otoroshi
- component: coredns
- spec:
- serviceAccountName: otoroshi-admin-user
- affinity:
- podAntiAffinity:
- preferredDuringSchedulingIgnoredDuringExecution:
- - weight: 100
- podAffinityTerm:
- labelSelector:
- matchExpressions:
- - key: app
- operator: In
- values:
- - otoroshi
- - key: component
- operator: In
- values:
- - coredns
- topologyKey: "kubernetes.io/hostname"
- tolerations:
- - key: "CriticalAddonsOnly"
- operator: "Exists"
- containers:
- - name: coredns
- image: coredns/coredns:1.8.0
- imagePullPolicy: IfNotPresent
- resources:
- limits:
- memory: 170Mi
- requests:
- cpu: 100m
- memory: 70Mi
- args: [ "-conf", "/etc/coredns/Corefile" ]
- volumeMounts:
- - name: config-volume
- mountPath: /etc/coredns
- readOnly: true
- ports:
- - containerPort: 5353
- name: dns
- protocol: UDP
- - containerPort: 5353
- name: dns-tcp
- protocol: TCP
- securityContext:
- allowPrivilegeEscalation: false
- capabilities:
- add:
- - NET_BIND_SERVICE
- drop:
- - all
- readOnlyRootFilesystem: true
- livenessProbe:
- httpGet:
- path: /health
- port: 8080
- scheme: HTTP
- initialDelaySeconds: 30
- timeoutSeconds: 5
- successThreshold: 1
- failureThreshold: 5
- dnsPolicy: Default
- volumes:
- - name: config-volume
- configMap:
- name: otoroshi-dns
- items:
- - key: Corefile
- path: Corefile
-
-
-
then you can enable the Openshift DNS operator integration in the otoroshi kubernetes job
-
{
- "KubernetesConfig": {
- ...
- "openshiftDnsOperatorIntegration": true, // enable openshift dns operator integration for intra cluster calls
- "openshiftDnsOperatorCoreDnsNamespace": "otoroshi", // namespace where coredns is installed
- "openshiftDnsOperatorCoreDnsName": "otoroshi-dns", // name of the coredns service
- "openshiftDnsOperatorCoreDnsPort": 5353, // port of the coredns service
- ...
- }
-}
-
-
don’t forget to update the otoroshi ClusterRole
-
- apiGroups:
- - operator.openshift.io
- resources:
- - dnses
- verbs:
- - get
- - list
- - watch
- - update
-
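For reference, what this integration automates is roughly equivalent to declaring a forwarding server on the default DNS operator yourself. A sketch, where the upstream ip is a placeholder for the cluster ip of your otoroshi-dns service:
-
# forward the otoroshi.mesh zone to the dedicated coredns (10.3.0.10 is a placeholder)
-oc patch dns.operator/default --type merge --patch '{"spec":{"servers":[{"name":"otoroshi-dns","zones":["otoroshi.mesh"],"forwardPlugin":{"upstreams":["10.3.0.10:5353"]}}]}}'
-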
-
CRD validation in kubectl
-
In order to get CRD validation right inside kubectl, before manifests are deployed, you can deploy a validation webhook that will do the trick. Also check that you have the otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator
request sink enabled.
-
- - validation-webhook.yaml
-
apiVersion: admissionregistration.k8s.io/v1
-kind: ValidatingWebhookConfiguration
-metadata:
- name: otoroshi-admission-webhook-validation
- labels:
- app: otoroshi
- component: otoroshi-validation-webhook
-webhooks:
- - name: otoroshi-admission-webhook.otoroshi.io
- rules:
- - operations:
- - "CREATE"
- - "UPDATE"
- apiGroups:
- - "proxy.otoroshi.io"
- apiVersions:
- - "*"
- resources:
- - "*"
- scope: "Namespaced"
- clientConfig:
- # url: "https://otoroshi-kubernetes-admission-webhook.otoroshi.svc.cluster.local:8443/apis/webhooks/validation"
- service:
- name: otoroshi-service
- namespace: otoroshi
- path: "/apis/webhooks/validation"
- port: 8443
- caBundle: "" # injected at runtime
- failurePolicy: Ignore # inject at runtime
- sideEffects: None
- admissionReviewVersions:
- - "v1"
-
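once applied, you can verify that the webhook is registered:
-
kubectl get validatingwebhookconfigurations otoroshi-admission-webhook-validation
-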
-
Easier integration with otoroshi-sidecar
-
Otoroshi can help you to easily use existing services without modifications while getting all the perks of otoroshi like apikeys, mTLS, the exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pod of your deployment that will handle calls coming from and going to otoroshi. To enable the otoroshi-sidecar, you need to deploy the following admission webhook. Also check that you have the otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector
request sink enabled.
-
- - sidecar-webhook.yaml
-
apiVersion: admissionregistration.k8s.io/v1
-kind: MutatingWebhookConfiguration
-metadata:
- name: otoroshi-admission-webhook-injector
- labels:
- app: otoroshi
- component: otoroshi-validation-webhook
-webhooks:
- - name: otoroshi-admission-webhook-injector.otoroshi.io
- rules:
- - operations:
- - "CREATE"
- apiGroups:
- - ""
- apiVersions:
- - "v1"
- resources:
- - "pods"
- scope: "Namespaced"
- # namespaceSelector:
- # matchLabels:
- # otoroshi.io/sidecar: inject
- objectSelector:
- matchLabels:
- otoroshi.io/sidecar: inject
- clientConfig:
- # url: "https://otoroshi-kubernetes-admission-webhook.otoroshi.svc.cluster.local:8443/apis/webhooks/inject"
- service:
- name: otoroshi-service
- namespace: otoroshi
- path: "/apis/webhooks/inject"
- port: 8443
- caBundle: "" # inject at runtime
- failurePolicy: Ignore # inject at runtime
- sideEffects: None
- admissionReviewVersions:
- - "v1"
-
-
then it’s quite easy to add the sidecar: just add the label otoroshi.io/sidecar: inject
to your pod, plus some annotations to tell otoroshi what certificates and apikeys to use.
-
annotations:
- otoroshi.io/sidecar-apikey: backend-apikey
- otoroshi.io/sidecar-backend-cert: backend-cert
- otoroshi.io/sidecar-client-cert: oto-client-cert
- otoroshi.io/token-secret: secret
- otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps
-
-
now you can just call your otoroshi-handled apis from inside your pod, like curl http://my-service.namespace.otoroshi.mesh/api
, without passing any apikey or client certificate: the sidecar will handle everything for you. The same goes for calls from otoroshi to your pod: everything will be done in an mTLS fashion, with apikeys and the otoroshi exchange protocol.
-
here is a full example
-
- - sidecar.yaml
-
---
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: app-deployment
-spec:
- selector:
- matchLabels:
- run: app-deployment
- app: node
- replicas: 1
- template:
- metadata:
- labels:
- run: app-deployment
- app: node
- foo: bar
- otoroshi.io/sidecar: inject
- annotations:
- otoroshi.io/sidecar-apikey: backend-apikey
- otoroshi.io/sidecar-backend-cert: backend-cert
- otoroshi.io/sidecar-client-cert: oto-client-cert
- otoroshi.io/token-secret: secret
- otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps
- spec:
- containers:
- - image: containous/whoami:latest
- name: whoami
- args: ["--port", "8081"]
- ports:
- - name: main-port
- containerPort: 8081
----
-apiVersion: v1
-kind: Service
-metadata:
- name: app-service
-spec:
- selector:
- run: app-deployment
- ports:
- - port: 8443
- name: "https"
- targetPort: "https"
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: backend-cert
-spec:
- description: backend-cert
- autoRenew: true
- exportSecret: true
- secretName: backend-cert
- csr:
- hosts:
- - app-service.default.svc.cluster.local
- issuer: otoroshi-intermediate-ca
- key:
- algo: rsa
- size: 2048
- subject: UID=backend-cert, O=OtoroshiApps
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: client-cert
- annotations:
- otoroshi.io/id: client-cert
-spec:
- description: client-cert
- autoRenew: true
- exportSecret: true
- client: true
- secretName: client-cert
- csr:
- client: true
- issuer: otoroshi-intermediate-ca
- key:
- algo: rsa
- size: 2048
- subject: UID=client-cert, O=OtoroshiApps
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: oto-client-cert
- annotations:
- otoroshi.io/id: oto-client-cert
-spec:
- description: oto-client-cert
- autoRenew: true
- exportSecret: true
- client: true
- secretName: oto-client-cert
- csr:
- client: true
- issuer: otoroshi-intermediate-ca
- key:
- algo: rsa
- size: 2048
- subject: UID=oto-client-cert, O=OtoroshiApps
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: frontend-cert
-spec:
- description: frontend-cert
- autoRenew: true
- csr:
- issuer: otoroshi-intermediate-ca
- hosts:
- - backend.oto.tools
- key:
- algo: rsa
- size: 2048
- subject: UID=frontend-cert, O=OtoroshiApps
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: Certificate
-metadata:
- name: mesh-cert
-spec:
- description: mesh-cert
- autoRenew: true
- csr:
- issuer: otoroshi-intermediate-ca
- hosts:
- - '*.default.otoroshi.mesh'
- key:
- algo: rsa
- size: 2048
- subject: O=Otoroshi, OU=Otoroshi Certificates, CN=kubernetes-mesh
- duration: 31536000000
- signatureAlg: SHA256WithRSAEncryption
- digestAlg: SHA-256
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ApiKey
-metadata:
- name: backend-apikey
-spec:
- exportSecret: true
- secretName: backend-apikey
- authorizedEntities:
- - group_default
----
-apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ServiceDescriptor
-metadata:
- name: backend
-spec:
- description: backend
- groups:
- - default
- forceHttps: false
- hosts:
- - backend.oto.tools
- matchingRoot: /
- publicPatterns:
- - /.*
- secComUseSameAlgo: true
- secComVersion: 2
- secComInfoTokenVersion: Latest
- secComSettings:
- type: HSAlgoSettings
- size: 512
- secret: secret
- base64: false
- secComAlgoChallengeOtoToBack:
- type: HSAlgoSettings
- size: 512
- secret: secret
- base64: false
- secComAlgoChallengeBackToOto:
- type: HSAlgoSettings
- size: 512
- secret: secret
- base64: false
- secComAlgoInfoToken:
- type: HSAlgoSettings
- size: 512
- secret: secret
- base64: false
- targets:
- - url: https://app-service.default.svc.cluster.local:8443
- mtlsConfig:
- mtls: true
- certs:
- - UID=oto-client-cert, O=OtoroshiApps
- trustedCerts:
- - otoroshi-intermediate-ca
-
Warning
-
Please avoid using port 80
for your pod, as it’s the default port used to access otoroshi from your pod and calls to it will be redirected to the sidecar via an iptables rule.
-
Daikoku integration
-
It is possible to easily integrate daikoku-generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and set the integration mode to Automatic
-
-
then, when a user subscribes for an apikey, they will only see an integration token
-
-
then just create an ApiKey manifest with this token and you’re good to go
-
apiVersion: proxy.otoroshi.io/v1alpha1
-kind: ApiKey
-metadata:
- name: http-app-2-apikey-3
-spec:
- exportSecret: true
- secretName: secret-3
- daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy
-
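once the sync job has processed the manifest, the integration token is resolved to an actual apikey and, since exportSecret
is enabled, you can read the generated credentials like any other exported apikey:
-
kubectl get secret secret-3 -o jsonpath="{.data.clientId}" | base64 --decode
-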
-
-