diff --git a/manual/src/main/paradox/content-pretty.json b/manual/src/main/paradox/content-pretty.json
index 891618e508..0523805407 100644
--- a/manual/src/main/paradox/content-pretty.json
+++ b/manual/src/main/paradox/content-pretty.json
@@ -46,7 +46,7 @@
"id": "/deploy/kubernetes.md",
"url": "/deploy/kubernetes.html",
"title": "Kubernetes",
- "content": "# Kubernetes\n\nStarting at version 1.5.0, Otoroshi provides a native Kubernetes support. Multiple otoroshi jobs (that are actually kubernetes controllers) are provided in order to\n\n- sync kubernetes secrets of type `kubernetes.io/tls` to otoroshi certificates\n- act as a standard ingress controller (supporting `Ingress` objects)\n- provide Custom Resource Definitions (CRDs) to manage Otoroshi entities from Kubernetes and act as an ingress controller with its own resources\n\n## Installing otoroshi on your kubernetes cluster\n\n@@@ warning\nYou need to have cluster admin privileges to install otoroshi and its service account, role mapping and CRDs on a kubernetes cluster. We also advise you to create a dedicated namespace (you can name it `otoroshi` for example) to install otoroshi\n@@@\n\nIf you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.\n\nYou can also create a `kustomization.yaml` file with a remote base\n\n```yaml\nbases:\n- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v1.5.0-dev\n```\n\nThen deploy it with `kubectl apply -k ./overlays/myoverlay`. \n\nYou can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster\n\n```sh\nhelm repo add otoroshi https://maif.github.io/otoroshi/helm\nhelm install my-otoroshi otoroshi/otoroshi\n```\n\nBelow, you will find example of deployment. Do not hesitate to adapt them to your needs. Those descriptors have value placeholders that you will need to replace with actual values like \n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: ${domain}\n```\n\nyou will have to edit it to make it look like\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'apis.my.domain'\n```\n\nif you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of otoroshi\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: maif/otoroshi:1.5.0-dev-jdk11\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create several secrets for each placeholder, mount them to the otoroshi container then use their file path as value\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nyou can use the same trick in the config. 
file itself\n\n### Note on bare metal kubernetes cluster installation\n\n@@@ note\nBare metal kubernetes clusters don't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples below.\n@@@\n\n@@@ warning\nWe don't recommand running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@\n\n### Common manifests\n\nthe following manifests are always needed. They create otoroshi CRDs, tokens, role, etc. Redis deployment is not mandatory, it's just an example. You can use your own existing setup.\n\nrbac.yaml\n: @@snip [rbac.yaml](../snippets/kubernetes/kustomize/base/rbac.yaml) \n\ncrds.yaml\n: @@snip [crds.yaml](../snippets/kubernetes/kustomize/base/crds.yaml) \n\nredis.yaml\n: @@snip [redis.yaml](../snippets/kubernetes/kustomize/base/redis.yaml) \n\n\n### Deploy a simple otoroshi instanciation on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type `LoadBalancer` to expose otoroshi to the rest of the world. You have to setup your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple/dns.example) \n\n### Deploy a simple otoroshi instanciation on a bare metal kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instance are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to setup your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/dns.example) \n\n\n### Deploy a simple otoroshi instanciation on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have one otoroshi instance on each kubernetes node (with the `otoroshi-kind: instance` label) with redis persistance. The otoroshi instances are exposed as `hostPort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to setup your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/dns.example) \n\n### Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas of an otoroshi leader connected to a redis instance and 2 replicas of an otoroshi worker connected to the leader. We use a service of type `LoadBalancer` to expose otoroshi leader/worker to the rest of the world. You have to setup your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster\n\nHere we have 2 replicas of otoroshi leader connected to the same redis instance and 2 replicas for otoroshi worker. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to setup your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster using DaemonSet\n\nHere we have 1 otoroshi leader instance on each kubernetes node (with the `otoroshi-kind: leader` label) connected to the same redis instance and 1 otoroshi worker instance on each kubernetes node (with the `otoroshi-kind: worker` label). The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to setup your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched namespaces\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": true, // sync ingresses\n \"crds\": false, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nNow you can deploy your first service ;)\n\n### Deploy an ingress route\n\nnow let's say you want to deploy an http service and route to the outside world through otoroshi\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: kennethreitz/httpbin\n imagePullPolicy: 
IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 80\n name: \"http\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8080\n targetPort: http\n name: http\n selector:\n run: http-app-deployment\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nonce deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n### Support for Ingress Classes\n\nSince Kubernetes 1.18, you can use `IngressClass` type of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest `kind`. To use it, configure the Ingress job to match your controller\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"otoroshi.io/ingress-controller\"],\n ...\n }\n}\n```\n\nthen you have to deploy an `IngressClass` to declare Otoroshi as an ingress controller\n\n```yaml\napiVersion: \"networking.k8s.io/v1beta1\"\nkind: \"IngressClass\"\nmetadata:\n name: \"otoroshi-ingress-controller\"\nspec:\n controller: \"otoroshi.io/ingress-controller\"\n parameters:\n apiGroup: \"proxy.otoroshi.io/v1alpha\"\n kind: \"IngressParameters\"\n name: \"otoroshi-ingress-controller\"\n```\n\nand use it in your `Ingress`\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\nspec:\n ingressClassName: otoroshi-ingress-controller\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\n### Use multiple ingress controllers\n\nIt is of course possible to use multiple ingress controller at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation `kubernetes.io/ingress.class`. By default, otoroshi reacts to the class `otoroshi`, but you can make it the default ingress controller with the following config\n\n```json\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClass\": \"*\",\n ...\n }\n}\n```\n\n### Supported annotations\n\nif you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customisation, just go to the CRDs part. 
The following annotations are supported :\n\n- `ingress.otoroshi.io/groups`\n- `ingress.otoroshi.io/group`\n- `ingress.otoroshi.io/groupId`\n- `ingress.otoroshi.io/name`\n- `ingress.otoroshi.io/targetsLoadBalancing`\n- `ingress.otoroshi.io/stripPath`\n- `ingress.otoroshi.io/enabled`\n- `ingress.otoroshi.io/userFacing`\n- `ingress.otoroshi.io/privateApp`\n- `ingress.otoroshi.io/forceHttps`\n- `ingress.otoroshi.io/maintenanceMode`\n- `ingress.otoroshi.io/buildMode`\n- `ingress.otoroshi.io/strictlyPrivate`\n- `ingress.otoroshi.io/sendOtoroshiHeadersBack`\n- `ingress.otoroshi.io/readOnly`\n- `ingress.otoroshi.io/xForwardedHeaders`\n- `ingress.otoroshi.io/overrideHost`\n- `ingress.otoroshi.io/allowHttp10`\n- `ingress.otoroshi.io/logAnalyticsOnServer`\n- `ingress.otoroshi.io/useAkkaHttpClient`\n- `ingress.otoroshi.io/useNewWSClient`\n- `ingress.otoroshi.io/tcpUdpTunneling`\n- `ingress.otoroshi.io/detectApiKeySooner`\n- `ingress.otoroshi.io/letsEncrypt`\n- `ingress.otoroshi.io/publicPatterns`\n- `ingress.otoroshi.io/privatePatterns`\n- `ingress.otoroshi.io/additionalHeaders`\n- `ingress.otoroshi.io/additionalHeadersOut`\n- `ingress.otoroshi.io/missingOnlyHeadersIn`\n- `ingress.otoroshi.io/missingOnlyHeadersOut`\n- `ingress.otoroshi.io/removeHeadersIn`\n- `ingress.otoroshi.io/removeHeadersOut`\n- `ingress.otoroshi.io/headersVerification`\n- `ingress.otoroshi.io/matchingHeaders`\n- `ingress.otoroshi.io/ipFiltering.whitelist`\n- `ingress.otoroshi.io/ipFiltering.blacklist`\n- `ingress.otoroshi.io/api.exposeApi`\n- `ingress.otoroshi.io/api.openApiDescriptorUrl`\n- `ingress.otoroshi.io/healthCheck.enabled`\n- `ingress.otoroshi.io/healthCheck.url`\n- `ingress.otoroshi.io/jwtVerifier.ids`\n- `ingress.otoroshi.io/jwtVerifier.enabled`\n- `ingress.otoroshi.io/jwtVerifier.excludedPatterns`\n- `ingress.otoroshi.io/authConfigRef`\n- `ingress.otoroshi.io/redirection.enabled`\n- `ingress.otoroshi.io/redirection.code`\n- `ingress.otoroshi.io/redirection.to`\n- `ingress.otoroshi.io/clientValidatorRef`\n- `ingress.otoroshi.io/transformerRefs`\n- `ingress.otoroshi.io/transformerConfig`\n- `ingress.otoroshi.io/accessValidator.enabled`\n- `ingress.otoroshi.io/accessValidator.excludedPatterns`\n- `ingress.otoroshi.io/accessValidator.refs`\n- `ingress.otoroshi.io/accessValidator.config`\n- `ingress.otoroshi.io/preRouting.enabled`\n- `ingress.otoroshi.io/preRouting.excludedPatterns`\n- `ingress.otoroshi.io/preRouting.refs`\n- `ingress.otoroshi.io/preRouting.config`\n- `ingress.otoroshi.io/issueCert`\n- `ingress.otoroshi.io/issueCertCA`\n- `ingress.otoroshi.io/gzip.enabled`\n- `ingress.otoroshi.io/gzip.excludedPatterns`\n- `ingress.otoroshi.io/gzip.whiteList`\n- `ingress.otoroshi.io/gzip.blackList`\n- `ingress.otoroshi.io/gzip.bufferSize`\n- `ingress.otoroshi.io/gzip.chunkedThreshold`\n- `ingress.otoroshi.io/gzip.compressionLevel`\n- `ingress.otoroshi.io/cors.enabled`\n- `ingress.otoroshi.io/cors.allowOrigin`\n- `ingress.otoroshi.io/cors.exposeHeaders`\n- `ingress.otoroshi.io/cors.allowHeaders`\n- `ingress.otoroshi.io/cors.allowMethods`\n- `ingress.otoroshi.io/cors.excludedPatterns`\n- `ingress.otoroshi.io/cors.maxAge`\n- `ingress.otoroshi.io/cors.allowCredentials`\n- `ingress.otoroshi.io/clientConfig.useCircuitBreaker`\n- `ingress.otoroshi.io/clientConfig.retries`\n- `ingress.otoroshi.io/clientConfig.maxErrors`\n- `ingress.otoroshi.io/clientConfig.retryInitialDelay`\n- `ingress.otoroshi.io/clientConfig.backoffFactor`\n- `ingress.otoroshi.io/clientConfig.connectionTimeout`\n- 
`ingress.otoroshi.io/clientConfig.idleTimeout`\n- `ingress.otoroshi.io/clientConfig.callAndStreamTimeout`\n- `ingress.otoroshi.io/clientConfig.callTimeout`\n- `ingress.otoroshi.io/clientConfig.globalTimeout`\n- `ingress.otoroshi.io/clientConfig.sampleInterval`\n- `ingress.otoroshi.io/enforceSecureCommunication`\n- `ingress.otoroshi.io/sendInfoToken`\n- `ingress.otoroshi.io/sendStateChallenge`\n- `ingress.otoroshi.io/secComHeaders.claimRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateResponseName`\n- `ingress.otoroshi.io/secComTtl`\n- `ingress.otoroshi.io/secComVersion`\n- `ingress.otoroshi.io/secComInfoTokenVersion`\n- `ingress.otoroshi.io/secComExcludedPatterns`\n- `ingress.otoroshi.io/secComSettings.size`\n- `ingress.otoroshi.io/secComSettings.secret`\n- `ingress.otoroshi.io/secComSettings.base64`\n- `ingress.otoroshi.io/secComUseSameAlgo`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.size`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64`\n- `ingress.otoroshi.io/secComAlgoInfoToken.size`\n- `ingress.otoroshi.io/secComAlgoInfoToken.secret`\n- `ingress.otoroshi.io/secComAlgoInfoToken.base64`\n- `ingress.otoroshi.io/securityExcludedPatterns`\n\nfor more informations about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n\nwith the previous example, the ingress does not define any apikey, so the route is public. If you want to enable apikeys on it, you can deploy the following descriptor\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\n ingress.otoroshi.io/group: http-app-group\n ingress.otoroshi.io/forceHttps: 'true'\n ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'\n ingress.otoroshi.io/overrideHost: 'true'\n ingress.otoroshi.io/allowHttp10: 'false'\n ingress.otoroshi.io/publicPatterns: ''\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nnow you can use an existing apikey in the `http-app-group` to access your app\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1\n```\n\n## Use Otoroshi CRDs for a better/full integration\n\nOtoroshi provides some Custom Resource Definitions for kubernetes in order to manage Otoroshi related entities in kubernetes\n\n- `service-groups`\n- `service-descriptors`\n- `apikeys`\n- `certificates`\n- `global-configs`\n- `jwt-verifiers`\n- `auth-modules`\n- `scripts`\n- `tcp-services`\n- `data-exporters`\n- `admins`\n- `teams`\n- `organizations`\n\nusing CRDs, you will be able to deploy and manager those entities from kubectl or the kubernetes api like\n\n```sh\nsudo kubectl get apikeys --all-namespaces\nsudo kubectl get service-descriptors --all-namespaces\ncurl -X GET \\\n -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \\\n -H 'Accept: application/json' -k \\\n https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1alpha1/apikeys | jq\n```\n\nYou can see this as better `Ingress` resources. 
Like any `Ingress` resource can define which controller it uses (using the `kubernetes.io/ingress.class` annotation), you can chose another kind of resource instead of `Ingress`. With Otoroshi CRDs you can even define resources like `Certificate`, `Apikey`, `AuthModules`, `JwtVerifier`, etc. It will help you to use all the power of Otoroshi while using the deployment model of kubernetes.\n \n@@@ warning\nwhen using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the descriptors deployed will overrides the one in Otoroshi datastore each time it's synced. So be careful if you use the Otoroshi UI or the API, some changes in configuration may be overriden by CRDs sync job.\n@@@\n\n### Resources examples\n\ngroup.yaml\n: @@snip [group.yaml](../snippets/crds/group.yaml) \n\napikey.yaml\n: @@snip [apikey.yaml](../snippets/crds/apikey.yaml) \n\nservice-descriptor.yaml\n: @@snip [service.yaml](../snippets/crds/service-descriptor.yaml) \n\ncertificate.yaml\n: @@snip [cert.yaml](../snippets/crds/certificate.yaml) \n\njwt.yaml\n: @@snip [jwt.yaml](../snippets/crds/jwt.yaml) \n\nauth.yaml\n: @@snip [auth.yaml](../snippets/crds/auth.yaml) \n\norganization.yaml\n: @@snip [orga.yaml](../snippets/crds/organization.yaml) \n\nteam.yaml\n: @@snip [team.yaml](../snippets/crds/team.yaml) \n\n\n### Configuration\n\nTo configure it, just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Otoroshi CRDs Controller`. Then add the following configuration for the job (with your own tweak of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"crds\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched namespaces\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": false, // sync ingresses\n \"crds\": true, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at 
`/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nyou can find a more complete example of the configuration object [here](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/plugins/jobs/kubernetes/config.scala#L134-L163)\n\n### Note about `apikeys` and `certificates` resources\n\nApikeys and Certificates are a little bit different than the other resources. They have ability to be defined without their secret part, but with an export setting so otoroshi will generate the secret parts and export the apikey or the certificate to kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)\n\nIn those resources you can define \n\n```yaml\nexportSecret: true \nsecretName: the-secret-name\n```\n\nand omit `clientSecret` for apikey or `publicKey`, `privateKey` for certificates. For certificate you will have to provide a `csr` for the certificate in order to generate it\n\n```yaml\ncsr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n - httpapps.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n```\n\nwhen apikeys are exported as kubernetes secrets, they will have the type `otoroshi.io/apikey-secret` with values `clientId` and `clientSecret`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: apikey-1\ntype: otoroshi.io/apikey-secret\ndata:\n clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n```\n\nwhen certificates are exported as kubernetes secrets, they will have the type `kubernetes.io/tls` with the standard values `tls.crt` (the full cert chain) and `tls.key` (the private key). 
For more convenience, they will also have a `cert.crt` value containing the actual certificate without the ca chain and `ca-chain.crt` containing the ca chain without the certificate.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: certificate-1\ntype: kubernetes.io/tls\ndata:\n tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA== \n```\n\n## Full CRD example\n\nthen you can deploy the previous example with better configuration level, and using mtls, apikeys, etc\n\nLet say the app looks like :\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\n// here we read the apikey to access http-app-2 from files mounted from secrets\nconst clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')\nconst clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')\n\nconst backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')\nconst backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')\nconst backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')\n\nconst clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')\nconst clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')\nconst clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')\n\nfunction callApi2() {\n return new Promise((success, failure) => {\n const options = { \n // using the implicit internal name (*.global.otoroshi.mesh) of the other service descriptor passing through otoroshi\n hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh', \n port: 433, \n path: '/', \n method: 'GET',\n headers: {\n 'Accept': 'application/json',\n 'Otoroshi-Client-Id': clientId,\n 'Otoroshi-Client-Secret': clientSecret,\n },\n cert: clientCert,\n key: clientKey,\n ca: clientCa\n }; \n let data = '';\n const req = https.request(options, (res) => { \n res.on('data', (d) => { \n data = data + d.toString('utf8');\n }); \n res.on('end', () => { \n success({ body: JSON.parse(data), res });\n }); \n res.on('error', (e) => { \n failure(e);\n }); \n }); \n req.end();\n })\n}\n\nconst options = { \n key: backendKey, \n cert: backendCert, \n ca: backendCa, \n // we want mtls behavior\n requestCert: true, \n rejectUnauthorized: true\n}; \nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {'Content-Type': 'application/json'});\n callApi2().then(resp => {\n res.write(JSON.stringify{ (\"message\": `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body })); \n });\n}).listen(433);\n```\n\nthen, the descriptors will be :\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: foo/http-app\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 443\n name: \"https\"\n volumeMounts:\n - name: apikey-volume\n # here you 
will be able to read apikey from files \n # - /var/run/secrets/kubernetes.io/apikeys/clientId\n # - /var/run/secrets/kubernetes.io/apikeys/clientSecret\n mountPath: \"/var/run/secrets/kubernetes.io/apikeys\"\n readOnly: true\n volumeMounts:\n - name: backend-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/backend/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/backend\"\n readOnly: true\n - name: client-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/client/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/client/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/client\"\n readOnly: true\n volumes:\n - name: apikey-volume\n secret:\n # here we reference the secret name from apikey http-app-2-apikey-1\n secretName: secret-2\n - name: backend-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-backend\n secretName: http-app-certificate-backend-secret\n - name: client-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-client\n secretName: http-app-certificate-client-secret\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8443\n targetPort: https\n name: https\n selector:\n run: http-app-deployment\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ServiceGroup\nmetadata:\n name: http-app-group\n annotations:\n otoroshi.io/id: http-app-group\nspec:\n description: a group to hold services about the http-app\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-apikey-1\n# this apikey can be used to access the app\nspec:\n # a secret name secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-1\n authorizedEntities: \n - group_http-app-group\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-1\n# this apikey can be used to access another app in a different group\nspec:\n # a secret name secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-2\n authorizedEntities: \n - group_http-app-2-group\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-frontend\nspec:\n description: certificate for the http-app on otorshi frontend\n autoRenew: true\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-backend\nspec:\n description: certificate for the http-app deployed on pods\n autoRenew: true\n # a secret name http-app-certificate-backend-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-backend-secret\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - http-app-service \n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-back, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-client\nspec:\n 
description: certificate for the http-app\n autoRenew: true\n secretName: http-app-certificate-client-secret\n csr:\n issuer: CN=Otoroshi Root\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-client, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ServiceDescriptor\nmetadata:\n name: http-app-service-descriptor\nspec:\n description: the service descriptor for the http app\n groups: \n - http-app-group\n forceHttps: true\n hosts:\n - httpapp.foo.bar # hostname exposed oustide of the kubernetes cluster\n # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster \n matchingRoot: /\n targets:\n - url: https://http-app-service:8443\n # alternatively, you can use serviceName and servicePort to use pods ip addresses\n # serviceName: http-app-service\n # servicePort: https\n mtlsConfig:\n # use mtls to contact the backend\n mtls: true\n certs: \n # reference the DN for the client cert\n - UID=httpapp-client, O=OtoroshiApps\n trustedCerts: \n # reference the DN for the CA cert \n - CN=Otoroshi Root\n sendOtoroshiHeadersBack: true\n xForwardedHeaders: true\n overrideHost: true\n allowHttp10: false\n publicPatterns:\n - /health\n additionalHeaders:\n x-foo: bar\n# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc ... for more informations about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n```\n\nnow with this descriptor deployed, you can access your app with a command like \n\n```sh\nCLIENT_ID=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientId}\" | base64 --decode`\nCLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientSecret}\" | base64 --decode`\ncurl -X GET https://httpapp.foo.bar/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n## Expose Otoroshi to outside world\n\nIf you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: `LoadBalancer`). You'll need to declare in your DNS settings any name that can be routed by otoroshi going to the loadbalancer endpoint (CNAME or ip addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they will provide external loadbalancers out of the box. However, if you use a bare metal kubernetes cluster, id doesn't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples in the installation section.\n\n@@@ warning\nWe don't recommand running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. 
Also, this additional layer of reverse proxy will increase call latencies.\n@@@ \n\n## Access a service from inside the k8s cluster\n\n### Using host header overriding\n\nYou can access any service referenced in otoroshi, through otoroshi from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain like :\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using dedicated services\n\nit's also possible to define services that targets otoroshi deployment (or otoroshi workers deployment) and use then as valid hosts in otoroshi services \n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-awesome-service\nspec:\n selector:\n # run: otoroshi-deployment\n # or in cluster mode\n run: otoroshi-worker-deployment\n ports:\n - port: 8080\n name: \"http\"\n targetPort: \"http\"\n - port: 8443\n name: \"https\"\n targetPort: \"https\"\n```\n\nand access it like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-namspace.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using coredns integration\n\nYou can also enable the coredns integration to simplify the flow. You can use the the following keys in the plugin config :\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"coreDnsIntegration\": true, // enable coredns integration for intra cluster calls\n \"kubeSystemNamespace\": \"kube-system\", // the namespace where coredns is deployed\n \"corednsConfigMap\": \"coredns\", // the name of the coredns configmap\n \"otoroshiServiceName\": \"otoroshi-service\", // the name of the otoroshi service, could be otoroshi-workers-service\n \"otoroshiNamespace\": \"otoroshi\", // the namespace where otoroshi is deployed\n \"clusterDomain\": \"cluster.local\", // the domain for cluster services\n ...\n }\n}\n```\n\notoroshi will patch coredns config at startup then you can call your services like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh` or `${service-name}.${service-namespace}.svc.otoroshi.local`\n\n### Using coredns with manual patching\n\nyou can also patch the coredns config manually\n\n```sh\nkubectl edit configmaps coredns -n kube-system # or your own custom config map\n```\n\nand change the `Corefile` data to add the following snippet in at the end of the file\n\n```yaml\notoroshi.mesh:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n rewrite name regex (.*)\\.otoroshi\\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n```\n\nyou can also define simpler rewrite if it suits you use case better\n\n```\nrewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n```\n\ndo not hesitate to change `otoroshi-worker-service.otoroshi` according to your own setup. If otoroshi is not in cluster mode, change it to `otoroshi-service.otoroshi`. 
If otoroshi is not deployed in the `otoroshi` namespace, change it to `otoroshi-service.the-namespace`, etc.\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh`\n\nthen you can call your service like \n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\n\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using old kube-dns system\n\nif your stuck with an old version of kubernetes, it uses kube-dns that is not supported by otoroshi, so you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the kube-dns integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"kubeDnsOperatorIntegration\": true, // enable kube-dns integration for intra cluster calls\n \"kubeDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"kubeDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"kubeDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\n### Using Openshift DNS operator\n\nOpenshift DNS operator does not allow to customize DNS configuration a lot, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the Openshift DNS operator integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"openshiftDnsOperatorIntegration\": true, // enable openshift dns operator integration for intra cluster calls\n \"openshiftDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"openshiftDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"openshiftDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\ndon't forget to update the otoroshi `ClusterRole`\n\n```yaml\n- apiGroups:\n - operator.openshift.io\n resources:\n - dnses\n verbs:\n - get\n - list\n - watch\n - update\n```\n\n## Easier integration with otoroshi-sidecar\n\nOtoroshi can help you to easily use existing services without modifications while gettings all the perks of otoroshi like apikeys, mTLS, exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pod of your deployment that will handle call coming from otoroshi and going to otoroshi. 
To enable otoroshi-sidecar, you need to deploy the following admission webhooks\n\nwebhooks.yaml\n: @@snip [webhooks.yaml](../snippets/kubernetes/kustomize/base/webhooks.yaml)\n\nthen it's quite easy to add the sidecar, just add the following label to your pod `otoroshi.io/sidecar: inject` and some annotations to tell otoroshi what certificates and apikeys to use.\n\n```yaml\nannotations:\n otoroshi.io/sidecar-apikey: backend-apikey\n otoroshi.io/sidecar-backend-cert: backend-cert\n otoroshi.io/sidecar-client-cert: oto-client-cert\n otoroshi.io/token-secret: secret\n otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps\n```\n\nnow you can just call you otoroshi handled apis from inside your pod like `curl http://my-service.namespace.otoroshi.mesh/api` without passing any apikey or client certificate and the sidecar will handle everything for you. Same thing for call from otoroshi to your pod, everything will be done in mTLS fashion with apikeys and otoroshi exchange protocol\n\nhere is a full example\n\nsidecar.yaml\n: @@snip [sidecar.yaml](../snippets/kubernetes/kustomize/base/sidecar.yaml)\n\n@@@ warning\nPlease avoid to use port `80` for your pod as it's the default port to access otoroshi from your pod and the call will be redirect to the sidecar via an iptables rule\n@@@\n\n## Daikoku integration\n\nIt is possible to easily integrate daikoku generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and setup the integration mode to `Automatic`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen when a user subscribe for an apikey, he will only see an integration token\n\n@@@ div { .centered-img }\n\n@@@\n\nthen just create an ApiKey manifest with this token and your good to go \n\n```yaml\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-3\nspec:\n exportSecret: true \n secretName: secret-3\n daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy\n```\n\n"
+ "content": "# Kubernetes\n\nStarting at version 1.5.0, Otoroshi provides a native Kubernetes support. Multiple otoroshi jobs (that are actually kubernetes controllers) are provided in order to\n\n- sync kubernetes secrets of type `kubernetes.io/tls` to otoroshi certificates\n- act as a standard ingress controller (supporting `Ingress` objects)\n- provide Custom Resource Definitions (CRDs) to manage Otoroshi entities from Kubernetes and act as an ingress controller with its own resources\n\n## Installing otoroshi on your kubernetes cluster\n\n@@@ warning\nYou need to have cluster admin privileges to install otoroshi and its service account, role mapping and CRDs on a kubernetes cluster. We also advise you to create a dedicated namespace (you can name it `otoroshi` for example) to install otoroshi\n@@@\n\nIf you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.\n\nYou can also create a `kustomization.yaml` file with a remote base\n\n```yaml\nbases:\n- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v1.5.0-alpha.6\n```\n\nThen deploy it with `kubectl apply -k ./overlays/myoverlay`. \n\nYou can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster\n\n```sh\nhelm repo add otoroshi https://maif.github.io/otoroshi/helm\nhelm install my-otoroshi otoroshi/otoroshi\n```\n\nBelow, you will find example of deployment. Do not hesitate to adapt them to your needs. Those descriptors have value placeholders that you will need to replace with actual values like \n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: ${domain}\n```\n\nyou will have to edit it to make it look like\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'apis.my.domain'\n```\n\nif you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of otoroshi\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: maif/otoroshi:1.5.0-alpha.6-jdk11\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create several secrets for each placeholder, mount them to the otoroshi container then use their file path as value\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nyou can use the same trick in the config. 
If you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.\n\nYou can also create a `kustomization.yaml` file with a remote base\n\n```yaml\nbases:\n- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v1.5.0-alpha.6\n```\n\nThen deploy it with `kubectl apply -k ./overlays/myoverlay`. \n\nYou can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster\n\n```sh\nhelm repo add otoroshi https://maif.github.io/otoroshi/helm\nhelm install my-otoroshi otoroshi/otoroshi\n```\n\nBelow, you will find examples of deployments. Do not hesitate to adapt them to your needs. Those descriptors have value placeholders that you will need to replace with actual values like \n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: ${domain}\n```\n\nyou will have to edit it to make it look like\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'apis.my.domain'\n```\n\nif you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of otoroshi\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: maif/otoroshi:1.5.0-alpha.6-jdk11\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create a secret for each placeholder, mount them in the otoroshi container, then use their file paths as values\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nyou can use the same trick in the config file itself\n\n
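For instance, here is a minimal sketch of the `otoroshi-config` secret above with the domain read from a mounted secret file (the path is the placeholder from the previous example)\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"file:///the/path/of/the/secret/file\"\n }\n```\n\n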
### Note on bare metal kubernetes cluster installation\n\n@@@ note\nBare metal kubernetes clusters don't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize the examples below.\n@@@\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or any other reverse proxy) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@\n\n### Common manifests\n\nthe following manifests are always needed. They create otoroshi CRDs, tokens, roles, etc. Redis deployment is not mandatory, it's just an example. You can use your own existing setup.\n\nrbac.yaml\n: @@snip [rbac.yaml](../snippets/kubernetes/kustomize/base/rbac.yaml) \n\ncrds.yaml\n: @@snip [crds.yaml](../snippets/kubernetes/kustomize/base/crds.yaml) \n\nredis.yaml\n: @@snip [redis.yaml](../snippets/kubernetes/kustomize/base/redis.yaml) \n\n### Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type `LoadBalancer` to expose otoroshi to the rest of the world. You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple/dns.example) \n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/dns.example) \n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have one otoroshi instance on each kubernetes node (with the `otoroshi-kind: instance` label) with redis persistence. The otoroshi instances are exposed as `hostPort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/dns.example) \n\n### Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas of an otoroshi leader connected to a redis instance and 2 replicas of an otoroshi worker connected to the leader. We use a service of type `LoadBalancer` to expose the otoroshi leader/worker to the rest of the world. You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster\n\nHere we have 2 replicas of the otoroshi leader connected to the same redis instance and 2 replicas of the otoroshi worker. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster using DaemonSet\n\nHere we have 1 otoroshi leader instance on each kubernetes node (with the `otoroshi-kind: leader` label) connected to the same redis instance and 1 otoroshi worker instance on each kubernetes node (with the `otoroshi-kind: worker` label). The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": true, // sync ingresses\n \"crds\": false, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependent deployments\n \"templates\": { // templates for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\n
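For instance, when otoroshi runs inside the cluster with the service account from the common manifests above, a minimal configuration relying on those in-cluster defaults could look like (a sketch, not an exhaustive configuration)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"ingresses\": true,\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\n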
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": true, // sync ingresses\n \"crds\": false, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nNow you can deploy your first service ;)\n\n### Deploy an ingress route\n\nnow let's say you want to deploy an http service and route it to the outside world through otoroshi\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: kennethreitz/httpbin\n imagePullPolicy: 
IfNotPresent\n name: http-app\n ports:\n - containerPort: 80\n name: \"http\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8080\n targetPort: http\n name: http\n selector:\n run: http-app-deployment\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nonce deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n### Support for Ingress Classes\n\nSince Kubernetes 1.18, you can use the `IngressClass` kind of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest `kind`. To use it, configure the Ingress job to match your controller\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"otoroshi.io/ingress-controller\"],\n ...\n }\n}\n```\n\nthen you have to deploy an `IngressClass` to declare Otoroshi as an ingress controller\n\n```yaml\napiVersion: \"networking.k8s.io/v1beta1\"\nkind: \"IngressClass\"\nmetadata:\n name: \"otoroshi-ingress-controller\"\nspec:\n controller: \"otoroshi.io/ingress-controller\"\n parameters:\n apiGroup: \"proxy.otoroshi.io/v1alpha\"\n kind: \"IngressParameters\"\n name: \"otoroshi-ingress-controller\"\n```\n\nand use it in your `Ingress`\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\nspec:\n ingressClassName: otoroshi-ingress-controller\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\n### Use multiple ingress controllers\n\nIt is of course possible to use multiple ingress controllers at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation `kubernetes.io/ingress.class`. By default, otoroshi reacts to the class `otoroshi`, but you can make it the default ingress controller with the following config\n\n```json\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"*\"],\n ...\n }\n}\n```
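\n\nTo check which ingress classes are already declared in your cluster (requires Kubernetes 1.18+), you can simply run:\n\n```sh\nkubectl get ingressclass\n```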
\n\n### Supported annotations\n\nIf you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customization, just go to the CRDs part. The following annotations are supported:\n\n- `ingress.otoroshi.io/groups`\n- `ingress.otoroshi.io/group`\n- `ingress.otoroshi.io/groupId`\n- `ingress.otoroshi.io/name`\n- `ingress.otoroshi.io/targetsLoadBalancing`\n- `ingress.otoroshi.io/stripPath`\n- `ingress.otoroshi.io/enabled`\n- `ingress.otoroshi.io/userFacing`\n- `ingress.otoroshi.io/privateApp`\n- `ingress.otoroshi.io/forceHttps`\n- `ingress.otoroshi.io/maintenanceMode`\n- `ingress.otoroshi.io/buildMode`\n- `ingress.otoroshi.io/strictlyPrivate`\n- `ingress.otoroshi.io/sendOtoroshiHeadersBack`\n- `ingress.otoroshi.io/readOnly`\n- `ingress.otoroshi.io/xForwardedHeaders`\n- `ingress.otoroshi.io/overrideHost`\n- `ingress.otoroshi.io/allowHttp10`\n- `ingress.otoroshi.io/logAnalyticsOnServer`\n- `ingress.otoroshi.io/useAkkaHttpClient`\n- `ingress.otoroshi.io/useNewWSClient`\n- `ingress.otoroshi.io/tcpUdpTunneling`\n- `ingress.otoroshi.io/detectApiKeySooner`\n- `ingress.otoroshi.io/letsEncrypt`\n- `ingress.otoroshi.io/publicPatterns`\n- `ingress.otoroshi.io/privatePatterns`\n- `ingress.otoroshi.io/additionalHeaders`\n- `ingress.otoroshi.io/additionalHeadersOut`\n- `ingress.otoroshi.io/missingOnlyHeadersIn`\n- `ingress.otoroshi.io/missingOnlyHeadersOut`\n- `ingress.otoroshi.io/removeHeadersIn`\n- `ingress.otoroshi.io/removeHeadersOut`\n- `ingress.otoroshi.io/headersVerification`\n- `ingress.otoroshi.io/matchingHeaders`\n- `ingress.otoroshi.io/ipFiltering.whitelist`\n- `ingress.otoroshi.io/ipFiltering.blacklist`\n- `ingress.otoroshi.io/api.exposeApi`\n- `ingress.otoroshi.io/api.openApiDescriptorUrl`\n- `ingress.otoroshi.io/healthCheck.enabled`\n- `ingress.otoroshi.io/healthCheck.url`\n- `ingress.otoroshi.io/jwtVerifier.ids`\n- `ingress.otoroshi.io/jwtVerifier.enabled`\n- `ingress.otoroshi.io/jwtVerifier.excludedPatterns`\n- `ingress.otoroshi.io/authConfigRef`\n- `ingress.otoroshi.io/redirection.enabled`\n- `ingress.otoroshi.io/redirection.code`\n- `ingress.otoroshi.io/redirection.to`\n- `ingress.otoroshi.io/clientValidatorRef`\n- `ingress.otoroshi.io/transformerRefs`\n- `ingress.otoroshi.io/transformerConfig`\n- `ingress.otoroshi.io/accessValidator.enabled`\n- `ingress.otoroshi.io/accessValidator.excludedPatterns`\n- `ingress.otoroshi.io/accessValidator.refs`\n- `ingress.otoroshi.io/accessValidator.config`\n- `ingress.otoroshi.io/preRouting.enabled`\n- `ingress.otoroshi.io/preRouting.excludedPatterns`\n- `ingress.otoroshi.io/preRouting.refs`\n- `ingress.otoroshi.io/preRouting.config`\n- `ingress.otoroshi.io/issueCert`\n- `ingress.otoroshi.io/issueCertCA`\n- `ingress.otoroshi.io/gzip.enabled`\n- `ingress.otoroshi.io/gzip.excludedPatterns`\n- `ingress.otoroshi.io/gzip.whiteList`\n- `ingress.otoroshi.io/gzip.blackList`\n- `ingress.otoroshi.io/gzip.bufferSize`\n- `ingress.otoroshi.io/gzip.chunkedThreshold`\n- `ingress.otoroshi.io/gzip.compressionLevel`\n- `ingress.otoroshi.io/cors.enabled`\n- `ingress.otoroshi.io/cors.allowOrigin`\n- `ingress.otoroshi.io/cors.exposeHeaders`\n- `ingress.otoroshi.io/cors.allowHeaders`\n- `ingress.otoroshi.io/cors.allowMethods`\n- `ingress.otoroshi.io/cors.excludedPatterns`\n- `ingress.otoroshi.io/cors.maxAge`\n- `ingress.otoroshi.io/cors.allowCredentials`\n- `ingress.otoroshi.io/clientConfig.useCircuitBreaker`\n- `ingress.otoroshi.io/clientConfig.retries`\n- `ingress.otoroshi.io/clientConfig.maxErrors`\n- `ingress.otoroshi.io/clientConfig.retryInitialDelay`\n- `ingress.otoroshi.io/clientConfig.backoffFactor`\n- `ingress.otoroshi.io/clientConfig.connectionTimeout`\n- `ingress.otoroshi.io/clientConfig.idleTimeout`\n- `ingress.otoroshi.io/clientConfig.callAndStreamTimeout`\n- `ingress.otoroshi.io/clientConfig.callTimeout`\n- `ingress.otoroshi.io/clientConfig.globalTimeout`\n- `ingress.otoroshi.io/clientConfig.sampleInterval`\n- `ingress.otoroshi.io/enforceSecureCommunication`\n- `ingress.otoroshi.io/sendInfoToken`\n- `ingress.otoroshi.io/sendStateChallenge`\n- `ingress.otoroshi.io/secComHeaders.claimRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateResponseName`\n- `ingress.otoroshi.io/secComTtl`\n- `ingress.otoroshi.io/secComVersion`\n- `ingress.otoroshi.io/secComInfoTokenVersion`\n- `ingress.otoroshi.io/secComExcludedPatterns`\n- `ingress.otoroshi.io/secComSettings.size`\n- `ingress.otoroshi.io/secComSettings.secret`\n- `ingress.otoroshi.io/secComSettings.base64`\n- `ingress.otoroshi.io/secComUseSameAlgo`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.size`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64`\n- `ingress.otoroshi.io/secComAlgoInfoToken.size`\n- `ingress.otoroshi.io/secComAlgoInfoToken.secret`\n- `ingress.otoroshi.io/secComAlgoInfoToken.base64`\n- `ingress.otoroshi.io/securityExcludedPatterns`
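\n\nFor example, to toggle one of these flags on an existing ingress (a sketch reusing the ingress name from the example above):\n\n```sh\nkubectl annotate ingress http-app-ingress ingress.otoroshi.io/forceHttps=true --overwrite\n```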
\n\nfor more information about them, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n\nwith the previous example, the ingress does not define any apikey, so the route is public. If you want to enable apikeys on it, you can deploy the following descriptor\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\n ingress.otoroshi.io/group: http-app-group\n ingress.otoroshi.io/forceHttps: 'true'\n ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'\n ingress.otoroshi.io/overrideHost: 'true'\n ingress.otoroshi.io/allowHttp10: 'false'\n ingress.otoroshi.io/publicPatterns: ''\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nnow you can use an existing apikey in the `http-app-group` to access your app\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1\n```\n\n## Use Otoroshi CRDs for a better/full integration\n\nOtoroshi provides some Custom Resource Definitions for kubernetes in order to manage Otoroshi related entities in kubernetes\n\n- `service-groups`\n- `service-descriptors`\n- `apikeys`\n- `certificates`\n- `global-configs`\n- `jwt-verifiers`\n- `auth-modules`\n- `scripts`\n- `tcp-services`\n- `data-exporters`\n- `admins`\n- `teams`\n- `organizations`\n\nusing CRDs, you will be able to deploy and manage those entities from kubectl or the kubernetes api like\n\n```sh\nsudo kubectl get apikeys --all-namespaces\nsudo kubectl get service-descriptors --all-namespaces\ncurl -X GET \\\n -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \\\n -H 'Accept: application/json' -k \\\n https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1alpha1/apikeys | jq\n```\n\nYou can see this as better `Ingress` resources. 
Just like any `Ingress` resource can define which controller it uses (using the `kubernetes.io/ingress.class` annotation), you can choose another kind of resource instead of `Ingress`. With Otoroshi CRDs you can even define resources like `Certificate`, `Apikey`, `AuthModules`, `JwtVerifier`, etc. It will help you use all the power of Otoroshi while using the deployment model of kubernetes.\n \n@@@ warning\nwhen using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the deployed descriptors will override the ones in the Otoroshi datastore each time it's synced. So be careful if you use the Otoroshi UI or the API, as some configuration changes may be overridden by the CRDs sync job.\n@@@\n\n### Resources examples\n\ngroup.yaml\n: @@snip [group.yaml](../snippets/crds/group.yaml) \n\napikey.yaml\n: @@snip [apikey.yaml](../snippets/crds/apikey.yaml) \n\nservice-descriptor.yaml\n: @@snip [service.yaml](../snippets/crds/service-descriptor.yaml) \n\ncertificate.yaml\n: @@snip [cert.yaml](../snippets/crds/certificate.yaml) \n\njwt.yaml\n: @@snip [jwt.yaml](../snippets/crds/jwt.yaml) \n\nauth.yaml\n: @@snip [auth.yaml](../snippets/crds/auth.yaml) \n\norganization.yaml\n: @@snip [orga.yaml](../snippets/crds/organization.yaml) \n\nteam.yaml\n: @@snip [team.yaml](../snippets/crds/team.yaml) \n\n\n### Configuration\n\nTo configure it, just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Otoroshi CRDs Controller`. Then add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"crds\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": false, // sync ingresses\n \"crds\": true, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.
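\n\nFrom inside the otoroshi pod, you can quickly check that those defaults give access to the kubernetes api (a sketch using the mounted service account):\n\n```sh\nTOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\ncurl -H \"Authorization: Bearer $TOKEN\" \\\n --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \\\n \"https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/version\"\n```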
\n\nyou can find a more complete example of the configuration object [here](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/plugins/jobs/kubernetes/config.scala#L134-L163)\n\n### Note about `apikeys` and `certificates` resources\n\nApikeys and Certificates are a little bit different from the other resources. They can be defined without their secret parts, with an export setting, so otoroshi will generate the secret parts and export the apikey or the certificate to a kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)\n\nIn those resources you can define \n\n```yaml\nexportSecret: true \nsecretName: the-secret-name\n```\n\nand omit `clientSecret` for apikeys or `publicKey`/`privateKey` for certificates. For certificates you will have to provide a `csr` in order to generate them\n\n```yaml\ncsr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n - httpapps.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n```\n\nwhen apikeys are exported as kubernetes secrets, they will have the type `otoroshi.io/apikey-secret` with values `clientId` and `clientSecret`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: apikey-1\ntype: otoroshi.io/apikey-secret\ndata:\n clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n```\n\nwhen certificates are exported as kubernetes secrets, they will have the type `kubernetes.io/tls` with the standard values `tls.crt` (the full cert chain) and `tls.key` (the private key). For more convenience, they will also have a `cert.crt` value containing the actual certificate without the ca chain and `ca-chain.crt` containing the ca chain without the certificate.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: certificate-1\ntype: kubernetes.io/tls\ndata:\n tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA== \n```
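\n\nYou can then inspect an exported certificate directly from its secret (a sketch, assuming the secret above and the `openssl` cli):\n\n```sh\nkubectl get secret certificate-1 -o jsonpath='{.data.tls\\.crt}' | base64 --decode | openssl x509 -noout -subject -dates\n```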
\n\n## Full CRD example\n\nthen you can deploy the previous example with a better configuration level, using mtls, apikeys, etc\n\nLet's say the app looks like:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\n// here we read the apikey to access http-app-2 from files mounted from secrets\nconst clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')\nconst clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')\n\nconst backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')\nconst backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')\nconst backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')\n\nconst clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')\nconst clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')\nconst clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')\n\nfunction callApi2() {\n return new Promise((success, failure) => {\n const options = { \n // using the implicit internal name (*.global.otoroshi.mesh) of the other service descriptor passing through otoroshi\n hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh', \n port: 443, \n path: '/', \n method: 'GET',\n headers: {\n 'Accept': 'application/json',\n 'Otoroshi-Client-Id': clientId,\n 'Otoroshi-Client-Secret': clientSecret,\n },\n cert: clientCert,\n key: clientKey,\n ca: clientCa\n }; \n let data = '';\n const req = https.request(options, (res) => { \n res.on('data', (d) => { \n data = data + d.toString('utf8');\n }); \n res.on('end', () => { \n success({ body: JSON.parse(data), res });\n }); \n }); \n // request errors are emitted on the request object, not the response\n req.on('error', (e) => { \n failure(e);\n }); \n req.end();\n })\n}\n\nconst options = { \n key: backendKey, \n cert: backendCert, \n ca: backendCa, \n // we want mtls behavior\n requestCert: true, \n rejectUnauthorized: true\n}; \nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {'Content-Type': 'application/json'});\n callApi2().then(resp => {\n res.end(JSON.stringify({ message: `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body })); \n });\n}).listen(443);\n```\n\nthen, the descriptors will be:\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: foo/http-app\n imagePullPolicy: IfNotPresent\n name: http-app\n ports:\n - containerPort: 443\n name: \"https\"\n volumeMounts:\n - name: apikey-volume\n # here you 
will be able to read apikey from files \n # - /var/run/secrets/kubernetes.io/apikeys/clientId\n # - /var/run/secrets/kubernetes.io/apikeys/clientSecret\n mountPath: \"/var/run/secrets/kubernetes.io/apikeys\"\n readOnly: true\n - name: backend-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/backend/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/backend\"\n readOnly: true\n - name: client-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/client/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/client/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/client\"\n readOnly: true\n volumes:\n - name: apikey-volume\n secret:\n # here we reference the secret name from apikey http-app-2-apikey-1\n secretName: secret-2\n - name: backend-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-backend\n secretName: http-app-certificate-backend-secret\n - name: client-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-client\n secretName: http-app-certificate-client-secret\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8443\n targetPort: https\n name: https\n selector:\n run: http-app-deployment\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ServiceGroup\nmetadata:\n name: http-app-group\n annotations:\n otoroshi.io/id: http-app-group\nspec:\n description: a group to hold services about the http-app\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-apikey-1\n# this apikey can be used to access the app\nspec:\n # a secret named secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-1\n authorizedEntities: \n - group_http-app-group\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-1\n# this apikey can be used to access another app in a different group\nspec:\n # a secret named secret-2 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-2\n authorizedEntities: \n - group_http-app-2-group\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-frontend\nspec:\n description: certificate for the http-app on otoroshi frontend\n autoRenew: true\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-backend\nspec:\n description: certificate for the http-app deployed on pods\n autoRenew: true\n # a secret named http-app-certificate-backend-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-backend-secret\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - http-app-service \n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-back, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-client\nspec:\n 
description: client certificate for the http-app\n autoRenew: true\n # a secret named http-app-certificate-client-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-client-secret\n csr:\n issuer: CN=Otoroshi Root\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-client, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ServiceDescriptor\nmetadata:\n name: http-app-service-descriptor\nspec:\n description: the service descriptor for the http app\n groups: \n - http-app-group\n forceHttps: true\n hosts:\n - httpapp.foo.bar # hostname exposed outside of the kubernetes cluster\n # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster \n matchingRoot: /\n targets:\n - url: https://http-app-service:8443\n # alternatively, you can use serviceName and servicePort to use pods ip addresses\n # serviceName: http-app-service\n # servicePort: https\n mtlsConfig:\n # use mtls to contact the backend\n mtls: true\n certs: \n # reference the DN for the client cert\n - UID=httpapp-client, O=OtoroshiApps\n trustedCerts: \n # reference the DN for the CA cert \n - CN=Otoroshi Root\n sendOtoroshiHeadersBack: true\n xForwardedHeaders: true\n overrideHost: true\n allowHttp10: false\n publicPatterns:\n - /health\n additionalHeaders:\n x-foo: bar\n# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc ... for more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n```\n\nnow with this descriptor deployed, you can access your app with a command like \n\n```sh\nCLIENT_ID=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientId}\" | base64 --decode`\nCLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientSecret}\" | base64 --decode`\ncurl -X GET https://httpapp.foo.bar/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n## Expose Otoroshi to the outside world\n\nIf you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: `LoadBalancer`). You'll need to declare, in your DNS settings, any name that should be routed by otoroshi and point it to the loadbalancer endpoint (CNAME or ip addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they provide external loadbalancers out of the box. However, a bare metal kubernetes cluster doesn't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize the examples in the installation section.\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@ 
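\n\nOnce the service has an external address, you can fetch it to create your DNS records (a sketch, assuming otoroshi lives in the `otoroshi` namespace behind a service named `otoroshi-service`):\n\n```sh\n# the EXTERNAL-IP column (ip or hostname) is what your DNS entries should point to\nkubectl get service otoroshi-service -n otoroshi\n```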
\n\n## Access a service from inside the k8s cluster\n\n### Using host header overriding\n\nYou can access any service referenced in otoroshi, through otoroshi, from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain like:\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using dedicated services\n\nit's also possible to define services that target the otoroshi deployment (or the otoroshi workers deployment) and use them as valid hosts in otoroshi services \n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-awesome-service\nspec:\n selector:\n # run: otoroshi-deployment\n # or in cluster mode\n run: otoroshi-worker-deployment\n ports:\n - port: 8080\n name: \"http\"\n targetPort: \"http\"\n - port: 8443\n name: \"https\"\n targetPort: \"https\"\n```\n\nand access it like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-namespace.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using coredns integration\n\nYou can also enable the coredns integration to simplify the flow. You can use the following keys in the plugin config:\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"coreDnsIntegration\": true, // enable coredns integration for intra cluster calls\n \"kubeSystemNamespace\": \"kube-system\", // the namespace where coredns is deployed\n \"corednsConfigMap\": \"coredns\", // the name of the coredns configmap\n \"otoroshiServiceName\": \"otoroshi-service\", // the name of the otoroshi service, could be otoroshi-workers-service\n \"otoroshiNamespace\": \"otoroshi\", // the namespace where otoroshi is deployed\n \"clusterDomain\": \"cluster.local\", // the domain for cluster services\n ...\n }\n}\n```\n\notoroshi will patch the coredns config at startup, then you can call your services like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh` or `${service-name}.${service-namespace}.svc.otoroshi.local`\n\n### Using coredns with manual patching\n\nyou can also patch the coredns config manually\n\n```sh\nkubectl edit configmaps coredns -n kube-system # or your own custom config map\n```\n\nand change the `Corefile` data to add the following snippet at the end of the file\n\n```yaml\notoroshi.mesh:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n rewrite name regex (.*)\\.otoroshi\\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n```\n\nyou can also define a simpler rewrite if it suits your use case better\n\n```\nrewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n```\n\ndo not hesitate to change `otoroshi-worker-service.otoroshi` according to your own setup. If otoroshi is not in cluster mode, change it to `otoroshi-service.otoroshi`. If otoroshi is not deployed in the `otoroshi` namespace, change it to `otoroshi-service.the-namespace`, etc.
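\n\nYou can check that the rewrite is in place from any pod (a sketch, the service and namespace names are hypothetical):\n\n```sh\nkubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup my-service.my-namespace.otoroshi.mesh\n```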
\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh`\n\nthen you can call your service like \n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\n\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using old kube-dns system\n\nif you're stuck with an old version of kubernetes that uses kube-dns, which is not supported by otoroshi, you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the kube-dns integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"kubeDnsOperatorIntegration\": true, // enable kube-dns integration for intra cluster calls\n \"kubeDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"kubeDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"kubeDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\n### Using Openshift DNS operator\n\nThe Openshift DNS operator does not allow much customization of the DNS configuration, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the Openshift DNS operator integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"openshiftDnsOperatorIntegration\": true, // enable openshift dns operator integration for intra cluster calls\n \"openshiftDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"openshiftDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"openshiftDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\ndon't forget to update the otoroshi `ClusterRole`\n\n```yaml\n- apiGroups:\n - operator.openshift.io\n resources:\n - dnses\n verbs:\n - get\n - list\n - watch\n - update\n```\n\n## Easier integration with otoroshi-sidecar\n\nOtoroshi can help you to easily use existing services without modifications while getting all the perks of otoroshi like apikeys, mTLS, exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pod of your deployment that will handle calls coming from and going to otoroshi. 
To enable the otoroshi-sidecar, you need to deploy the following admission webhooks\n\nwebhooks.yaml\n: @@snip [webhooks.yaml](../snippets/kubernetes/kustomize/base/webhooks.yaml)\n\nthen it's quite easy to add the sidecar: just add the label `otoroshi.io/sidecar: inject` to your pod and some annotations to tell otoroshi what certificates and apikeys to use.\n\n```yaml\nannotations:\n otoroshi.io/sidecar-apikey: backend-apikey\n otoroshi.io/sidecar-backend-cert: backend-cert\n otoroshi.io/sidecar-client-cert: oto-client-cert\n otoroshi.io/token-secret: secret\n otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps\n```\n\nnow you can just call your otoroshi handled apis from inside your pod like `curl http://my-service.namespace.otoroshi.mesh/api` without passing any apikey or client certificate and the sidecar will handle everything for you. Same thing for calls from otoroshi to your pod: everything will be done in an mTLS fashion with apikeys and the otoroshi exchange protocol\n\nhere is a full example\n\nsidecar.yaml\n: @@snip [sidecar.yaml](../snippets/kubernetes/kustomize/base/sidecar.yaml)\n\n@@@ warning\nPlease avoid using port `80` for your pod as it's the default port used to access otoroshi from your pod and calls to it will be redirected to the sidecar via an iptables rule\n@@@\n\n## Daikoku integration\n\nIt is possible to easily integrate daikoku generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and set the integration mode to `Automatic`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen when a user subscribes to an apikey, they will only see an integration token\n\n@@@ div { .centered-img }\n\n@@@\n\nthen just create an ApiKey manifest with this token and you're good to go \n\n```yaml\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-3\nspec:\n exportSecret: true \n secretName: secret-3\n daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy\n```\n\n"
},
{
"name": "other.md",
@@ -137,7 +137,7 @@
"id": "/getotoroshi/fromdocker.md",
"url": "/getotoroshi/fromdocker.html",
"title": "From docker",
- "content": "# From docker\n\nIf you're a Docker aficionado, Otoroshi is provided as a Docker image that your can pull directly from Official repos.\n\nfirst, fetch the last Docker image of Otoroshi :\n\n```sh\ndocker pull maif/otoroshi:1.5.0-dev\n# or \ndocker pull maif/otoroshi:latest\n# or \ndocker pull maif/otoroshi:jdk8-1.5.0-dev\n# or \ndocker pull maif/otoroshi:jdk11-1.5.0-dev\n# or \ndocker pull maif/otoroshi:jdk12-1.5.0-dev\n# or \ndocker pull maif/otoroshi:jdk13-1.5.0-dev\n# or \ndocker pull maif/otoroshi:jdk14-1.5.0-dev\n```"
+ "content": "# From docker\n\nIf you're a Docker aficionado, Otoroshi is provided as a Docker image that your can pull directly from Official repos.\n\nfirst, fetch the last Docker image of Otoroshi :\n\n```sh\ndocker pull maif/otoroshi:1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:latest\n# or \ndocker pull maif/otoroshi:jdk8-1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:jdk11-1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:jdk12-1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:jdk13-1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:jdk14-1.5.0-alpha.6\n```"
},
{
"name": "fromsources.md",
@@ -158,7 +158,7 @@
"id": "/index.md",
"url": "/index.html",
"title": "Otoroshi",
- "content": "# Otoroshi\n\n**Otoroshi** is a layer of lightweight api management on top of a modern http reverse proxy written in Scala and developped by the MAIF OSS team that can handle all the calls to and between your microservices without service locator and let you change configuration dynamicaly at runtime.\n\n\n> *The Otoroshi is a large hairy monster that tends to lurk on the top of the torii gate in front of Shinto shrines. It's a hostile creature, but also said to be the guardian of the shrine and is said to leap down from the top of the gate to devour those who approach the shrine for only self-serving purposes.*\n\n@@@ div { .centered-img }\n[![Build Status](https://travis-ci.org/MAIF/otoroshi.svg?branch=master)](https://travis-ci.org/MAIF/otoroshi) [![Join the chat at https://gitter.im/MAIF/otoroshi](https://badges.gitter.im/MAIF/otoroshi.svg)](https://gitter.im/MAIF/otoroshi?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [ ![Download](https://img.shields.io/github/release/MAIF/otoroshi.svg) ](hhttps://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi.jar)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n## Installation\n\nYou can download the latest build of Otoroshi as a [fat jar](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi.jar), as a [zip package](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi-dist.zip) or as a @ref:[docker image](./getotoroshi/fromdocker.md).\n\nYou can install and run Otoroshi with this little bash snippet\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi.jar'\njava -jar otoroshi.jar\n```\n\nor using docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi:1.5.0-dev\n```\n\nnow open your browser to http://otoroshi.oto.tools:8080/, **log in with the credential generated in the logs** and explore by yourself, if you want better instructions, just go to the @ref:[Quick Start](./quickstart.md) or directly to the @ref:[installation instructions](./getotoroshi/index.md)\n\n## Documentation\n\n* @ref:[About Otoroshi](./about.md)\n* @ref:[Architecture](./archi.md)\n* @ref:[Features](./features.md)\n* @ref:[Try Otoroshi in 5 minutes](./quickstart.md)\n* @ref:[Get Otoroshi](./getotoroshi/index.md)\n* @ref:[First run](./firstrun/index.md)\n* @ref:[Setup Otoroshi](./setup/index.md)\n* @ref:[Using Otoroshi](./usage/index.md)\n* @ref:[Third party Integrations](./integrations/index.md)\n* @ref:[Detailed topics](./topics/index.md)\n* @ref:[Admin REST API](./api.md)\n* @ref:[Deploy to production](./deploy/index.md)\n* @ref:[Developing Otoroshi](./dev.md)\n\n## Discussion\n\nJoin the [Otoroshi](https://gitter.im/MAIF/otoroshi) channel on the [MAIF Gitter](https://gitter.im/MAIF)\n\n## Sources\n\nThe sources of Otoroshi are available on [Github](https://github.com/MAIF/otoroshi).\n\n## Logo\n\nYou can find the official Otoroshi logo [on GitHub](https://github.com/MAIF/otoroshi/blob/master/resources/otoroshi-logo.png). 
The Otoroshi logo has been created by François Galioto ([@fgalioto](https://twitter.com/fgalioto))\n\n## Changelog\n\nEvery release, along with the migration instructions, is documented on the [Github Releases](https://github.com/MAIF/otoroshi/releases) page.\n\n## Patrons\n\nThe work on Otoroshi was funded by MAIF with the help of the community.\n\n## Licence\n\nOtoroshi is Open Source and available under the [Apache 2 License](https://opensource.org/licenses/Apache-2.0)\n\n@@@ index\n\n* [About Otoroshi](about.md)\n* [Architecture](archi.md)\n* [Features](features.md)\n* [Quickstart](quickstart.md)\n* [Get otoroshi](getotoroshi/index.md)\n* [First run](firstrun/index.md)\n* [Setup](setup/index.md)\n* [Using Otoroshi](usage/index.md)\n* [Integrations](integrations/index.md)\n* [Detailed topics](topics/index.md)\n* [Admin REST API](api.md)\n* [Deploy to production](deploy/index.md)\n* [Developing Otoroshi](./dev.md)\n\n@@@\n"
+ "content": "# Otoroshi\n\n**Otoroshi** is a layer of lightweight api management on top of a modern http reverse proxy written in Scala and developped by the MAIF OSS team that can handle all the calls to and between your microservices without service locator and let you change configuration dynamicaly at runtime.\n\n\n> *The Otoroshi is a large hairy monster that tends to lurk on the top of the torii gate in front of Shinto shrines. It's a hostile creature, but also said to be the guardian of the shrine and is said to leap down from the top of the gate to devour those who approach the shrine for only self-serving purposes.*\n\n@@@ div { .centered-img }\n[![Build Status](https://travis-ci.org/MAIF/otoroshi.svg?branch=master)](https://travis-ci.org/MAIF/otoroshi) [![Join the chat at https://gitter.im/MAIF/otoroshi](https://badges.gitter.im/MAIF/otoroshi.svg)](https://gitter.im/MAIF/otoroshi?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [ ![Download](https://img.shields.io/github/release/MAIF/otoroshi.svg) ](hhttps://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi.jar)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n## Installation\n\nYou can download the latest build of Otoroshi as a [fat jar](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi.jar), as a [zip package](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi-dist.zip) or as a @ref:[docker image](./getotoroshi/fromdocker.md).\n\nYou can install and run Otoroshi with this little bash snippet\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi.jar'\njava -jar otoroshi.jar\n```\n\nor using docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi:1.5.0-alpha.6\n```\n\nnow open your browser to http://otoroshi.oto.tools:8080/, **log in with the credential generated in the logs** and explore by yourself, if you want better instructions, just go to the @ref:[Quick Start](./quickstart.md) or directly to the @ref:[installation instructions](./getotoroshi/index.md)\n\n## Documentation\n\n* @ref:[About Otoroshi](./about.md)\n* @ref:[Architecture](./archi.md)\n* @ref:[Features](./features.md)\n* @ref:[Try Otoroshi in 5 minutes](./quickstart.md)\n* @ref:[Get Otoroshi](./getotoroshi/index.md)\n* @ref:[First run](./firstrun/index.md)\n* @ref:[Setup Otoroshi](./setup/index.md)\n* @ref:[Using Otoroshi](./usage/index.md)\n* @ref:[Third party Integrations](./integrations/index.md)\n* @ref:[Detailed topics](./topics/index.md)\n* @ref:[Admin REST API](./api.md)\n* @ref:[Deploy to production](./deploy/index.md)\n* @ref:[Developing Otoroshi](./dev.md)\n\n## Discussion\n\nJoin the [Otoroshi](https://gitter.im/MAIF/otoroshi) channel on the [MAIF Gitter](https://gitter.im/MAIF)\n\n## Sources\n\nThe sources of Otoroshi are available on [Github](https://github.com/MAIF/otoroshi).\n\n## Logo\n\nYou can find the official Otoroshi logo [on GitHub](https://github.com/MAIF/otoroshi/blob/master/resources/otoroshi-logo.png). 
The Otoroshi logo has been created by François Galioto ([@fgalioto](https://twitter.com/fgalioto))\n\n## Changelog\n\nEvery release, along with the migration instructions, is documented on the [Github Releases](https://github.com/MAIF/otoroshi/releases) page.\n\n## Patrons\n\nThe work on Otoroshi was funded by MAIF with the help of the community.\n\n## Licence\n\nOtoroshi is Open Source and available under the [Apache 2 License](https://opensource.org/licenses/Apache-2.0)\n\n@@@ index\n\n* [About Otoroshi](about.md)\n* [Architecture](archi.md)\n* [Features](features.md)\n* [Quickstart](quickstart.md)\n* [Get otoroshi](getotoroshi/index.md)\n* [First run](firstrun/index.md)\n* [Setup](setup/index.md)\n* [Using Otoroshi](usage/index.md)\n* [Integrations](integrations/index.md)\n* [Detailed topics](topics/index.md)\n* [Admin REST API](api.md)\n* [Deploy to production](deploy/index.md)\n* [Developing Otoroshi](./dev.md)\n\n@@@\n"
},
{
"name": "analytics.md",
@@ -200,7 +200,7 @@
"id": "/quickstart.md",
"url": "/quickstart.html",
"title": "Try Otoroshi in 5 minutes",
- "content": "# Try Otoroshi in 5 minutes\n\nwhat you will need :\n\n* JDK 11\n* curl\n* jq\n* 5 minutes of free time\n\n## The elevator pitch\n\nOtoroshi is an awesome reverse proxy built with Scala that handles all the calls to and between your microservices without service locator and lets you change configuration dynamically at runtime.\n\n## Download otoroshi\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi.jar'\n```\n\nIf you don’t/can’t have these tools on your machine, you can start a sandboxed environment using here with the following command\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi\n```\n\n## Start otoroshi\n\nto start otoroshi, just run the following command \n\n```sh\njava -jar otoroshi.jar\n```\n\nthis will start an in-memory otoroshi instance with a generated password that will be printed in the logs. You can set the password with the following flags\n\n```sh\njava -Dapp.adminLogin=admin@foo.bar -Dapp.adminPassword=password -jar otoroshi.jar\n```\n\nif you want to have otoroshi content persisted between launch without having to setup a datastore, just usse the following flag\n\n```sh\njava -Dapp.storage=file -jar otoroshi.jar\n```\n\nas the result, you will see something like\n\n```log\n$ java -jar otoroshi.jar\n\n[info] otoroshi-env - Otoroshi version 1.5.0-dev\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[warn] otoroshi-env - Scripting is enabled on this Otoroshi instance !\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / xol1Kwjzqe9OXjqDxxPPbPb9p0BPjhCO\n[info] play.api.Play - Application started (Prod)\n[info] otoroshi-script-manager - Compiling and starting scripts ...\n[info] otoroshi-script-manager - Finding and starting plugins ...\n[info] otoroshi-script-manager - Compiling and starting scripts done in 18 ms.\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n[info] otoroshi-script-manager - Finding and starting plugins done in 4681 ms.\n[info] otoroshi-env - Generating CA certificate for Otoroshi self signed certificates ...\n[info] otoroshi-env - Generating a self signed SSL certificate for https://*.oto.tools ...\n```\n\n## Log into the admin UI\n\njust go to http://otoroshi.oto.tools:8080 and log in with the credentials printed in the logs\n\n## Create you first service\n\nto create your first service you can either do it using the admin UI or using the admin API. Let's use the admin API.\n\nBy default, otoroshi registers an admin apikey with `admin-api-apikey-id:admin-api-apikey-secret` value (those values can be tuned at first startup). Of course you can create your own with\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/apikeys/_template \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n -d '{\n \"clientId\": \"quickstart\",\n \"clientSecret\": \"secret\",\n \"clientName\": \"quickstart-apikey\",\n \"authorizedEntities\": [\"group_admin-api-group\"]\n}' | jq\n```\n\nnow let create a new service to proxy `https://maif.gitub.io` on domain `maif.oto.tools`. 
This service will be public and will not require an apikey to pass\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/_template \\\n -u quickstart:secret \\\n -d '{\n \"name\": \"quickstart-service\", \n \"hosts\": [\"maif.oto.tools\"], \n \"targets\": [{ \"host\": \"maif.github.io\", \"scheme\": \"https\" }], \n \"publicPatterns\": [\"/.*\"]\n}' | jq\n```\n\nnow just go to `http://maif.oto.tools:8080` to check if it works\n\n## Create a service to proxy an api\n\nnow will we proxy the api at `https://aws.random.cat/meow` that returns random cat pictures and make it use apikeys.\n\n```sh\n$ curl https://aws.random.cat/meow | jq\n\n{\n \"file\": \"https://purr.objects-us-east-1.dream.io/i/20161003_163413.jpg\"\n}\n```\n\nFirst let's create the service \n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/_template \\\n -u quickstart:secret \\\n -d '{\n \"id\": \"cats-api\",\n \"name\": \"cats-api\", \n \"hosts\": [\"cats.oto.tools\"], \n \"targets\": [{ \"host\": \"aws.random.cat\", \"scheme\": \"https\" }],\n \"root\": \"/meow\"\n}' | jq\n```\n\nbut if you try to use it, you will have something like :\n\n```sh\n$ curl http://cats.oto.tools:8080 | jq\n\n{\n \"Otoroshi-Error\": \"No ApiKey provided\"\n}\n```\n\nthat's because the api is not public and needs apikeys to access it. So let's create an apikey\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/apikeys/_template \\\n -u quickstart:secret \\\n -d '{\n \"clientId\": \"apikey1\",\n \"clientSecret\": \"secret\",\n \"clientName\": \"quickstart-apikey-1\",\n \"authorizedEntities\": [\"group_default\"]\n}' | jq\n``` \n\nand try again\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret | jq\n\n{\n \"file\": \"https://purr.objects-us-east-1.dream.io/i/vICG4.gif\"\n}\n```\n\nnow let's try to play with quotas. 
First, we need to know what is the current state of the apikey quotas by enabling otoroshi headers about consumptions\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/sendOtoroshiHeadersBack\", \"value\": true }\n]' | jq\n```\n\nand retry the call with \n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 200 OK\nDate: Tue, 10 Mar 2020 12:56:08 GMT\nServer: Apache\nExpires: Mon, 26 Jul 1997 05:00:00 GMT\nCache-Control: no-cache, must-revalidate\nOtoroshi-Request-Id: 1237361356529729796\nOtoroshi-Proxy-Latency: 79\nOtoroshi-Upstream-Latency: 416\nOtoroshi-Request-Timestamp: 2020-03-10T13:55:11.195+01:00\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET\nOtoroshi-Daily-Calls-Remaining: 9999998\nOtoroshi-Monthly-Calls-Remaining: 9999998\nContent-Type: application/json\nContent-Length: 71\n\n{\"file\":\"https:\\/\\/purr.objects-us-east-1.dream.io\\/i\\/beerandcat.jpg\"}\n```\n\nnow let's try to allow only 10 request per day on the apikey\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api/apikeys/apikey1 \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/dailyQuota\", \"value\": 10 }\n]' | jq\n```\n\nthen try to call you api again\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 200 OK\nDate: Tue, 10 Mar 2020 13:00:01 GMT\nServer: Apache\nExpires: Mon, 26 Jul 1997 05:00:00 GMT\nCache-Control: no-cache, must-revalidate\nOtoroshi-Request-Id: 1237362334930829633\nOtoroshi-Proxy-Latency: 71\nOtoroshi-Upstream-Latency: 92\nOtoroshi-Request-Timestamp: 2020-03-10T13:59:04.456+01:00\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET\nOtoroshi-Daily-Calls-Remaining: 7\nOtoroshi-Monthly-Calls-Remaining: 9999997\nContent-Type: application/json\nContent-Length: 66\n\n{\"file\":\"https:\\/\\/purr.objects-us-east-1.dream.io\\/i\\/C1XNK.jpg\"}\n```\n\neventually you will get something like\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 429 Too Many Requests\nOtoroshi-Error: true\nOtoroshi-Error-Msg: You performed too much requests\nOtoroshi-State-Resp: --\nDate: Tue, 10 Mar 2020 12:59:11 GMT\nContent-Type: application/json\nContent-Length: 52\n\n{\"Otoroshi-Error\":\"You performed too much requests\"}\n```"
+ "content": "# Try Otoroshi in 5 minutes\n\nwhat you will need :\n\n* JDK 11\n* curl\n* jq\n* 5 minutes of free time\n\n## The elevator pitch\n\nOtoroshi is an awesome reverse proxy built with Scala that handles all the calls to and between your microservices without service locator and lets you change configuration dynamically at runtime.\n\n## Download otoroshi\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi.jar'\n```\n\nIf you don’t/can’t have these tools on your machine, you can start a sandboxed environment using here with the following command\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi\n```\n\n## Start otoroshi\n\nto start otoroshi, just run the following command \n\n```sh\njava -jar otoroshi.jar\n```\n\nthis will start an in-memory otoroshi instance with a generated password that will be printed in the logs. You can set the password with the following flags\n\n```sh\njava -Dapp.adminLogin=admin@foo.bar -Dapp.adminPassword=password -jar otoroshi.jar\n```\n\nif you want to have otoroshi content persisted between launch without having to setup a datastore, just usse the following flag\n\n```sh\njava -Dapp.storage=file -jar otoroshi.jar\n```\n\nas the result, you will see something like\n\n```log\n$ java -jar otoroshi.jar\n\n[info] otoroshi-env - Otoroshi version 1.5.0-alpha.6\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[warn] otoroshi-env - Scripting is enabled on this Otoroshi instance !\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / xol1Kwjzqe9OXjqDxxPPbPb9p0BPjhCO\n[info] play.api.Play - Application started (Prod)\n[info] otoroshi-script-manager - Compiling and starting scripts ...\n[info] otoroshi-script-manager - Finding and starting plugins ...\n[info] otoroshi-script-manager - Compiling and starting scripts done in 18 ms.\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n[info] otoroshi-script-manager - Finding and starting plugins done in 4681 ms.\n[info] otoroshi-env - Generating CA certificate for Otoroshi self signed certificates ...\n[info] otoroshi-env - Generating a self signed SSL certificate for https://*.oto.tools ...\n```\n\n## Log into the admin UI\n\njust go to http://otoroshi.oto.tools:8080 and log in with the credentials printed in the logs\n\n## Create you first service\n\nto create your first service you can either do it using the admin UI or using the admin API. Let's use the admin API.\n\nBy default, otoroshi registers an admin apikey with `admin-api-apikey-id:admin-api-apikey-secret` value (those values can be tuned at first startup). Of course you can create your own with\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/apikeys/_template \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n -d '{\n \"clientId\": \"quickstart\",\n \"clientSecret\": \"secret\",\n \"clientName\": \"quickstart-apikey\",\n \"authorizedEntities\": [\"group_admin-api-group\"]\n}' | jq\n```\n\nnow let create a new service to proxy `https://maif.gitub.io` on domain `maif.oto.tools`. 
This service will be public and will not require an apikey to pass\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/_template \\\n -u quickstart:secret \\\n -d '{\n \"name\": \"quickstart-service\", \n \"hosts\": [\"maif.oto.tools\"], \n \"targets\": [{ \"host\": \"maif.github.io\", \"scheme\": \"https\" }], \n \"publicPatterns\": [\"/.*\"]\n}' | jq\n```\n\nnow just go to `http://maif.oto.tools:8080` to check if it works\n\n## Create a service to proxy an api\n\nnow we will proxy the api at `https://aws.random.cat/meow` that returns random cat pictures and make it use apikeys.\n\n```sh\n$ curl https://aws.random.cat/meow | jq\n\n{\n \"file\": \"https://purr.objects-us-east-1.dream.io/i/20161003_163413.jpg\"\n}\n```\n\nFirst, let's create the service \n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/_template \\\n -u quickstart:secret \\\n -d '{\n \"id\": \"cats-api\",\n \"name\": \"cats-api\", \n \"hosts\": [\"cats.oto.tools\"], \n \"targets\": [{ \"host\": \"aws.random.cat\", \"scheme\": \"https\" }],\n \"root\": \"/meow\"\n}' | jq\n```\n\nbut if you try to use it, you will have something like :\n\n```sh\n$ curl http://cats.oto.tools:8080 | jq\n\n{\n \"Otoroshi-Error\": \"No ApiKey provided\"\n}\n```\n\nthat's because the api is not public and needs apikeys to access it. So let's create an apikey\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/apikeys/_template \\\n -u quickstart:secret \\\n -d '{\n \"clientId\": \"apikey1\",\n \"clientSecret\": \"secret\",\n \"clientName\": \"quickstart-apikey-1\",\n \"authorizedEntities\": [\"group_default\"]\n}' | jq\n``` \n\nand try again\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret | jq\n\n{\n \"file\": \"https://purr.objects-us-east-1.dream.io/i/vICG4.gif\"\n}\n```\n\nnow let's try to play with quotas. 
First, we need to know the current state of the apikey quotas, by enabling otoroshi headers about consumption\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/sendOtoroshiHeadersBack\", \"value\": true }\n]' | jq\n```\n\nand retry the call with \n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 200 OK\nDate: Tue, 10 Mar 2020 12:56:08 GMT\nServer: Apache\nExpires: Mon, 26 Jul 1997 05:00:00 GMT\nCache-Control: no-cache, must-revalidate\nOtoroshi-Request-Id: 1237361356529729796\nOtoroshi-Proxy-Latency: 79\nOtoroshi-Upstream-Latency: 416\nOtoroshi-Request-Timestamp: 2020-03-10T13:55:11.195+01:00\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET\nOtoroshi-Daily-Calls-Remaining: 9999998\nOtoroshi-Monthly-Calls-Remaining: 9999998\nContent-Type: application/json\nContent-Length: 71\n\n{\"file\":\"https:\\/\\/purr.objects-us-east-1.dream.io\\/i\\/beerandcat.jpg\"}\n```\n\nnow let's try to allow only 10 requests per day on the apikey\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api/apikeys/apikey1 \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/dailyQuota\", \"value\": 10 }\n]' | jq\n```\n\nthen try to call your api again\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 200 OK\nDate: Tue, 10 Mar 2020 13:00:01 GMT\nServer: Apache\nExpires: Mon, 26 Jul 1997 05:00:00 GMT\nCache-Control: no-cache, must-revalidate\nOtoroshi-Request-Id: 1237362334930829633\nOtoroshi-Proxy-Latency: 71\nOtoroshi-Upstream-Latency: 92\nOtoroshi-Request-Timestamp: 2020-03-10T13:59:04.456+01:00\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET\nOtoroshi-Daily-Calls-Remaining: 7\nOtoroshi-Monthly-Calls-Remaining: 9999997\nContent-Type: application/json\nContent-Length: 66\n\n{\"file\":\"https:\\/\\/purr.objects-us-east-1.dream.io\\/i\\/C1XNK.jpg\"}\n```\n\neventually you will get something like\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 429 Too Many Requests\nOtoroshi-Error: true\nOtoroshi-Error-Msg: You performed too much requests\nOtoroshi-State-Resp: --\nDate: Tue, 10 Mar 2020 12:59:11 GMT\nContent-Type: application/json\nContent-Length: 52\n\n{\"Otoroshi-Error\":\"You performed too much requests\"}\n```"
},
{
"name": "admin.md",
diff --git a/manual/src/main/paradox/content.json b/manual/src/main/paradox/content.json
index 2c663ead69..7c35cd9eb3 100644
--- a/manual/src/main/paradox/content.json
+++ b/manual/src/main/paradox/content.json
@@ -1 +1 @@
-[{"name":"about.md","id":"/about.md","url":"/about.html","title":"About Otoroshi","content":"# About Otoroshi\n\nAt the beginning of 2017, we had the need to create a new environment to be able to create new \"digital\" products very quickly in an agile fashion at MAIF. Naturally we turned to PaaS solutions and chose the excellent Clever-Cloud product to run our apps. \n\nWe also chose that every feature team would have the freedom to choose its own technological stack to build its product. It was a nice move but it has also introduced some challenges in terms of homogeneity for traceability, security, logging, ... because we did not want to force library usage in the products. We could have used something like the Service Mesh pattern but the deployment model of Clever-Cloud prevented us from doing it.\n\nThe right solution was to use a reverse proxy or some kind of API Gateway able to provide traceability, logging, security with apikeys, quotas, DNS as a service locator, etc. We needed something easy to use, with a human friendly UI, a nice API to extend its features, true hot reconfiguration, able to generate internal events for third party usage. A couple of solutions were available at that time, but none seemed to fit our needs; there was always something missing, too complicated for our needs, or not playing well with the Clever-Cloud deployment model.\n\nAt some point, we tried to write a small prototype to explore what could be our dream reverse proxy. The design was very simple, there were some rough edges, but every major feature needed was there, waiting to be enhanced.\n\n**Otoroshi** was born and we decided to move ahead with our hairy monster :)\n\n## Philosophy \n\nEvery OSS product built at MAIF, like Izanami, follows a common philosophy. \n\n* the services or API provided should be technology agnostic.\n* http first: http is the right answer to the previous point \n* api First: The UI is just another client of the api. \n* secured: The services exposed need authentication for both humans and machines \n* event based: The services should expose a way to get notified of what happened inside. \n"},{"name":"api.md","id":"/api.md","url":"/api.html","title":"Admin REST API","content":"# Admin REST API\n\nOtoroshi provides a fully featured REST admin API to perform almost every operation possible in the Otoroshi dashboard. The Otoroshi dashboard is just a regular consumer of the admin API.\n\nUsing the admin API, you can do whatever you want and enhance your Otoroshi instances with a lot of features that will fit your needs.\n\nOtoroshi also provides some connectors that use the Otoroshi admin API to automate Otoroshi instances when used with things like container orchestrators. For more information about that, just go to the @ref:[third party integrations chapter](./integrations/index.md)\n\n## Swagger descriptor\n\nThe Otoroshi admin API is described using OpenAPI format and is available at :\n\nhttps://maif.github.io/otoroshi/manual/code/swagger.json\n\nEvery Otoroshi instance provides its own embedded OpenAPI descriptor at :\n\nhttp://otoroshi.oto.tools:8080/api/swagger.json\n\n## Swagger documentation\n\nYou can read the OpenAPI descriptor in a more human friendly fashion using `Swagger UI`. 
The swagger UI documentation of the Otoroshi admin API is available at :\n\nhttps://maif.github.io/otoroshi/swagger-ui/index.html\n\nEvery Otoroshi instance provides its own embedded OpenAPI descriptor at :\n\nhttp://otoroshi.oto.tools:8080/api/swagger/ui\n\nYou can also read the swagger UI documentation of the Otoroshi admin API below :\n\n@@@ div { .swagger-frame }\n\n\n@@@\n"},{"name":"archi.md","id":"/archi.md","url":"/archi.html","title":"Architecture","content":"# Architecture\n\nWhen we started the development of Otoroshi, we had several classical patterns in mind like `Service gateway`, `Service locator`, `Circuit breakers`, etc ...\n\nAt first we thought about providing a bunch of libraries that would be included in each microservice or app to perform these tasks. But the more we thought about it, the more it felt weird and unagile, and it also prevented us from using any technical stack we wanted to use. So we decided to change our approach to something more universal.\n\nWe chose to make Otoroshi the central part of our microservices system, something between a reverse-proxy, a service gateway and a service locator where each call to a microservice (even from another microservice) must pass through Otoroshi. There are multiple benefits to doing that: each call can be logged, audited, monitored, integrated with a circuit breaker, etc, without imposing libraries or a technical stack. Any service is exposed through its own domain and we rely only on DNS to handle the service location part. Any access to a service is secured by default with an api key and is supervised by a circuit breaker to avoid cascading failures.\n\n@@@ div { .centered-img }\n\n@@@\n\nOtoroshi tries to embrace our @ref:[global philosophy](./about.md#philosophy) by providing a full featured REST admin api, a gorgeous admin dashboard written in [React](https://reactjs.org/) that uses the api, by generating traffic events, alerts events, audit events that can be consumed by several channels. Otoroshi also supports a bunch of datastores to better match different use cases.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"aws-beanstalk.md","id":"/deploy/aws-beanstalk.md","url":"/deploy/aws-beanstalk.html","title":"AWS - Elastic Beanstalk","content":"# AWS - Elastic Beanstalk\n\nNow you want to use Otoroshi on AWS. There are multiple options to deploy Otoroshi on AWS, \nfor instance :\n\n* You can deploy the @ref:[Docker image](../getotoroshi/fromdocker.md) on [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html)\n* You can create a basic [Amazon EC2](https://docs.aws.amazon.com/fr_fr/AWSEC2/latest/UserGuide/concepts.html), access it via SSH, then \ndeploy the @ref:[otoroshi.jar](../firstrun/run.md#from-jar-file) \n* Or you can use [AWS Elastic Beanstalk](https://aws.amazon.com/fr/elasticbeanstalk)\n\nIn this section we are going to cover how to deploy Otoroshi on [AWS Elastic Beanstalk](https://aws.amazon.com/fr/elasticbeanstalk). \n\n## AWS Elastic Beanstalk Overview\nUnlike Clever Cloud, to deploy an application on AWS Elastic Beanstalk, you don't link your app to your VCS repository, push your code and expect it to be built and run.\n\nAWS Elastic Beanstalk only does the run part. So you have to handle your own build pipeline, upload a Zip file containing your runnable, then AWS Elastic Beanstalk will take it from there. 
\n \nE.g. for apps running on the JVM (Scala/Java/Kotlin) a Zip with the jar inside would suffice; for apps running in a Docker container, a Zip with the Dockerfile would be enough. \n\n\n## Prepare your deployment target\nActually, there are 2 options to build your target. \n\nEither you create a Dockerfile from this @ref:[Docker image](../getotoroshi/fromdocker.md), build a zip, and do all the Otoroshi custom configuration using ENVs.\n\nOr you download the @ref:[otoroshi.jar](../getotoroshi/frombinaries.md), do all the Otoroshi custom configuration using your own otoroshi.conf, and create a Dockerfile that runs the jar using your otoroshi.conf. \n\nFor the second option your Dockerfile would look like this :\n\n```dockerfile\nFROM openjdk:8\nVOLUME /tmp\nEXPOSE 8080\nADD otoroshi.jar otoroshi.jar\nADD otoroshi.conf otoroshi.conf\nRUN sh -c 'touch /otoroshi.jar'\nENV JAVA_OPTS=\"\"\nENTRYPOINT [ \"sh\", \"-c\", \"java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -Dconfig.file=/otoroshi.conf -jar /otoroshi.jar\" ]\n``` \n \nI'd recommend the second option.\n \nNow Zip your target (Jar + Conf + Dockerfile) and get ready for deployment. \n\n## Create an Otoroshi instance on AWS Elastic Beanstalk\nFirst, go to [AWS Elastic Beanstalk Console](https://eu-west-3.console.aws.amazon.com/elasticbeanstalk/home?region=eu-west-3#/welcome), don't forget to sign in and make sure that you are in the right region (e.g. eu-west-3 for Paris).\n\nHit **Get started** \n\n@@@ div { .centered-img }\n\n@@@\n\nSpecify the **Application name** of your application, Otoroshi for example.\n\n@@@ div { .centered-img }\n\n@@@\n \nChoose the **Platform** of the application you want to create, in your case use Docker.\n\nFor **Application code** choose **Upload your code** then hit **Upload**.\n\n@@@ div { .centered-img }\n\n@@@\n\nBrowse the zip created in the [previous section](#prepare-your-deployment-target) from your machine. \n\nAs you can see in the image above, you can also choose an S3 location, you can imagine that at the end of your build pipeline you upload your Zip to S3, and then get it from there (I wouldn't recommend that though).\n \nWhen the upload is done, hit **Configure more options**.\n \n@@@ div { .centered-img }\n\n@@@ \n \nRight now an AWS Elastic Beanstalk application has been created, and by default an environment named Otoroshi-env is being created as well.\n\nAWS Elastic Beanstalk can manage multiple environments of the same application, for instance environments can be (prod, preprod, experiments...). \n\nOtoroshi is a bit particular: it doesn't make much sense to have multiple environments, since Otoroshi will handle all the requests from/to downstream services regardless of the environment. \n \nAs you see in the image above, we are now configuring the Otoroshi-env, the one and only environment of Otoroshi.\n \nFor **Configuration presets**, choose custom configuration, now you have a load balancer for your environment with the capacity of at least one instance and at most four.\nI'd recommend at least 2 instances; to change that, on the **Capacity** card hit **Modify**. \n\n@@@ div { .centered-img }\n\n@@@\n\nChange the **Instances** to min 2, max 4 then hit **Save**. For the **Scaling triggers**, I'd keep the default values, but know that you can edit the capacity config any time you want; it only costs a redeploy, which will be done automatically by the way.\n \nThe default instance size is t2.micro, which is a bit small for running Otoroshi; I'd recommend a t2.medium. 
\nOn the **Instances** card hit **Modify**.\n\n@@@ div { .centered-img }\n\n@@@\n\nFor **Instance type** choose t2.medium, then hit **Save**. No need to change the volume size, unless you have a lot of http call faults, which means a lot more logs; in that case the default volume size may not be enough.\n\nThe default environment created for Otoroshi, for instance Otoroshi-env, is a web server environment which fits your case, but on AWS Elastic Beanstalk a web server environment for a docker-based application runs behind an Nginx proxy by default.\nWe have to remove that proxy. So on the **Software** card hit **Modify**.\n \n@@@ div { .centered-img }\n\n@@@ \n \nFor **Proxy server** choose None then hit **Save**.\n\nAlso note that you can set Envs for Otoroshi on the same page (see the image below). \n\n@@@ div { .centered-img }\n\n@@@ \n\nTo finalise the creation process, hit **Create app** on the bottom right.\n\nThe Otoroshi app is now created, and it's running, which is cool, but we still have neither a **datastore** nor **https**.\n \n## Create an Otoroshi datastore on AWS ElastiCache\n\nBy default Otoroshi uses non-persistent memory to store its data, but Otoroshi supports many kinds of datastores. In this section we will be covering the Redis datastore. \n\nBefore starting, note that using a datastore hosted by AWS is not at all mandatory; feel free to use your own if you like. But if you want to learn more about ElastiCache, this section may interest you; otherwise you can skip it.\n\nGo to [AWS ElastiCache](https://eu-west-3.console.aws.amazon.com/elasticache/home?region=eu-west-3#) and hit **Get Started Now**.\n\n@@@ div { .centered-img }\n\n@@@ \n\nFor **Cluster engine** keep Redis.\n\nChoose a **Name** for your datastore, for instance otoroshi-datastore.\n\nYou can keep all the other default values and hit **Create** on the bottom right of the page.\n\nOnce your Redis Cluster is created, it would look like the image below.\n\n@@@ div { .centered-img }\n\n@@@ \n\n\nFor applications in the same security group as your cluster, the redis cluster is accessible via the **Primary Endpoint**. Don't worry, the default security group is fine; you don't need any configuration to access the cluster from Otoroshi.\n\nTo make Otoroshi use the created cluster, you can either use Envs `APP_STORAGE=redis`, `REDIS_HOST` and `REDIS_PORT`, or set `app.storage=redis`, `app.redis.host` and `app.redis.port` in your otoroshi.conf.\n\n## Create SSL certificate and configure your domain\n\nOtoroshi now has a datastore, but it is not ready for use yet. 
\n\nIn order to get it ready you need to :\n\n* Configure Otoroshi with your domain \n* Create a wildcard SSL certificate for your domain\n* Configure Otoroshi AWS Elastic Beanstalk instance with the SSL certificate \n* Configure your DNS to redirect all traffic on your domain to Otoroshi \n \n### Configure Otoroshi with your domain\n\nYou can use ENVs or you can use a custom otoroshi.conf in your Docker container.\n\nFor the second option your otoroshi.conf would look like this :\n\n``` \n include \"application.conf\"\n http.port = 8080\n app {\n env = \"prod\"\n domain = \"mysubdomain.oto.tools\"\n rootScheme = \"https\"\n snowflake {\n seed = 0\n }\n events {\n maxSize = 1000\n }\n backoffice {\n subdomain = \"otoroshi\"\n session {\n exp = 86400000\n }\n }\n \n storage = \"redis\"\n redis {\n host=\"myredishost\"\n port=myredisport\n }\n \n privateapps {\n subdomain = \"privateapps\"\n }\n \n adminapi {\n targetSubdomain = \"otoroshi-admin-internal-api\"\n exposedSubdomain = \"otoroshi-api\"\n defaultValues {\n backOfficeGroupId = \"admin-api-group\"\n backOfficeApiKeyClientId = \"admin-client-id\"\n backOfficeApiKeyClientSecret = \"admin-client-secret\"\n backOfficeServiceId = \"admin-api-service\"\n }\n proxy {\n https = true\n local = false\n }\n }\n claim {\n sharedKey = \"myclaimsharedkey\"\n }\n }\n \n play.http {\n session {\n secure = false\n httpOnly = true\n maxAge = 2147483646\n domain = \".mysubdomain.oto.tools\"\n cookieName = \"oto-sess\"\n }\n }\n``` \n\n### Create a wildcard SSL certificate for your domain\n\nGo to [AWS Certificate Manager](https://eu-west-3.console.aws.amazon.com/acm/home?region=eu-west-3#/firstrun).\n\nBelow **Provision certificates** hit **Get started**.\n\n@@@ div { .centered-img }\n\n@@@ \n \nKeep the default selected value **Request a public certificate** and hit **Request a certificate**.\n \n@@@ div { .centered-img }\n\n@@@ \n\nPut your **Domain name**, use *. for wildcard, for instance *\\*.mysubdomain.oto.tools*, then hit **Next**.\n\n@@@ div { .centered-img }\n\n@@@ \n\nYou can choose between **Email validation** and **DNS validation**; I'd recommend **DNS validation**. Then hit **Review**. \n \n@@@ div { .centered-img }\n\n@@@ \n \nVerify that you put the right **Domain name**, then hit **Confirm and request**. \n\n@@@ div { .centered-img }\n\n@@@\n \nAs you see in the image above, to let Amazon do the validation you have to add the `CNAME` record to your DNS configuration. Normally this operation takes around one day.\n \n### Configure Otoroshi AWS Elastic Beanstalk instance with the SSL certificate \n\nOnce the certificate is validated, you need to modify the configuration of Otoroshi-env to add the SSL certificate for HTTPS. \nFor that you need to go to [AWS Elastic Beanstalk applications](https://eu-west-3.console.aws.amazon.com/elasticbeanstalk/home?region=eu-west-3#/applications),\nhit **Otoroshi-env**, then on the left side hit **Configuration**, then on the **Load balancer** card hit **Modify**.\n\n@@@ div { .centered-img }\n\n@@@\n\nIn the **Application Load Balancer** section hit **Add listener**.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the popup as shown in the image above, then hit **Add**. 
\n\nYou should now be seeing something like this : \n \n@@@ div { .centered-img }\n\n@@@ \n \n \nMake sure that your listener is enabled, and on the bottom right of the page hit **Apply**.\n\nNow you have **https**, so let's use Otoroshi.\n\n### Configure your DNS to redirect all traffic on your domain to Otoroshi\n \nIt's actually pretty simple: you just need to add a `CNAME` record to your DNS configuration, that redirects *\\*.mysubdomain.oto.tools* to the DNS name of Otoroshi's load balancer.\n\nTo find the DNS name of Otoroshi's load balancer go to [AWS Ec2](https://eu-west-3.console.aws.amazon.com/ec2/v2/home?region=eu-west-3#LoadBalancers:tag:elasticbeanstalk:environment-name=Otoroshi-env;sort=loadBalancerName)\n\nYou would find something like this : \n \n@@@ div { .centered-img }\n\n@@@ \n\nThere is your DNS name, so add your `CNAME` record. \n \nOnce all these steps are done, the AWS Elastic Beanstalk Otoroshi instance will be handling all the requests on your domain. ;) \n"},{"name":"clevercloud.md","id":"/deploy/clevercloud.md","url":"/deploy/clevercloud.html","title":"Clever Cloud","content":"# Clever Cloud\n\nNow you want to use Otoroshi on Clever Cloud. Otoroshi has been designed and created to run on Clever Cloud and a lot of choices were made because of how Clever Cloud works.\n\n## Create an Otoroshi instance on CleverCloud\n\nIf you want to customize the configuration @ref:[use env. variables](../firstrun/env.md), you can use [the example provided below](#example-of-clevercloud-env-variables)\n\nCreate a new CleverCloud app based on a clevercloud git repo (not empty) or a github project of your own (not empty).\n\n@@@ div { .centered-img }\n\n@@@\n\nThen choose what kind of app you want to create, for Otoroshi, choose `Java + Jar`\n\n@@@ div { .centered-img }\n\n@@@\n\nNext, choose the instance size and auto-scaling settings. Otoroshi can run on small instances, especially if you just want to test it.\n\n@@@ div { .centered-img }\n\n@@@\n\nFinally, choose a name for your app\n\n@@@ div { .centered-img }\n\n@@@\n\nNow you just need to customize environment variables\n\nat this point, you can also add other env. variables to configure Otoroshi like in [the example provided below](#example-of-clevercloud-env-variables)\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also use expert mode :\n\n@@@ div { .centered-img }\n\n@@@\n\nNow, your app is ready, don't forget to add a custom domain name on the CleverCloud app matching the Otoroshi app domain. \n\n## Example of CleverCloud env. variables\n\nYou can add more env variables to customize your Otoroshi instance like the following. Use the expert mode to copy/paste all the values in one shot. If you want a real datastore, create a redis addon on clevercloud, link it to your otoroshi app and change the `APP_STORAGE` variable to `redis`\n\n
```\nADMIN_API_CLIENT_ID=xxxx\nADMIN_API_CLIENT_SECRET=xxxxx\nADMIN_API_GROUP=xxxxxx\nADMIN_API_SERVICE_ID=xxxxxxx\nCLAIM_SHAREDKEY=xxxxxxx\nOTOROSHI_INITIAL_ADMIN_LOGIN=youremailaddress\nOTOROSHI_INITIAL_ADMIN_PASSWORD=yourpassword\nPLAY_CRYPTO_SECRET=xxxxxx\nSESSION_NAME=oto-session\nAPP_DOMAIN=yourdomain.tech\nAPP_ENV=prod\nAPP_STORAGE=inmemory\nAPP_ROOT_SCHEME=https\nCC_PRE_BUILD_HOOK=curl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/${latest_otoroshi_version}/otoroshi.jar'\nCC_JAR_PATH=./otoroshi.jar\nCC_JAVA_VERSION=11\nPORT=8080\nSESSION_DOMAIN=.yourdomain.tech\nSESSION_MAX_AGE=604800000\nSESSION_SECURE_ONLY=true\nUSER_AGENT=otoroshi\nMAX_EVENTS_SIZE=1\nWEBHOOK_SIZE=100\nAPP_BACKOFFICE_SESSION_EXP=86400000\nAPP_PRIVATEAPPS_SESSION_EXP=86400000\nENABLE_METRICS=true\nOTOROSHI_ANALYTICS_PRESSURE_ENABLED=true\nUSE_CACHE=true\n```\n
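\nFor example, if you linked a redis addon to your app, the storage related variables above would become something like the following (a sketch; the host and port values are illustrative, use the ones exposed by your addon)\n\n```\n# illustrative values, replace with the ones provided by your redis addon\nAPP_STORAGE=redis\nREDIS_HOST=xxxxx-redis.services.clever-cloud.com\nREDIS_PORT=6379\n```\n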
"},{"name":"index.md","id":"/deploy/index.md","url":"/deploy/index.html","title":"Deploy to production","content":"# Deploy to production\n\nNow it's time to deploy Otoroshi in production, in this chapter we will see what kind of things you can do.\n\n@@@ index\n\n* [Kubernetes](./kubernetes.md)\n* [Clever Cloud](./clevercloud.md)\n* [AWS - Elastic Beanstalk](./aws-beanstalk.md)\n* [others](./other.md) \n* [Scaling](./scaling.md) \n\n@@@"},{"name":"kubernetes.md","id":"/deploy/kubernetes.md","url":"/deploy/kubernetes.html","title":"Kubernetes","content":"# Kubernetes\n\nStarting at version 1.5.0, Otoroshi provides a native Kubernetes support. Multiple otoroshi jobs (that are actually kubernetes controllers) are provided in order to\n\n- sync kubernetes secrets of type `kubernetes.io/tls` to otoroshi certificates\n- act as a standard ingress controller (supporting `Ingress` objects)\n- provide Custom Resource Definitions (CRDs) to manage Otoroshi entities from Kubernetes and act as an ingress controller with its own resources\n\n## Installing otoroshi on your kubernetes cluster\n\n@@@ warning\nYou need to have cluster admin privileges to install otoroshi and its service account, role mapping and CRDs on a kubernetes cluster. We also advise you to create a dedicated namespace (you can name it `otoroshi` for example) to install otoroshi\n@@@\n\nIf you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.\n\nYou can also create a `kustomization.yaml` file with a remote base\n\n```yaml\nbases:\n- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v1.5.0-dev\n```\n\nThen deploy it with `kubectl apply -k ./overlays/myoverlay`. \n\nYou can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster\n\n```sh\nhelm repo add otoroshi https://maif.github.io/otoroshi/helm\nhelm install my-otoroshi otoroshi/otoroshi\n```\n\nBelow, you will find example of deployment. Do not hesitate to adapt them to your needs. 
Those descriptors have value placeholders that you will need to replace with actual values like \n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: ${domain}\n```\n\nyou will have to edit it to make it look like\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'apis.my.domain'\n```\n\nif you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of otoroshi\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: maif/otoroshi:1.5.0-dev-jdk11\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create several secrets for each placeholder, mount them to the otoroshi container, then use their file paths as values\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nyou can use the same trick in the config file itself\n\n### Note on bare metal kubernetes cluster installation\n\n@@@ note\nBare metal kubernetes clusters don't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples below.\n@@@\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@\n\n### Common manifests\n\nthe following manifests are always needed. They create otoroshi CRDs, tokens, roles, etc. Redis deployment is not mandatory, it's just an example. You can use your own existing setup.\n\nrbac.yaml\n: @@snip [rbac.yaml](../snippets/kubernetes/kustomize/base/rbac.yaml) \n\ncrds.yaml\n: @@snip [crds.yaml](../snippets/kubernetes/kustomize/base/crds.yaml) \n\nredis.yaml\n: @@snip [redis.yaml](../snippets/kubernetes/kustomize/base/redis.yaml) \n\n\n### Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type `LoadBalancer` to expose otoroshi to the rest of the world. 
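\n\nOnce deployed, you can retrieve the external endpoint of that `LoadBalancer` service with something like the following (a sketch, assuming otoroshi was installed in a namespace named `otoroshi` as advised above)\n\n```sh\n# list services and their EXTERNAL-IP / CNAME in the otoroshi namespace (name is an assumption)\nkubectl get services -n otoroshi\n```\n\n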
You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple/dns.example) \n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/dns.example) \n\n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have one otoroshi instance on each kubernetes node (with the `otoroshi-kind: instance` label) with redis persistence. The otoroshi instances are exposed as `hostPort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/dns.example) \n\n### Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas of an otoroshi leader connected to a redis instance and 2 replicas of an otoroshi worker connected to the leader. We use a service of type `LoadBalancer` to expose otoroshi leader/worker to the rest of the world. You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster\n\nHere we have 2 replicas of the otoroshi leader connected to the same redis instance and 2 replicas of the otoroshi worker. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster using DaemonSet\n\nHere we have 1 otoroshi leader instance on each kubernetes node (with the `otoroshi-kind: leader` label) connected to the same redis instance and 1 otoroshi worker instance on each kubernetes node (with the `otoroshi-kind: worker` label). The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": true, // sync ingresses\n \"crds\": false, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": 
{},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nNow you can deploy your first service ;)\n\n### Deploy an ingress route\n\nnow let's say you want to deploy an http service and route to the outside world through otoroshi\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: kennethreitz/httpbin\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 80\n name: \"http\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8080\n targetPort: http\n name: http\n selector:\n run: http-app-deployment\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nonce deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n### Support for Ingress Classes\n\nSince Kubernetes 1.18, you can use the `IngressClass` type of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest `kind`. To use it, configure the Ingress job to match your controller\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"otoroshi.io/ingress-controller\"],\n ...\n }\n}\n```\n\nthen you have to deploy an `IngressClass` to declare Otoroshi as an ingress controller\n\n```yaml\napiVersion: \"networking.k8s.io/v1beta1\"\nkind: \"IngressClass\"\nmetadata:\n name: \"otoroshi-ingress-controller\"\nspec:\n controller: \"otoroshi.io/ingress-controller\"\n parameters:\n apiGroup: \"proxy.otoroshi.io/v1alpha\"\n kind: \"IngressParameters\"\n name: \"otoroshi-ingress-controller\"\n```\n\nand use it in your `Ingress`\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\nspec:\n ingressClassName: otoroshi-ingress-controller\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\n### Use multiple ingress controllers\n\nIt is of course possible to use multiple ingress controllers at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation `kubernetes.io/ingress.class`. 
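\n\nFor instance, an `Ingress` can explicitly target otoroshi with the annotation shown earlier (a minimal sketch)\n\n```yaml\nmetadata:\n annotations:\n # route this Ingress through otoroshi rather than another controller\n kubernetes.io/ingress.class: otoroshi\n```\n\n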
By default, otoroshi reacts to the class `otoroshi`, but you can make it the default ingress controller with the following config (note the `ingressClasses` key, consistent with the configuration reference above)\n\n```json\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"*\"],\n ...\n }\n}\n```\n\n### Supported annotations\n\nif you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customisation, just go to the CRDs part. The following annotations are supported :\n\n- `ingress.otoroshi.io/groups`\n- `ingress.otoroshi.io/group`\n- `ingress.otoroshi.io/groupId`\n- `ingress.otoroshi.io/name`\n- `ingress.otoroshi.io/targetsLoadBalancing`\n- `ingress.otoroshi.io/stripPath`\n- `ingress.otoroshi.io/enabled`\n- `ingress.otoroshi.io/userFacing`\n- `ingress.otoroshi.io/privateApp`\n- `ingress.otoroshi.io/forceHttps`\n- `ingress.otoroshi.io/maintenanceMode`\n- `ingress.otoroshi.io/buildMode`\n- `ingress.otoroshi.io/strictlyPrivate`\n- `ingress.otoroshi.io/sendOtoroshiHeadersBack`\n- `ingress.otoroshi.io/readOnly`\n- `ingress.otoroshi.io/xForwardedHeaders`\n- `ingress.otoroshi.io/overrideHost`\n- `ingress.otoroshi.io/allowHttp10`\n- `ingress.otoroshi.io/logAnalyticsOnServer`\n- `ingress.otoroshi.io/useAkkaHttpClient`\n- `ingress.otoroshi.io/useNewWSClient`\n- `ingress.otoroshi.io/tcpUdpTunneling`\n- `ingress.otoroshi.io/detectApiKeySooner`\n- `ingress.otoroshi.io/letsEncrypt`\n- `ingress.otoroshi.io/publicPatterns`\n- `ingress.otoroshi.io/privatePatterns`\n- `ingress.otoroshi.io/additionalHeaders`\n- `ingress.otoroshi.io/additionalHeadersOut`\n- `ingress.otoroshi.io/missingOnlyHeadersIn`\n- `ingress.otoroshi.io/missingOnlyHeadersOut`\n- `ingress.otoroshi.io/removeHeadersIn`\n- `ingress.otoroshi.io/removeHeadersOut`\n- `ingress.otoroshi.io/headersVerification`\n- `ingress.otoroshi.io/matchingHeaders`\n- `ingress.otoroshi.io/ipFiltering.whitelist`\n- `ingress.otoroshi.io/ipFiltering.blacklist`\n- `ingress.otoroshi.io/api.exposeApi`\n- `ingress.otoroshi.io/api.openApiDescriptorUrl`\n- `ingress.otoroshi.io/healthCheck.enabled`\n- `ingress.otoroshi.io/healthCheck.url`\n- `ingress.otoroshi.io/jwtVerifier.ids`\n- `ingress.otoroshi.io/jwtVerifier.enabled`\n- `ingress.otoroshi.io/jwtVerifier.excludedPatterns`\n- `ingress.otoroshi.io/authConfigRef`\n- `ingress.otoroshi.io/redirection.enabled`\n- `ingress.otoroshi.io/redirection.code`\n- `ingress.otoroshi.io/redirection.to`\n- `ingress.otoroshi.io/clientValidatorRef`\n- `ingress.otoroshi.io/transformerRefs`\n- `ingress.otoroshi.io/transformerConfig`\n- `ingress.otoroshi.io/accessValidator.enabled`\n- `ingress.otoroshi.io/accessValidator.excludedPatterns`\n- `ingress.otoroshi.io/accessValidator.refs`\n- `ingress.otoroshi.io/accessValidator.config`\n- `ingress.otoroshi.io/preRouting.enabled`\n- `ingress.otoroshi.io/preRouting.excludedPatterns`\n- `ingress.otoroshi.io/preRouting.refs`\n- `ingress.otoroshi.io/preRouting.config`\n- `ingress.otoroshi.io/issueCert`\n- `ingress.otoroshi.io/issueCertCA`\n- `ingress.otoroshi.io/gzip.enabled`\n- `ingress.otoroshi.io/gzip.excludedPatterns`\n- `ingress.otoroshi.io/gzip.whiteList`\n- `ingress.otoroshi.io/gzip.blackList`\n- `ingress.otoroshi.io/gzip.bufferSize`\n- `ingress.otoroshi.io/gzip.chunkedThreshold`\n- `ingress.otoroshi.io/gzip.compressionLevel`\n- `ingress.otoroshi.io/cors.enabled`\n- `ingress.otoroshi.io/cors.allowOrigin`\n- `ingress.otoroshi.io/cors.exposeHeaders`\n- `ingress.otoroshi.io/cors.allowHeaders`\n- `ingress.otoroshi.io/cors.allowMethods`\n- `ingress.otoroshi.io/cors.excludedPatterns`\n- 
`ingress.otoroshi.io/cors.maxAge`\n- `ingress.otoroshi.io/cors.allowCredentials`\n- `ingress.otoroshi.io/clientConfig.useCircuitBreaker`\n- `ingress.otoroshi.io/clientConfig.retries`\n- `ingress.otoroshi.io/clientConfig.maxErrors`\n- `ingress.otoroshi.io/clientConfig.retryInitialDelay`\n- `ingress.otoroshi.io/clientConfig.backoffFactor`\n- `ingress.otoroshi.io/clientConfig.connectionTimeout`\n- `ingress.otoroshi.io/clientConfig.idleTimeout`\n- `ingress.otoroshi.io/clientConfig.callAndStreamTimeout`\n- `ingress.otoroshi.io/clientConfig.callTimeout`\n- `ingress.otoroshi.io/clientConfig.globalTimeout`\n- `ingress.otoroshi.io/clientConfig.sampleInterval`\n- `ingress.otoroshi.io/enforceSecureCommunication`\n- `ingress.otoroshi.io/sendInfoToken`\n- `ingress.otoroshi.io/sendStateChallenge`\n- `ingress.otoroshi.io/secComHeaders.claimRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateResponseName`\n- `ingress.otoroshi.io/secComTtl`\n- `ingress.otoroshi.io/secComVersion`\n- `ingress.otoroshi.io/secComInfoTokenVersion`\n- `ingress.otoroshi.io/secComExcludedPatterns`\n- `ingress.otoroshi.io/secComSettings.size`\n- `ingress.otoroshi.io/secComSettings.secret`\n- `ingress.otoroshi.io/secComSettings.base64`\n- `ingress.otoroshi.io/secComUseSameAlgo`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.size`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64`\n- `ingress.otoroshi.io/secComAlgoInfoToken.size`\n- `ingress.otoroshi.io/secComAlgoInfoToken.secret`\n- `ingress.otoroshi.io/secComAlgoInfoToken.base64`\n- `ingress.otoroshi.io/securityExcludedPatterns`\n\nfor more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n\nwith the previous example, the ingress does not define any apikey, so the route is public. 
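\n\nYou can double-check that the route is indeed public by replaying the call from the previous section without any credentials\n\n```sh\n# no apikey is passed, yet the call goes through\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n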
If you want to enable apikeys on it, you can deploy the following descriptor\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\n ingress.otoroshi.io/group: http-app-group\n ingress.otoroshi.io/forceHttps: 'true'\n ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'\n ingress.otoroshi.io/overrideHost: 'true'\n ingress.otoroshi.io/allowHttp10: 'false'\n ingress.otoroshi.io/publicPatterns: ''\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nnow you can use an existing apikey in the `http-app-group` to access your app\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1\n```\n\n## Use Otoroshi CRDs for a better/full integration\n\nOtoroshi provides some Custom Resource Definitions in order to manage Otoroshi related entities from kubernetes\n\n- `service-groups`\n- `service-descriptors`\n- `apikeys`\n- `certificates`\n- `global-configs`\n- `jwt-verifiers`\n- `auth-modules`\n- `scripts`\n- `tcp-services`\n- `data-exporters`\n- `admins`\n- `teams`\n- `organizations`\n\nusing CRDs, you will be able to deploy and manage those entities from kubectl or the kubernetes api like\n\n```sh\nsudo kubectl get apikeys --all-namespaces\nsudo kubectl get service-descriptors --all-namespaces\ncurl -X GET \\\n -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \\\n -H 'Accept: application/json' -k \\\n https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1alpha1/apikeys | jq\n```\n\nYou can see these as better `Ingress` resources. Just as any `Ingress` resource can define which controller it uses (using the `kubernetes.io/ingress.class` annotation), you can choose another kind of resource instead of `Ingress`. With Otoroshi CRDs you can even define resources like `Certificate`, `Apikey`, `AuthModules`, `JwtVerifier`, etc. It will help you to use all the power of Otoroshi while using the deployment model of kubernetes.\n \n@@@ warning\nwhen using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the descriptors deployed will override the one in the Otoroshi datastore each time it's synced. So be careful: if you use the Otoroshi UI or the API, some changes in configuration may be overridden by the CRDs sync job.\n@@@\n\n### Resources examples\n\ngroup.yaml\n: @@snip [group.yaml](../snippets/crds/group.yaml) \n\napikey.yaml\n: @@snip [apikey.yaml](../snippets/crds/apikey.yaml) \n\nservice-descriptor.yaml\n: @@snip [service.yaml](../snippets/crds/service-descriptor.yaml) \n\ncertificate.yaml\n: @@snip [cert.yaml](../snippets/crds/certificate.yaml) \n\njwt.yaml\n: @@snip [jwt.yaml](../snippets/crds/jwt.yaml) \n\nauth.yaml\n: @@snip [auth.yaml](../snippets/crds/auth.yaml) \n\norganization.yaml\n: @@snip [orga.yaml](../snippets/crds/organization.yaml) \n\nteam.yaml\n: @@snip [team.yaml](../snippets/crds/team.yaml) \n\n\n### Configuration\n\nTo configure it, just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Otoroshi CRDs Controller`. 
Then add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"crds\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": false, // sync ingresses\n \"crds\": true, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nyou can find a more complete example of the configuration object [here](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/plugins/jobs/kubernetes/config.scala#L134-L163)\n\n### Note about `apikeys` and `certificates` resources\n\nApikeys and Certificates are a little bit different from the other resources. They have the ability to be defined without their secret part, but with an export setting, so otoroshi will generate the secret parts and export the apikey or the certificate to a kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)\n\nIn those resources you can define \n\n```yaml\nexportSecret: true \nsecretName: the-secret-name\n```\n\nand omit `clientSecret` for apikeys or `publicKey`, `privateKey` for certificates. 
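\n\nFor an apikey, such a minimal resource could look like the following sketch (the resource name, secret name and authorized group are illustrative; the fields mirror the full example further below)\n\n```yaml\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n # illustrative names, adapt them to your case\n name: apikey-1\nspec:\n # no clientSecret here: otoroshi generates it and exports it to the secret below\n exportSecret: true \n secretName: apikey-1-secret\n authorizedEntities: \n - group_default\n```\n\n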
For certificates you will have to provide a `csr` in order to generate them\n\n```yaml\ncsr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n - httpapps.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n```\n\nwhen apikeys are exported as kubernetes secrets, they will have the type `otoroshi.io/apikey-secret` with values `clientId` and `clientSecret`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: apikey-1\ntype: otoroshi.io/apikey-secret\ndata:\n clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n```\n\nwhen certificates are exported as kubernetes secrets, they will have the type `kubernetes.io/tls` with the standard values `tls.crt` (the full cert chain) and `tls.key` (the private key). For more convenience, they will also have a `cert.crt` value containing the actual certificate without the ca chain and `ca-chain.crt` containing the ca chain without the certificate.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: certificate-1\ntype: kubernetes.io/tls\ndata:\n tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA== \n```\n\n## Full CRD example\n\nthen you can deploy the previous example with a better configuration level, using mtls, apikeys, etc\n\nLet's say the app looks like :\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\n// here we read the apikey to access http-app-2 from files mounted from secrets\nconst clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')\nconst clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')\n\nconst backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')\nconst backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')\nconst backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')\n\nconst clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')\nconst clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')\nconst clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')\n\nfunction callApi2() {\n return new Promise((success, failure) => {\n const options = { \n // using the implicit internal name (*.global.otoroshi.mesh) of the other service descriptor passing through otoroshi\n hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh', \n port: 443, \n path: '/', \n method: 'GET',\n headers: {\n 'Accept': 'application/json',\n 'Otoroshi-Client-Id': clientId,\n 'Otoroshi-Client-Secret': clientSecret,\n },\n cert: clientCert,\n key: clientKey,\n ca: clientCa\n }; \n let data = '';\n const req = https.request(options, (res) => { \n res.on('data', (d) => { \n data = data + d.toString('utf8');\n }); \n res.on('end', () => { \n success({ body: 
JSON.parse(data), res });\n }); \n res.on('error', (e) => { \n failure(e);\n }); \n }); \n req.end();\n })\n}\n\nconst options = { \n key: backendKey, \n cert: backendCert, \n ca: backendCa, \n // we want mtls behavior\n requestCert: true, \n rejectUnauthorized: true\n}; \nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {'Content-Type': 'application/json'});\n callApi2().then(resp => {\n res.end(JSON.stringify({ message: `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body })); \n });\n}).listen(443);\n```\n\nthen, the descriptors will be :\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: foo/http-app\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 443\n name: \"https\"\n volumeMounts:\n - name: apikey-volume\n # here you will be able to read apikey from files \n # - /var/run/secrets/kubernetes.io/apikeys/clientId\n # - /var/run/secrets/kubernetes.io/apikeys/clientSecret\n mountPath: \"/var/run/secrets/kubernetes.io/apikeys\"\n readOnly: true\n - name: backend-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/backend/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/backend\"\n readOnly: true\n - name: client-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/client/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/client/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/client\"\n readOnly: true\n volumes:\n - name: apikey-volume\n secret:\n # here we reference the secret name from apikey http-app-2-apikey-1\n secretName: secret-2\n - name: backend-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-backend\n secretName: http-app-certificate-backend-secret\n - name: client-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-client\n secretName: http-app-certificate-client-secret\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8443\n targetPort: https\n name: https\n selector:\n run: http-app-deployment\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ServiceGroup\nmetadata:\n name: http-app-group\n annotations:\n otoroshi.io/id: http-app-group\nspec:\n description: a group to hold services about the http-app\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-apikey-1\n# this apikey can be used to access the app\nspec:\n # a secret name secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-1\n authorizedEntities: \n - group_http-app-group\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-1\n# this apikey can be used to access another app in a different group\nspec:\n # a secret name secret-2 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-2\n authorizedEntities: \n - group_http-app-2-group\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-frontend\nspec:\n description: certificate for the http-app on the otoroshi frontend\n 
autoRenew: true\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-backend\nspec:\n description: certificate for the http-app deployed on pods\n autoRenew: true\n # a secret named http-app-certificate-backend-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-backend-secret\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - http-app-service \n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-back, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-client\nspec:\n description: client certificate for the http-app\n autoRenew: true\n # a secret named http-app-certificate-client-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-client-secret\n csr:\n issuer: CN=Otoroshi Root\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-client, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ServiceDescriptor\nmetadata:\n name: http-app-service-descriptor\nspec:\n description: the service descriptor for the http app\n groups: \n - http-app-group\n forceHttps: true\n hosts:\n - httpapp.foo.bar # hostname exposed outside of the kubernetes cluster\n # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster \n matchingRoot: /\n targets:\n - url: https://http-app-service:8443\n # alternatively, you can use serviceName and servicePort to use pods ip addresses\n # serviceName: http-app-service\n # servicePort: https\n mtlsConfig:\n # use mtls to contact the backend\n mtls: true\n certs: \n # reference the DN for the client cert\n - UID=httpapp-client, O=OtoroshiApps\n trustedCerts: \n # reference the DN for the CA cert \n - CN=Otoroshi Root\n sendOtoroshiHeadersBack: true\n xForwardedHeaders: true\n overrideHost: true\n allowHttp10: false\n publicPatterns:\n - /health\n additionalHeaders:\n x-foo: bar\n# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc. For more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n```\n\nNow, with this descriptor deployed, you can access your app with a command like\n\n```sh\nCLIENT_ID=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientId}\" | base64 --decode`\nCLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientSecret}\" | base64 --decode`\ncurl -X GET https://httpapp.foo.bar/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n## Expose Otoroshi to outside world\n\nIf you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: `LoadBalancer`). You'll need to declare, in your DNS settings, any name that should be routed by otoroshi as pointing to the loadbalancer endpoint (CNAME or IP addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they will provide external loadbalancers out of the box. 
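For instance, a `LoadBalancer` service exposing otoroshi could look like the following sketch (the `run: otoroshi-deployment` selector and the port names are assumptions based on the deployment example from the installation section):\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: otoroshi-service\n namespace: otoroshi\nspec:\n # ask the cloud provider (or something like MetalLB) for an external loadbalancer\n type: LoadBalancer\n selector:\n run: otoroshi-deployment\n ports:\n - port: 80\n targetPort: \"http\"\n name: \"http\"\n - port: 443\n targetPort: \"https\"\n name: \"https\"\n```\n\n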
However, if you use a bare metal kubernetes cluster, it doesn't come with support for external loadbalancers (services of type `LoadBalancer`), so you will have to provide this feature yourself in order to route external TCP traffic to the Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters, or you can use and customize the examples in the installation section.\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or similar) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@ \n\n## Access a service from inside the k8s cluster\n\n### Using host header overriding\n\nYou can access any service referenced in otoroshi, through otoroshi, from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain:\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using dedicated services\n\nIt's also possible to define services that target the otoroshi deployment (or the otoroshi workers deployment) and use them as valid hosts in otoroshi services\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-awesome-service\nspec:\n selector:\n # run: otoroshi-deployment\n # or in cluster mode\n run: otoroshi-worker-deployment\n ports:\n - port: 8080\n name: \"http\"\n targetPort: \"http\"\n - port: 8443\n name: \"https\"\n targetPort: \"https\"\n```\n\nand access it like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-namespace.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using coredns integration\n\nYou can also enable the coredns integration to simplify the flow. 
You can use the following keys in the plugin config:\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"coreDnsIntegration\": true, // enable coredns integration for intra cluster calls\n \"kubeSystemNamespace\": \"kube-system\", // the namespace where coredns is deployed\n \"corednsConfigMap\": \"coredns\", // the name of the coredns configmap\n \"otoroshiServiceName\": \"otoroshi-service\", // the name of the otoroshi service, could be otoroshi-workers-service\n \"otoroshiNamespace\": \"otoroshi\", // the namespace where otoroshi is deployed\n \"clusterDomain\": \"cluster.local\", // the domain for cluster services\n ...\n }\n}\n```\n\nOtoroshi will patch the coredns config at startup, then you can call your services like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\nBy default, all services created from CRD service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh` or `${service-name}.${service-namespace}.svc.otoroshi.local`\n\n### Using coredns with manual patching\n\nYou can also patch the coredns config manually\n\n```sh\nkubectl edit configmaps coredns -n kube-system # or your own custom config map\n```\n\nand change the `Corefile` data to add the following snippet at the end of the file\n\n```yaml\notoroshi.mesh:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n rewrite name regex (.*)\\.otoroshi\\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n```\n\nYou can also define a simpler rewrite if it suits your use case better\n\n```\nrewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n```\n\nDo not hesitate to change `otoroshi-worker-service.otoroshi` according to your own setup. If otoroshi is not in cluster mode, change it to `otoroshi-service.otoroshi`. If otoroshi is not deployed in the `otoroshi` namespace, change it to `otoroshi-service.the-namespace`, etc.\n\nBy default, all services created from CRD service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh`\n\nThen you can call your service like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\n\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using old kube-dns system\n\nIf you're stuck with an old version of kubernetes that uses kube-dns, which is not supported by otoroshi, you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system. 
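A stubDomain declaration in the kube-dns configmap could look like the following sketch (the IP address is a placeholder for the ClusterIP of your custom coredns service):\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: kube-dns\n namespace: kube-system\ndata:\n # forward every *.otoroshi.mesh query to your custom coredns deployment\n stubDomains: |\n {\"otoroshi.mesh\": [\"10.3.240.200\"]}\n```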
\n\nHere is an example of coredns deployment with the otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nThen you can enable the kube-dns integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"kubeDnsOperatorIntegration\": true, // enable kube-dns integration for intra cluster calls\n \"kubeDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"kubeDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"kubeDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\n### Using Openshift DNS operator\n\nThe Openshift DNS operator does not allow much customization of the DNS configuration, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator. \n\nHere is an example of coredns deployment with the otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nThen you can enable the Openshift DNS operator integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"openshiftDnsOperatorIntegration\": true, // enable openshift dns operator integration for intra cluster calls\n \"openshiftDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"openshiftDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"openshiftDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\nDon't forget to update the otoroshi `ClusterRole`\n\n```yaml\n- apiGroups:\n - operator.openshift.io\n resources:\n - dnses\n verbs:\n - get\n - list\n - watch\n - update\n```\n\n## Easier integration with otoroshi-sidecar\n\nOtoroshi can help you to easily use existing services without modifications while getting all the perks of otoroshi like apikeys, mTLS, the exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pods of your deployment that will handle calls coming from otoroshi and going to otoroshi. To enable the otoroshi-sidecar, you need to deploy the following admission webhooks\n\nwebhooks.yaml\n: @@snip [webhooks.yaml](../snippets/kubernetes/kustomize/base/webhooks.yaml)\n\nThen it's quite easy to add the sidecar: just add the label `otoroshi.io/sidecar: inject` to your pod and some annotations to tell otoroshi what certificates and apikeys to use.\n\n```yaml\nannotations:\n otoroshi.io/sidecar-apikey: backend-apikey\n otoroshi.io/sidecar-backend-cert: backend-cert\n otoroshi.io/sidecar-client-cert: oto-client-cert\n otoroshi.io/token-secret: secret\n otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps\n```\n\nNow you can just call your otoroshi-handled apis from inside your pod like `curl http://my-service.namespace.otoroshi.mesh/api` without passing any apikey or client certificate and the sidecar will handle everything for you. 
The same goes for calls from otoroshi to your pod: everything will be done in an mTLS fashion with apikeys and the otoroshi exchange protocol.\n\nHere is a full example\n\nsidecar.yaml\n: @@snip [sidecar.yaml](../snippets/kubernetes/kustomize/base/sidecar.yaml)\n\n@@@ warning\nPlease avoid using port `80` for your pod as it's the default port to access otoroshi from your pod; calls to it will be redirected to the sidecar via an iptables rule\n@@@\n\n## Daikoku integration\n\nIt is possible to easily integrate Daikoku-generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and set the integration mode to `Automatic`\n\n@@@ div { .centered-img }\n\n@@@\n\nThen, when a user subscribes for an apikey, they will only see an integration token\n\n@@@ div { .centered-img }\n\n@@@\n\nThen just create an ApiKey manifest with this token and you're good to go\n\n```yaml\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-3\nspec:\n exportSecret: true \n secretName: secret-3\n daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy\n```\n\n"},{"name":"other.md","id":"/deploy/other.md","url":"/deploy/other.html","title":"Others","content":"# Others\n\nOtoroshi can run wherever you want, even on a raspberry pi (Cluster^^) ;)\n\nThis section is not finished yet. So, as Otoroshi is available as a @ref:[Docker image](../getotoroshi/fromdocker.md) that you can run on any Docker compatible cloud, just go ahead and use it on your cloud provider until we have more detailed documentation.\n\n## Running Otoroshi on AWS Elastic Beanstalk\n\nSee the @ref:[dedicated page to run Otoroshi on AWS Elastic Beanstalk](./aws-beanstalk.md)\n\n## Running Otoroshi on Amazon Elastic Container Service\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html)\n\n## Running Otoroshi on GCE\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Google Compute Engine container integration](https://cloud.google.com/compute/docs/containers/deploying-containers)\n\n## Running Otoroshi on Azure\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/)\n\n## Running Otoroshi on Heroku\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Docker integration](https://devcenter.heroku.com/articles/container-registry-and-runtime)\n\n## Running Otoroshi on CloudFoundry\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Docker integration](https://docs.cloudfoundry.org/adminguide/docker.html)\n\n## Running Otoroshi on your own infrastructure\n\nAs Otoroshi is a [Play Framework](https://www.playframework.com) application, you can read the doc about putting a `Play` app in production.\n\nhttps://www.playframework.com/documentation/2.6.x/ProductionConfiguration\n\nDownload the latest @ref:[Otoroshi distribution](../getotoroshi/frombinaries.md), unzip it, customize it and run it.\n"},{"name":"scaling.md","id":"/deploy/scaling.md","url":"/deploy/scaling.html","title":"Scaling Otoroshi","content":"# Scaling Otoroshi\n\n## Using multiple instances with a front load balancer\n\nOtoroshi has been designed to work with multiple instances. 
If you already have an infrastructure using frontal load balancing, you just have to declare the Otoroshi instances as the targets of all the domain names handled by Otoroshi\n\n## Using master / workers mode of Otoroshi\n\nYou can read everything about it in @ref:[the clustering section](../topics/clustering.md) of the documentation.\n\n## Using IPVS\n\nYou can use [IPVS](https://en.wikipedia.org/wiki/IP_Virtual_Server) to load balance layer 4 traffic directly from the Linux Kernel to multiple instances of Otoroshi. You can find an example of configuration [here](http://www.linuxvirtualserver.org/VS-DRouting.html) \n\n## Using DNS Round Robin\n\nYou can use the [DNS round robin technique](https://en.wikipedia.org/wiki/Round-robin_DNS) to declare multiple A records under the domain names handled by Otoroshi.\n\n## Using software L4/L7 load balancers\n\nYou can use software L4/L7 load balancers like NGINX or HAProxy to load balance traffic across multiple instances of Otoroshi.\n\nNGINX L7\n: @@snip [nginx-http.conf](../snippets/nginx-http.conf) \n\nNGINX L4\n: @@snip [nginx-tcp.conf](../snippets/nginx-tcp.conf) \n\nHA Proxy L7\n: @@snip [haproxy-http.conf](../snippets/haproxy-http.conf) \n\nHA Proxy L4\n: @@snip [haproxy-tcp.conf](../snippets/haproxy-tcp.conf) \n\n## Using a custom TCP load balancer\n\nYou can also use any other TCP load balancer, from a hardware box to a small js file like\n\ntcp-proxy.js\n: @@snip [tcp-proxy.js](../snippets/tcp-proxy.js) \n\ntcp-proxy.rs\n: @@snip [tcp-proxy.rs](../snippets/proxy.rs) \n\n"},{"name":"dev.md","id":"/dev.md","url":"/dev.html","title":"Developing Otoroshi ","content":"# Developing Otoroshi \n\nIf you want to play with Otoroshi's code, here are some tips\n\n## The tools\n\nYou will need\n\n* git\n* JDK 11\n* SBT 1.3.x\n* Node 13 + yarn 1.x\n\n## Clone the repository\n\n```sh\ngit clone https://github.com/MAIF/otoroshi.git\n```\n\nor fork otoroshi and clone your own repository.\n\n## Run otoroshi in dev mode\n\nTo run otoroshi in dev mode, you'll need to run two separate processes to serve the javascript UI and the server part.\n\n### Javascript side\n\nJust go to `/otoroshi/javascript` and install the dependencies with\n\n```sh\nyarn install\n# or\nnpm install\n```\n\nthen run the dev server with\n\n```sh\nyarn start\n# or\nnpm run start\n```\n\n### Server side\n\nSet up the SBT opts with\n\n```sh\nexport SBT_OPTS=\"-Xmx2G -Xss6M\"\n```\n\nthen just go to `/otoroshi` and run the sbt console with \n\n```sh\nsbt\n```\n\nthen in the sbt console run the following command\n\n```sh\n~run -Dapp.storage=file -Dapp.liveJs=true -Dhttps.port=9998 -Dapp.privateapps.port=9999 -Dapp.adminPassword=password -Dapp.domain=oto.tools -Dplay.server.https.engineProvider=ssl.DynamicSSLEngineProvider -Dapp.events.maxSize=0\n```\n\nYou can now access your otoroshi instance at `http://otoroshi.oto.tools:9999`\n\n## Test otoroshi\n\nTo run the otoroshi tests, just go to `/otoroshi` and run the main test suite with\n\n```sh\nsbt 'testOnly OtoroshiTests'\n```\n\n## Create a release\n\nJust go to `/otoroshi/javascript` and build the UI\n\n```sh\nyarn install\nyarn build\n```\n\nthen go to `/otoroshi` and build the otoroshi distribution\n\n```sh\nsbt ';clean;compile;dist;assembly'\n```\n\nThe otoroshi build is waiting for you in `/otoroshi/target/scala-2.12/otoroshi.jar` or `/otoroshi/target/universal/otoroshi-1.x.x.zip`\n\n## Build the documentation\n\nFrom the root of your repository run\n\n```sh\nsh ./scripts/doc.sh all\n```\n\n## Format the sources\n\nFrom the 
root of your repository run\n\n```sh\nsh ./scripts/fmt.sh\n```"},{"name":"features.md","id":"/features.md","url":"/features.html","title":"Features ","content":"# Features \n\n@@@ warning\nThis section is under construction\n@@@\n\nAll the features supported by **Otoroshi** are listed below\n\n* Dynamic changes at runtime without full reload \n* Can proxy any HTTP/HTTP2 server (websockets and streamed responses included)\n* Full featured admin REST API to control Otoroshi the way you want, Swagger descriptor included\n* Gorgeous React Web UI\n* Full end-to-end streaming of HTTP requests and responses\n* Completely non blocking and async internals\n* @ref:[Official Docker image](./getotoroshi/fromdocker.md)\n* @ref:[Multi backend datastore support](./firstrun/datastore.md)\n * Redis\n * In memory\n * Cassandra (experimental support)\n * filedb (not suitable for production usage)\n* Pluggable modules system (plugins) \n * you can create your own modules to change the behavior of Otoroshi per service or globally\n * impacts on access validation, routing, body transformation, apikey extraction\n * listen to internal otoroshi events\n * modules can be written and deployed from the UI\n * lots of modules provided out of the box (see TODO:)\n* Full featured TLS integration\n * @ref:[Dynamic SSL termination](./topics/ssl.md)\n * mTLS support for input and output connections (end-to-end mTLS)\n * extended client certificate validation\n * TLS certificate automation (create, renew, etc) based on a CA certificate\n * ACME/Let's Encrypt support (create, renew)\n * on-the-fly certificate generation based on a CA certificate without request loss\n* Classic features for reverse proxying\n * expose the same service on multiple domain names (including wildcards)\n * support multiple loadbalancing algorithms\n * configurable circuit breaker per service, with timeouts per path and verb\n * @ref:[maintenance page per service](./usage/2-services.md)\n * @ref:[build page per service](./usage/2-services.md)\n * @ref:[force HTTPS usage per service](./usage/2-services.md)\n * @ref:[Add current Api key quotas usage in response headers](./usage/3-apikeys.md)\n * @ref:[Add current latencies in response headers](./usage/3-apikeys.md)\n * headers manipulation\n * routing headers\n * custom html error templates\n * healthcheck per service\n * sink services\n * CORS support\n * GZIP support\n * filtering on http verb and path\n* Api management features\n * throttling / daily quotas / monthly quotas per apikey\n * apikey authorization based on http verb and path\n * global throttling\n * global throttling per ip address\n * global or per service ip address blacklist / whitelist\n * automatic apikey secret rotation\n* Authentication modules\n * LDAP\n * In memory (managed by otoroshi)\n * OAuth2/OIDC\n * modules can be used for admin 
backoffice login\n * webauthentication support\n * sessions management from UI\n* JWT token utilities\n * validate incoming JWT tokens\n * transform incoming JWT tokens\n * chain multiple validators\n* Analytics / Metrics\n * rich traffic events for each proxied http request\n * @ref:[Live metrics per service and globally](./usage/4-monitor.md) \n * @ref:[Global metrics and analytics (requires elastic server)](./usage/7-metrics.md)\n * @ref:[Traffic events can be sent using webhooks or a Kafka topic](./setup/dangerzone.md#analytics-settings)\n * multiple technical metrics exporters (statsd, datadog, prometheus)\n* Audit trail\n * @ref:[Global audit log and alert log on admin actions](./usage/6-audit.md)\n * @ref:[Audit and alert events can be sent using webhooks or a Kafka topic](./setup/dangerzone.md#analytics-settings)\n * @ref:[Alert events can be sent to people by email using an email provider (Mailgun, Mailjet)](./integrations/mailgun.md)\n* Extract information from `User-Agent` headers to enrich traffic events\n* Extract geolocation information (needs an external service) to enrich traffic events\n* Support enterprise http proxies globally and per service\n* TCP proxy with SNI and TLS passthrough support\n* TCP / UDP tunneling\n * add web authentication on top of anything\n * local tunnel client with CLI or UI\n* @ref:[Canary mode per service](./topics/snow-monkey.md)\n* @ref:[Chaos engineering tools with the Snow Monkey](./topics/snow-monkey.md)\n* @ref:[Advanced CleverCloud integration (create services from CleverCloud apps)](./integrations/clevercloud.md) \n"},{"name":"configfile.md","id":"/firstrun/configfile.md","url":"/firstrun/configfile.html","title":"Config. with files","content":"# Config. with files\n\nThere are a lot of things you can configure in Otoroshi. By default, Otoroshi provides a configuration that should be enough for testing purposes. But you'll likely need to update this configuration when you move into production.\n\nOn this page, any configuration property can be set at runtime using a `-D` flag when launching Otoroshi like\n\n```sh\njava -Dhttp.port=8080 -jar otoroshi.jar\n```\n\nor\n\n```sh\n./bin/otoroshi -Dhttp.port=8080 \n```\n\nIf you want to define your own config file and use it on an otoroshi instance, use the following flag\n\n```sh\njava -Dconfig.file=/path/to/otoroshi.conf -jar otoroshi.jar\n``` \n\n## Common configuration\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `app.domain` | string | \"oto.tools\" | the domain on which the Otoroshi UI/API is exposed |\n| `app.rootScheme` | string | \"http\" | the scheme on which Otoroshi is exposed, either \"http\" or \"https\" |\n| `app.snowflake.seed` | number | 0 | this number is used to generate unique ids across the cluster. Each Otoroshi instance must have a unique seed. 
|\n| `app.events.maxSize` | number | 1000 | max number of analytic and alert events stored locally |\n| `app.backoffice.exposed` | boolean | true | whether the current Otoroshi instance exposes a backoffice UI |\n| `app.backoffice.subdomain` | string | \"otoroshi\" | the subdomain on which the Otoroshi backoffice will be served |\n| `app.backoffice.session.exp` | number | 86400000 | the number of milliseconds before the Otoroshi backoffice session expires |\n| `app.privateapps.subdomain` | string | \"privateapps\" | the subdomain on which private apps UI are served |\n| `app.privateapps.session.exp` | number | 86400000 | the number of milliseconds before the private apps session expires |\n| `app.claim.sharedKey` | string | \"secret\" | the shared secret used for signing the JWT token passed between Otoroshi and backend services |\n| `app.webhooks.size` | number | 100 | number of events sent at most when calling one of the analytics webhooks |\n| `app.throttlingWindow` | number | 10 | time window (in seconds) used to compute throttling quotas for ApiKeys |\n\n## Admin API configuration\n\nWhen Otoroshi starts for the first time, its datastore is empty. As Otoroshi uses Otoroshi to expose its admin REST API, you'll have to provide the details for the admin API exposition. **This part is super important** because if you go to production with the default values, your Otoroshi server won't be secure anymore.\n\n@@@ warning\nYOU HAVE TO CUSTOMIZE THE FOLLOWING VALUES BEFORE GOING TO PRODUCTION !!\n@@@\n\nSome of the following terms will seem obscure to you, but you will learn their meaning in the following chapters :)\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `app.adminapi.exposed` | boolean | true | whether the current Otoroshi instance exposes an admin API |\n| `app.adminapi.targetSubdomain` | string | \"otoroshi-admin-internal-api\" | the subdomain to which admin API calls will be redirected from `app.adminapi.exposedSubdomain` |\n| `app.adminapi.exposedSubdomain` | string | \"otoroshi-api\" | the subdomain on which the Otoroshi admin API will be exposed |\n| `app.adminapi.defaultValues.backOfficeGroupId` | string | \"admin-api-group\" | the name of the service group that will contain the service descriptor for the Otoroshi admin API |\n| `app.adminapi.defaultValues.backOfficeApiKeyClientId` | string | \"admin-api-apikey-id\" | the client id of the Otoroshi admin API apikey |\n| `app.adminapi.defaultValues.backOfficeApiKeyClientSecret` | string | \"admin-api-apikey-secret\" | the client secret of the Otoroshi admin API apikey |\n| `app.adminapi.defaultValues.backOfficeServiceId` | string | \"admin-api-service\" | the id of the service descriptor for the Otoroshi admin API |\n| `app.adminapi.proxy.https` | boolean | false | whether or not the current Otoroshi instance serves its content over https. This setting is useful for the backoffice UI to access the Otoroshi admin API |\n| `app.adminapi.proxy.local` | boolean | true | whether or not the admin API is accessible through `127.0.0.1`. This setting is useful for the backoffice UI to access the Otoroshi admin API |\n\n## Secrets config\n\nWhen Otoroshi starts for the first time, its secrets are set by default. \n\n@@@ warning\nYOU HAVE TO CUSTOMIZE AT LEAST `otoroshi.secret` BEFORE GOING TO PRODUCTION !!\n@@@\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `otoroshi.secret` | string | 'verysecretvaluethatyoumustoverwrite' | default Otoroshi secret. 
This value is used by default for other secrets |\n| `otoroshi.sessions.secret` | string | `otoroshi.secret` | Secret used to cipher session ids |\n| `play.http.secret.key` | string | `otoroshi.secret` | the secret used to sign the Otoroshi session cookie |\n\n## DB configuration\n\nAs Otoroshi supports multiple datastores, you'll have to provide some details about how to connect to/configure it.\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `app.storage` | string | \"inmemory\" | what kind of storage engine you want to use. Possible values are `inmemory`, `file`, `redis`, `cassandra` |\n| `app.importFrom` | string | | a file path or a URL to an Otoroshi export file. If the datastore is empty on startup, this file will be used to import data into the empty DB |\n| `app.importFromHeaders` | array | [] | a list of `:` separated headers to use if the `app.importFrom` setting is a URL |\n| `app.initialData` | object | | object representing Otoroshi internal data as exported from the danger zone so you don't need a config file and a data import file |\n| `app.redis.host` | string | \"localhost\" | the host of the redis server |\n| `app.redis.port` | number | 6379 | the port of the redis server |\n| `app.redis.slaves` | array | [] | the list of redis slaves |\n| `app.filedb.path` | string | \"./filedb\" | the path where filedb files will be written |\n| `app.cassandra.hosts` | string | \"127.0.0.1\" | the list of cassandra hosts |\n| `app.cassandra.host` | string | \"127.0.0.1\" | the host of the cassandra server |\n| `app.cassandra.port` | number | 9042 | the port of the cassandra servers |\n| `app.pg.uri` | string | | the uri of your pg database |\n| `app.pg.host` | string | localhost | the host of your pg database |\n| `app.pg.port` | number | 5432 | the port of your pg database |\n| `app.pg.database` | string | otoroshi | the database name |\n| `app.pg.user` | string | otoroshi | the username to connect to your pg database |\n| `app.pg.password` | string | otoroshi | the password to connect to your pg database |\n\n## Headers configuration\n\nOtoroshi uses a fair amount of http headers in order to work properly. 
The names of those headers are customizable to fit your needs.\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `otoroshi.headers.trace.label` | string | \"Otoroshi-Viz-From-Label\" | header to pass request tracing information |\n| `otoroshi.headers.trace.from` | string | \"Otoroshi-Viz-From\" | header to pass request tracing information (ip address) |\n| `otoroshi.headers.trace.parent` | string | \"Otoroshi-Parent-Request\" | header to pass request tracing information (parent request id) |\n| `otoroshi.headers.request.adminprofile` | string | \"Otoroshi-Admin-Profile\" | header to pass the admin name when the admin API is called from the Otoroshi backoffice |\n| `otoroshi.headers.request.clientid` | string | \"Otoroshi-Client-Id\" | header to pass the apikey client id |\n| `otoroshi.headers.request.clientsecret` | string | \"Otoroshi-Client-Secret\" | header to pass the apikey client secret |\n| `otoroshi.headers.request.id` | string | \"Otoroshi-Request-Id\" | header containing the id of the current request |\n| `otoroshi.headers.response.proxyhost` | string | \"Otoroshi-Proxied-Host\" | header containing the proxied host |\n| `otoroshi.headers.response.error` | string | \"Otoroshi-Error\" | header containing whether or not the request generated an error |\n| `otoroshi.headers.response.errormsg` | string | \"Otoroshi-Error-Msg\" | header containing the error message if any |\n| `otoroshi.headers.response.proxylatency` | string | \"Otoroshi-Proxy-Latency\" | header containing the current latency induced by Otoroshi |\n| `otoroshi.headers.response.upstreamlatency` | string | \"Otoroshi-Upstream-Latency\" | header containing the current latency from Otoroshi to the service backend |\n| `otoroshi.headers.response.dailyquota` | string | \"Otoroshi-Daily-Calls-Remaining\" | header containing the number of remaining daily calls (apikey) |\n| `otoroshi.headers.response.monthlyquota` | string | \"Otoroshi-Monthly-Calls-Remaining\" | header containing the number of remaining monthly calls (apikey) |\n| `otoroshi.headers.comm.state` | string | \"Otoroshi-State\" | header containing a random value for secured mode |\n| `otoroshi.headers.comm.stateresp` | string | \"Otoroshi-State-Resp\" | header containing a random value for secured mode |\n| `otoroshi.headers.comm.claim` | string | \"Otoroshi-Claim\" | header containing a JWT token for secured mode |\n| `otoroshi.headers.healthcheck.test` | string | \"Otoroshi-Health-Check-Logic-Test\" | header containing a logic test for healthcheck |\n| `otoroshi.headers.healthcheck.testresult` | string | \"Otoroshi-Health-Check-Logic-Test-Result\" | header containing the result of a logic test for healthcheck |\n| `otoroshi.headers.jwt.issuer` | string | \"Otoroshi\" | the name of the issuer for the JWT token |\n| `otoroshi.headers.canary.tracker` | string | \"Otoroshi-Canary-Id\" | header containing the ID of the canary session if enabled |\n\n## Play specific configuration\n\nAs Otoroshi is a [Play app](https://www.playframework.com/), you should take a look at the [Play configuration documentation](https://www.playframework.com/documentation/2.6.x/Configuration) to tune its internal configuration\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `http.port` | number | 8080 | the http port used by Otoroshi. You can use 'disabled' as value if you don't want to use http |\n| `https.port` | number | disabled | the https port used by Otoroshi. 
You can use 'disabled' as value if you don't want to use https |\n| `http2.enabled` | boolean | false | whether or not http2 is enabled on the Otoroshi server. You need to configure https (listed below) to be able to use it |\n| `play.http.secret.key` | string | \"secret\" | the secret used to sign the Otoroshi session cookie |\n| `play.http.session.secure` | boolean | false | whether or not the Otoroshi backoffice session will be served over https only |\n| `play.http.session.httpOnly` | boolean | true | whether or not the Otoroshi backoffice session cookie is http only (i.e. not accessible from JavaScript) |\n| `play.http.session.maxAge` | number | 259200000 | the number of milliseconds before the Otoroshi backoffice session expires |\n| `play.http.session.domain` | string | \".oto.tools\" | the domain on which the Otoroshi backoffice session is authorized |\n| `play.http.session.cookieName` | string | \"otoroshi-session\" | the name of the Otoroshi backoffice session cookie |\n| `play.ws.useragent` | string | \"Otoroshi\" | the user agent sent by Otoroshi if not present on the original http request |\n| `play.server.https.keyStore.path` | string | | the path to the keystore containing the private key and certificate; if not provided, a keystore is generated for you |\n| `play.server.https.keyStore.type` | string | JKS | the key store type, defaults to JKS |\n| `play.server.https.keyStore.password` | string | '' | the password, defaults to a blank password |\n| `play.server.https.keyStore.algorithm` | string | | the key store algorithm, defaults to the platform's default algorithm |\n\n## More config. options\n\nSee https://github.com/MAIF/otoroshi/blob/master/otoroshi/conf/base.conf and https://github.com/MAIF/otoroshi/blob/master/otoroshi/conf/application.conf\n\nIf you want to configure https on your Otoroshi server, just read the [PlayFramework documentation about it](https://www.playframework.com/documentation/2.6.x/ConfiguringHttps)\n\n## Example of a custom 
configuration file\n\n```conf\ninclude \"application.conf\"\n\nhttp.port = 8080\n\napp {\n storage = \"file\"\n importFrom = \"./my-state.json\"\n env = \"prod\"\n domain = \"oto.tools\"\n rootScheme = \"http\"\n snowflake {\n seed = 0\n }\n events {\n maxSize = 1000\n }\n backoffice {\n subdomain = \"otoroshi\"\n session {\n exp = 86400000\n }\n }\n privateapps {\n subdomain = \"privateapps\"\n session {\n exp = 86400000\n }\n }\n adminapi {\n targetSubdomain = \"otoroshi-admin-internal-api\"\n exposedSubdomain = \"otoroshi-api\"\n defaultValues {\n backOfficeGroupId = \"admin-api-group\"\n backOfficeApiKeyClientId = \"admin-api-apikey-id\"\n backOfficeApiKeyClientSecret = \"admin-api-apikey-secret\"\n backOfficeServiceId = \"admin-api-service\"\n }\n }\n claim {\n sharedKey = \"mysecret\"\n }\n filedb {\n path = \"./filedb/state.ndjson\"\n }\n}\n\nplay.http {\n session {\n secure = false\n httpOnly = true\n maxAge = 2592000000\n domain = \".oto.tools\"\n cookieName = \"oto-sess\"\n }\n}\n```\n\n## Reference configuration\n\n@@snip [reference.conf](../snippets/reference.conf) "},{"name":"datastore.md","id":"/firstrun/datastore.md","url":"/firstrun/datastore.html","title":"Choose your datastore","content":"# Choose your datastore\n\nRight now, Otoroshi supports multiple datastores.\n\nYou can choose one datastore over another depending on your use case.\n\nThe available datastores are the following:\n\n* in memory\n* redis\n* cassandra (experimental support, should be used in cluster mode for leaders)\n* postgresql or any postgresql compatible database like cockroachdb for instance (experimental support, should be used in cluster mode for leaders)\n* filedb (not suitable for production usage)\n\nThe **filedb** datastore is pretty handy for testing purposes, but is not supposed to be used in production mode.\n\nThe **in-memory** datastore is kind of interesting... It can be used for testing purposes, but it is also a good candidate for production because of its speed. You can check the clustering documentation to find out more about it.\n\nThe **redis** datastore is quite nice when you want to easily deploy several Otoroshi instances.\n\nIf you need a datastore more scalable than redis, then you can use the **postgresql** or **cassandra** datastore.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"env.md","id":"/firstrun/env.md","url":"/firstrun/env.html","title":"Config. with ENVs","content":"# Config. with ENVs\n\nNow that you know @ref:[how to configure Otoroshi with the config. file](./configfile.md), every property in the following block can be overridden by an environment variable (an env. variable is written like `${?ENV_VARIABLE}`).\n\n## Reference configuration for env. variables\n\n@@snip [reference-env.conf](../snippets/reference-env.conf) \n"},{"name":"host.md","id":"/firstrun/host.md","url":"/firstrun/host.html","title":"Setup your hosts","content":"# Setup your hosts\n\nBy default, Otoroshi starts with the domain `oto.tools`, which targets `127.0.0.1`. 
Of course you can change the domain; you just have to add the values in your `/etc/hosts` file according to the settings you put in the Otoroshi configuration\n\n* `app.domain` => `oto.tools`\n* `app.backoffice.subdomain` => `otoroshi`\n* `app.privateapps.subdomain` => `privateapps`\n* `app.adminapi.exposedSubdomain` => `otoroshi-api`\n* `app.adminapi.targetSubdomain` => `otoroshi-admin-internal-api`\n\nFor instance, if you want to change the default domain and use something like `otoroshi.mydomain.org`, then start otoroshi like\n\n```sh\njava -Dapp.domain=mydomain.org -jar otoroshi.jar\n```\n\n@@@ warning\nOtoroshi cannot be accessed using `http://127.0.0.1:8080` or `http://localhost:8080` because Otoroshi uses Otoroshi to serve its own UI and API. When otoroshi starts with an empty database, it will create a service descriptor for that using `app.domain` and the settings listed on this page and in the @ref:[Config. with files page](./configfile.md) that serve the Otoroshi API and UI on `http://otoroshi-api.${app.domain}` and `http://otoroshi.${app.domain}`.\nOnce the descriptor is saved in the database, if you want to change `app.domain`, you'll have to edit the descriptor in the database or restart Otoroshi with an empty database.\n@@@\n"},{"name":"index.md","id":"/firstrun/index.md","url":"/firstrun/index.html","title":"First run","content":"# First run\n\nNow that you have your own distro of Otoroshi, it's time to run it. \n\nBut before doing so, you'll have to make some choices about some essential stuff in order to have your own customized version of Otoroshi.\n\nLet's start with the datastore\n\n\n@@@ index\n\n* [choose a datastore](./datastore.md)\n* [use custom config file](./configfile.md)\n* [use ENV](./env.md)\n* [initial state](./initialstate.md)\n* [Hosts](./host.md)\n* [Run](./run.md)\n\n@@@"},{"name":"initialstate.md","id":"/firstrun/initialstate.md","url":"/firstrun/initialstate.html","title":"Import initial state","content":"# Import initial state\n\nNow you are almost ready to run Otoroshi for the first time, but maybe you want to import data from a previous Otoroshi installation into your current datastore.\n\nTo do that, you need to add the `app.importFrom` setting to the Otoroshi configuration (or the `$APP_IMPORT_FROM` env variable).\n\nIt can be a file path or a URL\n\n## Example of export\n\n```json\n{\n \"config\": {\n \"lines\": [\"prod\"], \n \"limitConcurrentRequests\": true,\n \"maxConcurrentRequests\": 500,\n \"useCircuitBreakers\": true,\n \"apiReadOnly\": false,\n \"registerFromCleverHook\": false,\n \"u2fLoginOnly\": true,\n \"ipFiltering\": {\n \"whitelist\": [],\n \"blacklist\": []\n },\n \"throttlingQuota\": 100000,\n \"perIpThrottlingQuota\": 500,\n \"analyticsEventsUrl\": null,\n \"analyticsWebhooks\": [],\n \"alertsWebhooks\": [],\n \"alertsEmails\": [],\n \"endlessIpAddresses\": []\n },\n \"admins\": [],\n \"simpleAdmins\": [\n {\n \"username\": \"admin@otoroshi.io\",\n \"password\": \"xxxxxxxxxxxxxxxxx\",\n \"label\": \"Otoroshi Admin\",\n \"createdAt\": 1493971715708\n }\n ],\n \"serviceGroups\": [\n {\n \"id\": \"default\",\n \"name\": \"default-group\",\n \"description\": \"The default group\"\n },\n {\n \"id\": \"admin-api-group\",\n \"name\": \"Otoroshi Admin Api group\",\n \"description\": \"No description\"\n }\n ],\n \"apiKeys\": [\n {\n \"clientId\": \"admin-api-apikey-id\",\n \"clientSecret\": \"admin-api-apikey-secret\",\n \"clientName\": \"Otoroshi Backoffice ApiKey\",\n \"authorizedEntities\": [\"group_admin-api-group\"],\n \"enabled\": true,\n \"throttlingQuota\": 
10000000,\n \"dailyQuota\": 10000000,\n \"monthlyQuota\": 10000000,\n \"metadata\": {}\n }\n ],\n \"serviceDescriptors\": [\n {\n \"id\": \"admin-api-service\",\n \"groupId\": \"admin-api-group\",\n \"name\": \"otoroshi-admin-api\",\n \"env\": \"prod\",\n \"domain\": \"oto.tools\",\n \"subdomain\": \"otoroshi-api\",\n \"targets\": [\n {\n \"host\": \"localhost:8080\",\n \"scheme\": \"http\"\n }\n ],\n \"root\": \"/\",\n \"enabled\": true,\n \"privateApp\": false,\n \"forceHttps\": false,\n \"maintenanceMode\": false,\n \"buildMode\": false,\n \"enforceSecureCommunication\": true,\n \"publicPatterns\": [],\n \"privatePatterns\": [],\n \"additionalHeaders\": {\n \"Host\": \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"matchingHeaders\": {},\n \"ipFiltering\": {\n \"whitelist\": [],\n \"blacklist\": []\n },\n \"api\": {\n \"exposeApi\": false\n },\n \"healthCheck\": {\n \"enabled\": false,\n \"url\": \"/\"\n },\n \"metadata\": {}\n }\n ],\n \"errorTemplates\": []\n}\n```\n"},{"name":"run.md","id":"/firstrun/run.md","url":"/firstrun/run.html","title":"Run Otoroshi","content":"# Run Otoroshi\n\nNow you are ready to run Otoroshi. You can run the following command with some tweaks depending on the way you want to configure Otoroshi. If you want to pass a custom configuration file, use the `-Dconfig.file=/path/to/file.conf` flag in the following commands.\n\n## From .zip file\n\n```sh\nunzip otoroshi-dist.zip\ncd otoroshi-vx.x.x\n./bin/otoroshi\n```\n\n## From .jar file\n\nFor Java 8 & Java 11\n\n```sh\njava -jar otoroshi.jar\n```\n\n## From docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi\n```\n\nYou can also pass useful args like:\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi -Dconfig.file=/usr/app/otoroshi/conf/otoroshi.conf -Dlogger.file=/usr/app/otoroshi/conf/otoroshi.xml\n```\n\nIf you want to provide your own config file, you can read @ref:[the documentation about config files](../firstrun/configfile.md).\n\nYou can also provide some ENV variables using the `--env` flag to customize your Otoroshi instance.\n\nThe list of possible env variables is available @ref:[here](../firstrun/env.md).\n\nYou can use a volume to provide configuration like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd):/usr/app/otoroshi/conf\" maif/otoroshi\n```\n\nYou can also use a volume if you choose to use the `filedb` datastore like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd)/filedb:/usr/app/otoroshi/filedb\" maif/otoroshi -Dapp.storage=file\n```\n\nYou can also use a volume if you choose to use export files:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd):/usr/app/otoroshi/imports\" maif/otoroshi -Dapp.importFrom=/usr/app/otoroshi/imports/export.json\n```\n\n## Run examples\n\n```sh\n$ java \\\n -Xms2G \\\n -Xmx8G \\\n -Dhttp.port=8080 \\\n -Dapp.importFrom=/home/user/otoroshi.json \\\n -Dconfig.file=/home/user/otoroshi.conf \\\n -jar ./otoroshi.jar\n\n[warn] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[warn] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[warn] otoroshi-env - Importing from: /home/user/otoroshi.json\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n```\n\nIf you choose to start Otoroshi without importing existing data, Otoroshi will create a new admin user and print the login details in the log. 
When you log into the admin dashboard, Otoroshi will ask you to create another account to avoid security issues.\n\n```sh\n$ java \\\n -Xms2G \\\n -Xmx8G \\\n -Dhttp.port=8080 \\\n -jar otoroshi.jar\n\n[warn] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[warn] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[warn] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / HHUsiF2UC3OPdmg0lGngEv3RrbIwWV5W\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n```\n"},{"name":"frombinaries.md","id":"/getotoroshi/frombinaries.md","url":"/getotoroshi/frombinaries.html","title":"From binaries","content":"# From binaries\n\nIf you want to download the latest version of Otoroshi and its CLI, you can grab them from the releases page of the Otoroshi github project:\n\nGo to https://github.com/MAIF/otoroshi/releases and get the latest version of the `otoroshi-dist.zip` file or `otoroshi.jar` file\n"},{"name":"fromdocker.md","id":"/getotoroshi/fromdocker.md","url":"/getotoroshi/fromdocker.html","title":"From docker","content":"# From docker\n\nIf you're a Docker aficionado, Otoroshi is provided as a Docker image that you can pull directly from the official repos.\n\nFirst, fetch the latest Docker image of Otoroshi:\n\n```sh\ndocker pull maif/otoroshi:1.5.0-dev\n# or \ndocker pull maif/otoroshi:latest\n# or \ndocker pull maif/otoroshi:jdk8-1.5.0-dev\n# or \ndocker pull maif/otoroshi:jdk11-1.5.0-dev\n# or \ndocker pull maif/otoroshi:jdk12-1.5.0-dev\n# or \ndocker pull maif/otoroshi:jdk13-1.5.0-dev\n# or \ndocker pull maif/otoroshi:jdk14-1.5.0-dev\n```"},{"name":"fromsources.md","id":"/getotoroshi/fromsources.md","url":"/getotoroshi/fromsources.html","title":"From sources","content":"# From sources\n\nTo build Otoroshi from sources, you need the following tools:\n\n* git\n* JDK 8\n* SBT\n* node\n* yarn\n\nOnce you've installed all those tools, go to the [Otoroshi github page](https://github.com/MAIF/otoroshi) and clone the sources:\n\n```sh\ngit clone https://github.com/MAIF/otoroshi.git --depth=1\n```\n\nThen you need to run the `build.sh` script to build the documentation, the React UI and the server:\n\n```sh\nsh ./scripts/build.sh\n```\n\nand that's all, you can grab your Otoroshi package at `otoroshi/target/scala-2.12/otoroshi` or `otoroshi/target/universal/`.\n\nFor those who want to build only parts of Otoroshi, read the following.\n\n## Build the documentation only\n\nGo to the `manual` folder and run:\n\n```sh\nsbt ';clean;paradox'\n```\n\nThe documentation is located at `manual/target/paradox/site/main/`\n\n## Build the React UI\n\nGo to the `otoroshi/javascript` folder and run:\n\n```sh\nyarn install\nyarn build\n```\n\nYou will find the JS bundle at `otoroshi/public/javascripts/bundle/bundle.js`.\n\n## Build the Otoroshi server\n\nGo to the `otoroshi` folder and run:\n\n```sh\nexport SBT_OPTS=\"-Xmx2G -Xss6M\"\nsbt ';clean;compile;dist;assembly'\n```\n\nYou will find your Otoroshi package at `otoroshi/target/scala-2.12/otoroshi` or `otoroshi/target/universal/`.\n"},{"name":"index.md","id":"/getotoroshi/index.md","url":"/getotoroshi/index.html","title":"Get Otoroshi","content":"# Get Otoroshi\n\nThere are several ways to get Otoroshi running on your system.\n\nLet's start with a good old build from sources :)\n\n@@@ index\n\n* [from sources](./fromsources.md)\n* [from binaries](./frombinaries.md)\n* [from 
docker](./fromdocker.md)\n\n@@@"},{"name":"index.md","id":"/index.md","url":"/index.html","title":"Otoroshi","content":"# Otoroshi\n\n**Otoroshi** is a layer of lightweight api management on top of a modern http reverse proxy written in Scala and developed by the MAIF OSS team that can handle all the calls to and between your microservices without service locator and lets you change configuration dynamically at runtime.\n\n\n> *The Otoroshi is a large hairy monster that tends to lurk on the top of the torii gate in front of Shinto shrines. It's a hostile creature, but also said to be the guardian of the shrine and is said to leap down from the top of the gate to devour those who approach the shrine for only self-serving purposes.*\n\n@@@ div { .centered-img }\n[![Build Status](https://travis-ci.org/MAIF/otoroshi.svg?branch=master)](https://travis-ci.org/MAIF/otoroshi) [![Join the chat at https://gitter.im/MAIF/otoroshi](https://badges.gitter.im/MAIF/otoroshi.svg)](https://gitter.im/MAIF/otoroshi?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [ ![Download](https://img.shields.io/github/release/MAIF/otoroshi.svg) ](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi.jar)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n## Installation\n\nYou can download the latest build of Otoroshi as a [fat jar](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi.jar), as a [zip package](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi-dist.zip) or as a @ref:[docker image](./getotoroshi/fromdocker.md).\n\nYou can install and run Otoroshi with this little bash snippet\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi.jar'\njava -jar otoroshi.jar\n```\n\nor using docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi:1.5.0-dev\n```\n\nNow open your browser to http://otoroshi.oto.tools:8080/, **log in with the credentials generated in the logs** and explore by yourself. If you want better instructions, just go to the @ref:[Quick Start](./quickstart.md) or directly to the @ref:[installation instructions](./getotoroshi/index.md)\n\n## Documentation\n\n* @ref:[About Otoroshi](./about.md)\n* @ref:[Architecture](./archi.md)\n* @ref:[Features](./features.md)\n* @ref:[Try Otoroshi in 5 minutes](./quickstart.md)\n* @ref:[Get Otoroshi](./getotoroshi/index.md)\n* @ref:[First run](./firstrun/index.md)\n* @ref:[Setup Otoroshi](./setup/index.md)\n* @ref:[Using Otoroshi](./usage/index.md)\n* @ref:[Third party Integrations](./integrations/index.md)\n* @ref:[Detailed topics](./topics/index.md)\n* @ref:[Admin REST API](./api.md)\n* @ref:[Deploy to production](./deploy/index.md)\n* @ref:[Developing Otoroshi](./dev.md)\n\n## Discussion\n\nJoin the [Otoroshi](https://gitter.im/MAIF/otoroshi) channel on the [MAIF Gitter](https://gitter.im/MAIF)\n\n## Sources\n\nThe sources of Otoroshi are available on [Github](https://github.com/MAIF/otoroshi).\n\n## Logo\n\nYou can find the official Otoroshi logo [on GitHub](https://github.com/MAIF/otoroshi/blob/master/resources/otoroshi-logo.png). 
The Otoroshi logo has been created by François Galioto ([@fgalioto](https://twitter.com/fgalioto))\n\n## Changelog\n\nEvery release, along with the migration instructions, is documented on the [Github Releases](https://github.com/MAIF/otoroshi/releases) page.\n\n## Patrons\n\nThe work on Otoroshi was funded by MAIF with the help of the community.\n\n## Licence\n\nOtoroshi is Open Source and available under the [Apache 2 License](https://opensource.org/licenses/Apache-2.0)\n\n@@@ index\n\n* [About Otoroshi](about.md)\n* [Architecture](archi.md)\n* [Features](features.md)\n* [Quickstart](quickstart.md)\n* [Get otoroshi](getotoroshi/index.md)\n* [First run](firstrun/index.md)\n* [Setup](setup/index.md)\n* [Using Otoroshi](usage/index.md)\n* [Integrations](integrations/index.md)\n* [Detailed topics](topics/index.md)\n* [Admin REST API](api.md)\n* [Deploy to production](deploy/index.md)\n* [Developing Otoroshi](./dev.md)\n\n@@@\n"},{"name":"analytics.md","id":"/integrations/analytics.md","url":"/integrations/analytics.html","title":"Analytics","content":"# Analytics\n\nEach action and request on Otoroshi creates events that can be sent outside of Otoroshi for further usage. Those events can be sent using a webhook and/or through a Kafka topic.\n\n## Push events to Elasticsearch\n\n@@@ warning\nOtoroshi supports only Elasticsearch versions under 7.0\n@@@\n\nYou can use elasticsearch to store otoroshi events. To do this you have to configure the access to elasticsearch from `settings (cog icon) / Danger Zone` and expand the `Analytics: Elastic cluster (write)` section.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Read events from Elasticsearch\n\nYou can also read otoroshi events stored in elasticsearch. To do this you have to configure the access to elasticsearch from `settings (cog icon) / Danger Zone` and expand the `Analytics: Elastic dashboard datasource (read)` section.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Push events to WebHooks\n\nGo to `settings (cog icon) / Danger Zone` and expand the `Analytics: Webhooks` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nHere you can configure the URL of the webhook and its headers if needed.\n\n## Push events to Kafka\n\nEvents can also be sent through a Kafka topic. Go to `settings (cog icon) / Danger Zone` and expand the `Analytics: Kafka` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the form; the default values for topic names are:\n\n* `otoroshi-alerts`\n* `otoroshi-analytics`\n* `otoroshi-audits`\n\n@@@ warning\nIf you use a truststore/keystore to access your kafka instances, the paths should be absolute and refer to host paths. 
You can also choose a client certificate from otoroshi for client authentication.\n@@@\n"},{"name":"clevercloud.md","id":"/integrations/clevercloud.md","url":"/integrations/clevercloud.html","title":"Clever Cloud","content":"# Clever Cloud\n\nOtoroshi provides an integration with Clever Cloud to easily create services based on applications deployed on your Clever Cloud account.\nGo to `settings (cog icon) / Danger Zone` and expand the `CleverCloud settings` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the form with your CleverCloud credentials (https://www.clever-cloud.com/doc/clever-cloud-apis/cc-api/) and your CleverCloud `organization id`.\n\nOnce it's done, you will see a new menu in the side bar.\n\n@@@ div { .centered-img }\n\n@@@\n\nIf you click on it, you'll see a page listing all your apps deployed on Clever Cloud with buttons to create new services with the app as the target.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou will also see a new button in the `Target` section of services to attach Clever Cloud applications as targets for a service.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"index.md","id":"/integrations/index.md","url":"/integrations/index.html","title":"Third party Integrations","content":"# Third party Integrations\n\nOtoroshi provides some settings to interact with some third party systems.\n\n@@@ index\n\n* [Analytics](./analytics.md)\n* [Mailgun / Mailjet](./mailgun.md)\n* [StatsD / Datadog](./statsd.md)\n* [clevercloud](./clevercloud.md)\n\n@@@\n"},{"name":"mailgun.md","id":"/integrations/mailgun.md","url":"/integrations/mailgun.html","title":"Mailgun","content":"# Mailgun\n\nIf you want to receive Otoroshi alerts by email, you have to configure Otoroshi with your Mailgun credentials. Go to `settings (cog icon) / Danger Zone` and expand the `Mailgun settings` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the form with the information provided on the `domain informations` page on Mailgun located at https://app.mailgun.com/app/domains/my.domain.\n\nThen, expand the `Alert settings` section and add email addresses separated by commas in the `Alert emails` field. **Don't forget to save.**\n\n@@@ div { .centered-img }\n\n@@@\n\n# Mailjet\n\nOtoroshi also supports Mailjet. 
Just select `Mailjet` in `Mailer settings type` and fill the requested fields."},{"name":"statsd.md","id":"/integrations/statsd.md","url":"/integrations/statsd.html","title":"StatsD / Datadog","content":"# StatsD / Datadog\n\nOtoroshi provides a StatsD integration to monitor some technical metrics across all your Otoroshi instances.\nGo to `settings (cog icon) / Danger Zone` and expand the `Statsd settings` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nAdd the host and port of the Statsd agent on your system.\nIf you're using Datadog, don't forget to check the `Datadog` switch.\n"},{"name":"quickstart.md","id":"/quickstart.md","url":"/quickstart.html","title":"Try Otoroshi in 5 minutes","content":"# Try Otoroshi in 5 minutes\n\nWhat you will need:\n\n* JDK 11\n* curl\n* jq\n* 5 minutes of free time\n\n## The elevator pitch\n\nOtoroshi is an awesome reverse proxy built with Scala that handles all the calls to and between your microservices without service locator and lets you change configuration dynamically at runtime.\n\n## Download otoroshi\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.0-dev/otoroshi.jar'\n```\n\nIf you don't or can't have these tools on your machine, you can start a sandboxed environment with the following command\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi\n```\n\n## Start otoroshi\n\nTo start otoroshi, just run the following command\n\n```sh\njava -jar otoroshi.jar\n```\n\nThis will start an in-memory otoroshi instance with a generated password that will be printed in the logs. You can set the password with the following flags\n\n```sh\njava -Dapp.adminLogin=admin@foo.bar -Dapp.adminPassword=password -jar otoroshi.jar\n```\n\nIf you want to have otoroshi content persisted between launches without having to set up a datastore, just use the following flag\n\n```sh\njava -Dapp.storage=file -jar otoroshi.jar\n```\n\nAs a result, you will see something like\n\n```log\n$ java -jar otoroshi.jar\n\n[info] otoroshi-env - Otoroshi version 1.5.0-dev\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[warn] otoroshi-env - Scripting is enabled on this Otoroshi instance !\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / xol1Kwjzqe9OXjqDxxPPbPb9p0BPjhCO\n[info] play.api.Play - Application started (Prod)\n[info] otoroshi-script-manager - Compiling and starting scripts ...\n[info] otoroshi-script-manager - Finding and starting plugins ...\n[info] otoroshi-script-manager - Compiling and starting scripts done in 18 ms.\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n[info] otoroshi-script-manager - Finding and starting plugins done in 4681 ms.\n[info] otoroshi-env - Generating CA certificate for Otoroshi self signed certificates ...\n[info] otoroshi-env - Generating a self signed SSL certificate for https://*.oto.tools ...\n```\n\n## Log into the admin UI\n\nJust go to http://otoroshi.oto.tools:8080 and log in with the credentials printed in the logs\n\n## Create your first service\n\nTo create your first service, you can either do it using the admin UI or using the admin API. 
## Create your first service\n\nto create your first service, you can either do it using the admin UI or using the admin API. Let's use the admin API.\n\nBy default, otoroshi registers an admin apikey with the `admin-api-apikey-id:admin-api-apikey-secret` value (those values can be tuned at first startup). Of course you can create your own with\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/apikeys/_template \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n -d '{\n \"clientId\": \"quickstart\",\n \"clientSecret\": \"secret\",\n \"clientName\": \"quickstart-apikey\",\n \"authorizedEntities\": [\"group_admin-api-group\"]\n}' | jq\n```\n\nnow let's create a new service to proxy `https://maif.github.io` on the domain `maif.oto.tools`. This service will be public and will not require an apikey to pass\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/_template \\\n -u quickstart:secret \\\n -d '{\n \"name\": \"quickstart-service\", \n \"hosts\": [\"maif.oto.tools\"], \n \"targets\": [{ \"host\": \"maif.github.io\", \"scheme\": \"https\" }], \n \"publicPatterns\": [\"/.*\"]\n}' | jq\n```\n\nnow just go to `http://maif.oto.tools:8080` to check if it works\n\n## Create a service to proxy an api\n\nnow we will proxy the api at `https://aws.random.cat/meow` that returns random cat pictures, and make it use apikeys.\n\n```sh\n$ curl https://aws.random.cat/meow | jq\n\n{\n \"file\": \"https://purr.objects-us-east-1.dream.io/i/20161003_163413.jpg\"\n}\n```\n\nFirst let's create the service\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/_template \\\n -u quickstart:secret \\\n -d '{\n \"id\": \"cats-api\",\n \"name\": \"cats-api\", \n \"hosts\": [\"cats.oto.tools\"], \n \"targets\": [{ \"host\": \"aws.random.cat\", \"scheme\": \"https\" }],\n \"root\": \"/meow\"\n}' | jq\n```\n\nbut if you try to use it, you will get something like:\n\n```sh\n$ curl http://cats.oto.tools:8080 | jq\n\n{\n \"Otoroshi-Error\": \"No ApiKey provided\"\n}\n```\n\nthat's because the api is not public and needs apikeys to access it. So let's create an apikey\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/apikeys/_template \\\n -u quickstart:secret \\\n -d '{\n \"clientId\": \"apikey1\",\n \"clientSecret\": \"secret\",\n \"clientName\": \"quickstart-apikey-1\",\n \"authorizedEntities\": [\"group_default\"]\n}' | jq\n```\n\nand try again\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret | jq\n\n{\n \"file\": \"https://purr.objects-us-east-1.dream.io/i/vICG4.gif\"\n}\n```\n\n
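note that if you wanted this api to stay public instead, you could simply have patched its `publicPatterns` field, reusing the same admin API PATCH mechanism that is used for quotas below (a sketch based on the entities created above):\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/publicPatterns\", \"value\": [\"/.*\"] }\n]' | jq\n```\n\n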
now let's try to play with quotas. First, we need to know the current state of the apikey quotas, by enabling the otoroshi headers about consumption\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/sendOtoroshiHeadersBack\", \"value\": true }\n]' | jq\n```\n\nand retry the call with\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 200 OK\nDate: Tue, 10 Mar 2020 12:56:08 GMT\nServer: Apache\nExpires: Mon, 26 Jul 1997 05:00:00 GMT\nCache-Control: no-cache, must-revalidate\nOtoroshi-Request-Id: 1237361356529729796\nOtoroshi-Proxy-Latency: 79\nOtoroshi-Upstream-Latency: 416\nOtoroshi-Request-Timestamp: 2020-03-10T13:55:11.195+01:00\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET\nOtoroshi-Daily-Calls-Remaining: 9999998\nOtoroshi-Monthly-Calls-Remaining: 9999998\nContent-Type: application/json\nContent-Length: 71\n\n{\"file\":\"https:\\/\\/purr.objects-us-east-1.dream.io\\/i\\/beerandcat.jpg\"}\n```\n\nnow let's try to allow only 10 requests per day on the apikey\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api/apikeys/apikey1 \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/dailyQuota\", \"value\": 10 }\n]' | jq\n```\n\nthen try to call your api again\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 200 OK\nDate: Tue, 10 Mar 2020 13:00:01 GMT\nServer: Apache\nExpires: Mon, 26 Jul 1997 05:00:00 GMT\nCache-Control: no-cache, must-revalidate\nOtoroshi-Request-Id: 1237362334930829633\nOtoroshi-Proxy-Latency: 71\nOtoroshi-Upstream-Latency: 92\nOtoroshi-Request-Timestamp: 2020-03-10T13:59:04.456+01:00\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET\nOtoroshi-Daily-Calls-Remaining: 7\nOtoroshi-Monthly-Calls-Remaining: 9999997\nContent-Type: application/json\nContent-Length: 66\n\n{\"file\":\"https:\\/\\/purr.objects-us-east-1.dream.io\\/i\\/C1XNK.jpg\"}\n```\n\neventually you will get something like\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 429 Too Many Requests\nOtoroshi-Error: true\nOtoroshi-Error-Msg: You performed too much requests\nOtoroshi-State-Resp: --\nDate: Tue, 10 Mar 2020 12:59:11 GMT\nContent-Type: application/json\nContent-Length: 52\n\n{\"Otoroshi-Error\":\"You performed too much requests\"}\n```
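\n\nto get back to normal after the demo, you can raise the daily quota again with the same PATCH call (a sketch; the ~10M value matches the default suggested by the remaining-calls headers above):\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api/apikeys/apikey1 \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/dailyQuota\", \"value\": 10000000 }\n]' | jq\n```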
"},{"name":"admin.md","id":"/setup/admin.md","url":"/setup/admin.html","title":"Manage admin users","content":"# Manage admin users\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated and the UI may have changed\n@@@\n\n## Create admin user after the first run\n\nClick on the `Create an admin user` warning popup, or go to `settings (cog icon) / Admins`.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou will see the list of registered admin users.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on `Register admin`.\n\n@@@ div { .centered-img }\n\n@@@\n\nNow, enter information about the new admin you want to create.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on `Register Admin`.\n\n@@@ div { .centered-img }\n\n@@@\n\nNow you can discard the generated admin: confirm, then log out, log back in with the admin user you have just created, and the danger popup will go away\n\n@@@ div { .centered-img }\n\n@@@\n\n## Create admin user with U2F device login\n\nGo to `settings (cog icon) / Admins`, click on `Register Admin`.\n\n@@@ div { .centered-img }\n\n@@@\n\nEnter information about the new admin you want to create.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on `Register Admin with WebAuthn`.\n\nOtoroshi will ask you to plug in your FIDO U2F device and touch it to complete the registration.\n\n@@@ div { .centered-img }\n\n@@@\n\n@@@ warning\nTo be able to use FIDO U2F devices, Otoroshi must be served over https\n@@@\n\n## Discard admin user\n\nGo to `settings (cog icon) / Admins`: at the bottom of the page, you will see a list of admin users that you can discard. Just click on the `Discard User` button on the right side of the row and confirm that you actually want to discard an admin user.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Admin sessions management\n\nGo to `settings (cog icon) / Admins sessions`: you will see a list of active admin user sessions\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can either discard sessions one by one using the `Discard Session` button on each targeted row of the list, or discard all active sessions using the `Discard all sessions` button at the top of the page.\n"},{"name":"dangerzone.md","id":"/setup/dangerzone.md","url":"/setup/dangerzone.html","title":"Configure the Danger zone","content":"# Configure the Danger zone\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated and the UI may have changed\n@@@\n\nNow that you have an actual admin account, go to `settings (cog icon) / Danger Zone` in order to configure your Otoroshi instance.\n\n@@@ div { .centered-img }\n\n@@@\n\n
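note that everything you see in the Danger Zone is also plain data behind the Otoroshi admin api, so you can read (and script) the global configuration there too. A quick sketch, assuming the default admin apikey from the quickstart and that the `/api/globalconfig` admin api route is available on your version:\n\n```sh\ncurl http://otoroshi-api.oto.tools:8080/api/globalconfig \\\n -u admin-api-apikey-id:admin-api-apikey-secret | jq\n```\n\n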
## Commons settings\n\nThis part allows you to configure various things:\n\n* `No Auth0 login` => allows you to disable Auth0 login to the Otoroshi admin dashboard\n* `API read only` => disables `writes` on the Otoroshi admin api\n* `Use HTTP streaming` => use http streaming for each response. It should always be true\n* `Auto link default` => when no group is specified on a service, it will be assigned to the default one\n* `Use circuit breakers` => allows usage of circuit breakers for each service\n* `Log analytics on servers` => all analytics will be logged on the servers\n* `Use new http client as the default Http client` => all http calls will use the new http client by default\n* `Enable live metrics` => enable live metrics in the Otoroshi cluster. Performs a lot of writes in the datastore\n* `Digitus medius` => change the character of endless HTTP responses from `0` to `🖕`\n* `Limit concurrent requests` => allows you to enable a limit on the number of concurrent requests on an Otoroshi instance to avoid overloading\n* `Max concurrent requests` => max allowed number of concurrent requests on an Otoroshi instance to avoid overloading\n* `Max HTTP/1.0 response size` => max size of an HTTP/1.0 response, because they are memory mapped\n* `Max local events` => number of events stored locally (alerts and audits)\n* `lines` => at least one (`prod`). For the others, it will allow you to declare urls like `service.line.domain.tld`. For prod it will be `service.domain.tld`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Whitelist / blacklist settings\n\nOtoroshi is capable of filtering requests by ip address, allowing or blocking requests.\n\nOtoroshi also provides a fun feature called `Endless HTTP responses`. If you put an ip address in that field, then, for any http request on Otoroshi, every response will be 128 GB of `0`.\n\n@@@ div { .centered-img }\n\n@@@\n\n@@@ note\nNote that you may provide ip addresses with wildcards like the following `42.42.*.42` or `42.42.42.*` or `42.42.*.*`\n@@@\n\n## Global throttling settings\n\nOtoroshi is capable of managing throttling at a global level. Here you can configure the number of authorized requests per second on a single Otoroshi instance and the number of authorized requests per second for a unique ip address.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Analytics settings\n\nOne of the major features of Otoroshi is its ability to generate internal events. Those events are not stored in Otoroshi's datastore but can be sent using `WebHooks`. You can configure those `WebHooks` from the `Danger Zone`.\n\nOtoroshi is also capable of reading and displaying analytics from another MAIF product called `Omoïkane`. As Omoïkane is not publicly available yet, Otoroshi is also capable of storing events in an [Elastic](https://www.elastic.co/) cluster. For more information about analytics and what it does, just go to the @ref:[detailed chapter](../integrations/analytics.md)\n\n## Kafka settings\n\nOne of the major features of Otoroshi is its ability to generate internal events. These events are not stored in Otoroshi's datastore but can be sent using a [Kafka message broker](https://kafka.apache.org/). You can configure Kafka access from the `Danger Zone`.\n\nBy default, Otoroshi's alert events will be sent on the `otoroshi-alerts` topic, Otoroshi's audit events on the `otoroshi-audits` topic and Otoroshi's traffic events on the `otoroshi-analytics` topic.\n\n@@@ warning\nKeystore and truststore paths are optional local paths on the server hosting Otoroshi\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n
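to check what Otoroshi actually publishes, you can tail one of these topics with the standard Kafka console consumer (a quick sketch, assuming a broker reachable on `localhost:9092`):\n\n```sh\n# watch otoroshi alert events as they are emitted\nkafka-console-consumer \\\n --bootstrap-server localhost:9092 \\\n --topic otoroshi-alerts \\\n --from-beginning\n```\n\n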
For more information about Kafka integration and what it does, just go to the @ref:[detailed chapter](../integrations/analytics.md)\n\n## Alerts settings\n\nEach time a dangerous action or something unusual is performed on Otoroshi, it will create an alert and store it. You can be notified for each of these alerts using `WebHooks` or emails. To do so, just add the `WebHook` URL and optional headers in the `Danger Zone`, or any email address you want (you can add more than one email address).\n\nYou can enable mutual authentication via the `Use mTLS` button and add your certificates. The `TLS loose` option will block all untrustful ssl configs, while the `TrustAll` option allows any server certificate, even self-signed ones.\n\n@@@ div { .centered-img }\n\n@@@\n\n## StatsD settings\n\nOtoroshi is capable of sending internal metrics to a StatsD agent. Just put the host and port of your StatsD agent in the `Danger Zone` to collect these metrics. If you're using [Datadog](https://www.datadoghq.com), don't forget to check the dedicated button :)\n\n@@@ div { .centered-img }\n\n@@@\n\nFor more information about StatsD integration and what it does, just go to the @ref:[detailed chapter](../integrations/statsd.md)\n\n## Mailer settings\n\nIf you want to send emails for every alert generated by Otoroshi, you need to configure your Mailgun credentials in the `Danger Zone`. These parameters are provided in your Mailgun domain dashboard (e.g. https://app.mailgun.com/app/domains/my.domain.oto.tools) in the information section.\n\n@@@ div { .centered-img }\n\n@@@\n\nFor more information about Mailgun integration and what it does, just go to the @ref:[detailed chapter](../integrations/mailgun.md)\n\n## CleverCloud settings\n\nAs we built our products to run on Clever-Cloud, Otoroshi has a close integration with Clever-Cloud. In this section of the `Danger Zone` you can configure how to access the Clever-Cloud API.\n\nTo generate the needed values, please refer to the [Clever-Cloud documentation](https://www.clever-cloud.com/doc/clever-cloud-apis/cc-api/)\n\n@@@ div { .centered-img }\n\n@@@\n\nFor more information about Clever-Cloud integration and what it does, just go to the @ref:[detailed chapter](../integrations/clevercloud.md)\n\n## Import / exports and panic mode\n\nFor more details about imports and exports, please go to the @ref:[dedicated chapter](../usage/8-importsexports.md)\n\nAbout panic mode: it's an unusual feature that allows you to discard all current admin sessions, allow only admin users with U2F devices to log back in, and put the API in read-only mode. Only a person who has access to Otoroshi's datastore will be able to turn it back on.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"index.md","id":"/setup/index.md","url":"/setup/index.html","title":"Setup Otoroshi","content":"# Setup Otoroshi\n\nNow that Otoroshi is running, you are ready to log into the Otoroshi admin dashboard and set up your instance. Just go to:\n\nhttp://otoroshi.oto.tools:8080\n\nand you will see the login page\n\n@@@ div { .centered-img }\n\n@@@\n\n@@@ warning\nUse the credentials generated in Otoroshi **logs** during **first run**.\n@@@\n\n@@@ div { .centered-img #first-login-example }\n\n@@@\n\n(of course, you can change this url depending on the configuration you provided to Otoroshi).\n\nOnce logged in, the first screen you'll see should look like:\n\n@@@ div { .centered-img #first-login }\n\n@@@\n\nAs you can see, Otoroshi is not really happy about you being logged in with a generated admin account.\n\nBut we will fix that in the next chapter\n\n@@@ index\n\n* [create admins](./admin.md)\n* [configure danger zone](./dangerzone.md)\n\n@@@\n"},{"name":"clustering.md","id":"/topics/clustering.md","url":"/topics/clustering.html","title":"Otoroshi clustering","content":"# Otoroshi clustering\n\nOtoroshi can work as a cluster by default, as you can spin up many Otoroshi servers using the same datastore or datastore cluster. In that case any instance is capable of serving services, the Otoroshi admin UI, the Otoroshi admin API, etc.\n\n
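in its simplest form, a cluster is therefore just several instances sharing the same datastore (a quick sketch, assuming a Redis instance running locally):\n\n```sh\n# both instances will serve the same services and share the same state through redis\njava -Dapp.storage=redis -Dhttp.port=8081 -jar otoroshi.jar\njava -Dapp.storage=redis -Dhttp.port=8082 -jar otoroshi.jar\n```\n\n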
But sometimes, this is not enough. So Otoroshi provides an additional clustering model named `Leader / Workers` where there is a leader cluster ([control plane](https://en.wikipedia.org/wiki/Control_plane)), composed of Otoroshi instances backed by a datastore like Redis, PostgreSQL or Cassandra, that is in charge of all `writes` to the datastore through the Otoroshi admin UI and API, and a worker cluster ([data plane](https://en.wikipedia.org/wiki/Forwarding_plane)), composed of horizontally scalable Otoroshi instances backed by a super fast in-memory datastore, with the sole purpose of routing traffic to your services based on data synced from the leader cluster. With this distributed Otoroshi version, you can reach your goals of high availability, scalability and security.\n\nOtoroshi clustering currently only uses http internally for communications between leader and worker instances, so it is fully compatible with PaaS providers like [Clever-Cloud](https://www.clever-cloud.com/en/) that only provide one external port for http traffic.\n\n@@@ div { .centered-img }\n\n\n*Fig. 1: Simplified view*\n@@@\n\n@@@ div { .centered-img }\n\n\n*Fig. 2: Deployment view*\n@@@\n\n## Cluster configuration\n\n```hocon\notoroshi {\n cluster {\n mode = \"leader\" # can be \"off\", \"leader\", \"worker\"\n compression = 4 # compression of the data sent between leader cluster and worker cluster. From -1 (disabled) to 9\n leader {\n name = ${?CLUSTER_LEADER_NAME} # name of the instance, if none, it will be generated\n urls = [\"http://127.0.0.1:8080\"] # urls to contact the leader cluster\n host = \"otoroshi-api.oto.tools\" # host of the otoroshi api in the leader cluster\n clientId = \"apikey-id\" # otoroshi api client id\n clientSecret = \"secret\" # otoroshi api client secret\n cacheStateFor = 4000 # how long the state is cached (ms)\n }\n worker {\n name = ${?CLUSTER_WORKER_NAME} # name of the instance, if none, it will be generated\n retries = 3 # number of retries when calling the leader cluster\n timeout = 2000 # timeout when calling the leader cluster\n state {\n retries = ${otoroshi.cluster.worker.retries} # number of retries when calling the leader cluster on state sync\n pollEvery = 10000 # interval of time (ms) between 2 state syncs\n timeout = ${otoroshi.cluster.worker.timeout} # timeout when calling the leader cluster on state sync\n }\n quotas {\n retries = ${otoroshi.cluster.worker.retries} # number of retries when calling the leader cluster on quotas sync\n pushEvery = 2000 # interval of time (ms) between 2 quotas syncs\n timeout = ${otoroshi.cluster.worker.timeout} # timeout when calling the leader cluster on quotas sync\n }\n }\n }\n}\n```\n\n
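for instance, a worker that knows several leader instances can be declared by filling the `urls` array, either in the file above or with the matching system properties (a sketch; the leader hostnames are placeholders):\n\n```sh\njava -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0=http://otoroshi-leader-1.my.domain:8080 \\\n -Dotoroshi.cluster.leader.urls.1=http://otoroshi-leader-2.my.domain:8080 \\\n -jar otoroshi.jar\n```\n\n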
You can also use many env. variables to configure the Otoroshi cluster\n\n```hocon\notoroshi {\n cluster {\n mode = ${?CLUSTER_MODE}\n compression = ${?CLUSTER_COMPRESSION}\n leader {\n name = ${?CLUSTER_LEADER_NAME}\n host = ${?CLUSTER_LEADER_HOST}\n url = ${?CLUSTER_LEADER_URL}\n clientId = ${?CLUSTER_LEADER_CLIENT_ID}\n clientSecret = ${?CLUSTER_LEADER_CLIENT_SECRET}\n groupingBy = ${?CLUSTER_LEADER_GROUP_BY}\n cacheStateFor = ${?CLUSTER_LEADER_CACHE_STATE_FOR}\n stateDumpPath = ${?CLUSTER_LEADER_DUMP_PATH}\n }\n worker {\n name = ${?CLUSTER_WORKER_NAME}\n retries = ${?CLUSTER_WORKER_RETRIES}\n timeout = ${?CLUSTER_WORKER_TIMEOUT}\n state {\n retries = ${?CLUSTER_WORKER_STATE_RETRIES}\n pollEvery = ${?CLUSTER_WORKER_POLL_EVERY}\n timeout = ${?CLUSTER_WORKER_POLL_TIMEOUT}\n }\n quotas {\n retries = ${?CLUSTER_WORKER_QUOTAS_RETRIES}\n pushEvery = ${?CLUSTER_WORKER_PUSH_EVERY}\n timeout = ${?CLUSTER_WORKER_PUSH_TIMEOUT}\n }\n }\n }\n}\n```\n\n@@@ warning\nYou **should** use HTTPS exposition for the Otoroshi API that will be used for data sync, as sensitive information is exchanged between the control plane and the data plane.\n@@@\n\n@@@ warning\nYou **must** have the same cluster configuration on every Otoroshi instance (worker/leader), with only the name and mode changed for each instance. Some leader/worker settings are computed using the configuration of their worker/leader counterpart.\n@@@\n\n## Cluster UI\n\nOnce an Otoroshi instance is launched as a cluster leader, a new row of live metrics tiles will be available on the home page of the Otoroshi admin UI.\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can also access a more detailed view of the cluster at `Settings (cog icon) / Cluster View`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Run examples\n\nfor a leader\n\n```sh\njava -Dhttp.port=8091 -Dhttps.port=9091 -Dotoroshi.cluster.mode=leader -jar otoroshi.jar\n```\n\nfor a worker\n\n```sh\njava -Dhttp.port=8092 -Dhttps.port=9092 -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0=http://127.0.0.1:8091 -jar otoroshi.jar\n```\n"},{"name":"index.md","id":"/topics/index.md","url":"/topics/index.html","title":"Detailed topics","content":"# Detailed topics\n\nIn this section, you will find information about various topics supported by Otoroshi\n\n@@@ index\n\n* [Chaos engineering with the Snow Monkey](./snow-monkey.md)\n* [JWT Tokens verification](./jwt-verifications.md)\n* [SSL/TLS termination with Otoroshi](./ssl.md)\n* [Mutual TLS with Otoroshi](./mtls.md)\n* [Otoroshi clustering](./clustering.md)\n* [Otoroshi plugins](./plugins.md)\n* [Otoroshi monitoring](./monitoring.md)\n\n@@@\n"},{"name":"jwt-verifications.md","id":"/topics/jwt-verifications.md","url":"/topics/jwt-verifications.html","title":"JWT Tokens verification","content":"# JWT Tokens verification\n\nSometimes, it can be pretty useful to verify Jwt tokens coming from other providers on some services. Otoroshi provides a tool to do that per service. In the Service descriptor page, you can find a `Jwt token Verification` section dedicated to this topic.\n\n## Service descriptor local verifications\n\n@@@ div { .centered-img }\n\n@@@\n\nIn this section you can select the type of verification: you can choose whether the verifier is local to the `Service descriptor` or references a global one.\n\nYou can also enable/disable jwt verification and activate a strict mode. In strict mode, requests will be rejected if the jwt token is not found.\n\n### Jwt token location\n\nYou can use the `Source` selector to specify where the Jwt token can be found. 
\n\n* in a query string param\n\n@@@ div { .centered-img }\n\n@@@\n\n* in a header\n\n@@@ div { .centered-img }\n\n@@@\n\n* in a cookie\n\n@@@ div { .centered-img }\n\n@@@\n\n### Jwt signing\n\nYou can use the `Algo.` selector to specify the signing algorithm used to verify the token\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can choose between\n\n* Hmac + SHA256\n* Hmac + SHA384\n* Hmac + SHA512\n* RSA + SHA256\n* RSA + SHA384\n* RSA + SHA512\n* Elliptic Curve + SHA256\n* Elliptic Curve + SHA384\n* Elliptic Curve + SHA512\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can use syntax like `${env.MY_ENV_VAR}` or `${config.my.config.path}` to provide secret/key values.\n\n### Just verify signature and fields value\n\nUsing the `Verif. strategy` selector, you can choose `Verify jwt token`. This will verify that the token is signed according to the settings of the `jwt signing` section and that the fields provided in `Verify token fields` have the expected values. Then the token will be sent to the target as is.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Re-sign the token\n\nUsing the `Verif. strategy` selector, you can choose `Verify and re-sign jwt token`. This will perform the same verification as above, then the token will be re-signed using the settings provided in `Re-sign algo` and sent to the target.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Transform the token\n\nUsing the `Verif. strategy` selector, you can choose `Verify, re-sign and transform jwt token`. This will perform the same verification as above, then the token will be re-signed using the settings provided in `Re-sign algo`. You can also change the location of the token using `Token location`, remove fields using `Remove token fields`, set field values using `Set token fields` and even rename fields using `Rename token fields`.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also use a mini expression language in `Set token fields`. You just have to add expressions in values like `${expression}`. Supported expressions are the following:\n\n* `${date}` => set the current date\n* `${date.format('dd/MM/yyyy')}` => set the current date formatted with the format you want\n* `${token.fieldName}` => get the value of the field named `fieldName`\n* `${token.fieldName.replace('a', 'b')}` => get the value of the field named `fieldName` and replace `a` with `b`\n* `${token.fieldName.replaceAll('[0-9]', '-')}` => get the value of the field named `fieldName` and replace digits with `-`\n\nyou can of course use multiple expressions in one field like `my-value-is-${date}-with${token.user}`\n\n
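for a quick test of a header-based verifier, you can forge a token with the expected fields and secret (on https://jwt.io for example) and pass it in the configured header. A sketch, where `X-JWT-Token` is just an assumed header name and the token value a dummy one:\n\n```sh\nTOKEN='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiYm9iIn0.dummysignature'\ncurl http://myservice.oto.tools:8080/ -H \"X-JWT-Token: $TOKEN\"\n# with strict mode enabled, the same call without the header should be rejected\n```\n\n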
## Global verifications\n\nYou can create global jwt verifiers and reference them in your services (from the `Type` selector). When you set the type of verification to `Reference to a global definition`, you can choose an existing global jwt verifier\n\n@@@ div { .centered-img }\n\n@@@\n\nTo create a global verifier, go to `Settings (cog icon) / Global Jwt Verifiers` and it will display the list of global verifiers.\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can then create, edit or delete verifiers\n\n@@@ div { .centered-img }\n\n@@@\n\n"},{"name":"monitoring.md","id":"/topics/monitoring.md","url":"/topics/monitoring.html","title":"Monitoring Otoroshi","content":"# Monitoring Otoroshi\n\nThe Otoroshi API exposes two endpoints:\n\n* `/health`: the health of the Otoroshi instance\n* `/metrics`: the metrics of the Otoroshi instance, either in JSON or Prometheus format using the `Accept` header (with `application/json` / `application/prometheus` values) or the `format` query param (with `json` or `prometheus` values)\n\n## Endpoints security\n\nThe two endpoints are exposed publicly on the Otoroshi admin api. But you can remove the corresponding public pattern and query the endpoints using standard apikeys. If you don't want to use apikeys but don't want to expose the endpoints publicly either, you can define two config. variables (`app.health.accessKey` or `HEALTH_ACCESS_KEY` and `otoroshi.metrics.accessKey` or `OTOROSHI_METRICS_ACCESS_KEY`) that will hold an access key for the endpoints. Then you can call the endpoints with an `access_key` query param holding the value defined in the config. If you don't define `otoroshi.metrics.accessKey` but define `app.health.accessKey`, `otoroshi.metrics.accessKey` will have the value of `app.health.accessKey`.\n\n## Examples\n\nlet's say `app.health.accessKey` has the value `MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY`\n\n```sh\n$ curl http://otoroshi-api.oto.tools:8080/health\?access_key\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n{\"otoroshi\":\"healthy\",\"datastore\":\"healthy\"}\n\n$ curl -H 'Accept: application/json' 
http://otoroshi-api.oto.tools:8080/metrics\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n{\"version\":\"4.0.0\",\"gauges\":{\"attr.app.commit\":{\"value\":\"xxxx\"},\"attr.app.id\":{\"value\":\"xxxx\"},\"attr.cluster.mode\":{\"value\":\"Leader\"},\"attr.cluster.name\":{\"value\":\"otoroshi-leader-0\"},\"attr.instance.env\":{\"value\":\"prod\"},\"attr.instance.id\":{\"value\":\"xxxx\"},\"attr.instance.number\":{\"value\":\"0\"},\"attr.jvm.cpu.usage\":{\"value\":136},\"attr.jvm.heap.size\":{\"value\":1409},\"attr.jvm.heap.used\":{\"value\":112},\"internals.0.concurrent-requests\":{\"value\":1},\"internals.global.throttling-quotas\":{\"value\":2},\"jvm.attr.name\":{\"value\":\"2085@xxxx\"},\"jvm.attr.uptime\":{\"value\":2296900},\"jvm.attr.vendor\":{\"value\":\"JDK11\"},\"jvm.gc.PS-MarkSweep.count\":{\"value\":3},\"jvm.gc.PS-MarkSweep.time\":{\"value\":261},\"jvm.gc.PS-Scavenge.count\":{\"value\":12},\"jvm.gc.PS-Scavenge.time\":{\"value\":161},\"jvm.memory.heap.committed\":{\"value\":1477967872},\"jvm.memory.heap.init\":{\"value\":1690304512},\"jvm.memory.heap.max\":{\"value\":3005218816},\"jvm.memory.heap.usage\":{\"value\":0.03916456777568639},\"jvm.memory.heap.used\":{\"value\":117698096},\"jvm.memory.non-heap.committed\":{\"value\":166445056},\"jvm.memory.non-heap.init\":{\"value\":7667712},\"jvm.memory.non-heap.max\":{\"value\":994050048},\"jvm.memory.non-heap.usage\":{\"value\":0.1523920694986979},\"jvm.memory.non-heap.used\":{\"value\":151485344},\"jvm.memory.pools.CodeHeap-'non-nmethods'.committed\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-nmethods'.max\":{\"value\":5832704},\"jvm.memory.pools.CodeHeap-'non-nmethods'.usage\":{\"value\":0.28408093398876405},\"jvm.memory.pools.CodeHeap-'non-nmethods'.used\":{\"value\":1656960},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.committed\":{\"value\":11796480},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.max\":{\"value\":122912768},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.usage\":{\"value\":0.09536102872567315},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.used\":{\"value\":11721088},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.committed\":{\"value\":37355520},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.max\":{\"value\":122912768},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.usage\":{\"value\":0.2538573047187417},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.used\":{\"value\":31202304},\"jvm.memory.pools.Compressed-Class-Space.committed\":{\"value\":14942208},\"jvm.memory.pools.Compressed-Class-Space.init\":{\"value\":0},\"jvm.memory.pools.Compressed-Class-Space.max\":{\"value\":367001600},\"jvm.memory.pools.Compressed-Class-Space.usage\":{\"value\":0.033858838762555805},\"jvm.memory.pools.Compressed-Class-Space.used\":{\"value\":12426248},\"jvm.memory.pools.Metaspace.committed\":{\"value\":99794944},\"jvm.memory.pools.Metaspace.init\":{\"value\":0},\"jvm.memory.pools.Metaspace.max\":{\"value\":375390208},\"jvm.memory.pools.Metaspace.usage\":{\"value\":0.25168142904782426},\"jvm.memory.pools.Metaspace.used\":{\"value\":94478744},\"jvm.memory.pools.PS-Eden-Space.committed\":{\"value\":349700096},\"jvm.memory.pools.PS-Eden-Space.init\":{\"value\":422576128},\"jvm.memory.pools.PS-Eden-Space.max\":{\"value\":1110966272},\"jvm.memory.pools.P
S-Eden-Space.usage\":{\"value\":0.07505125052077188},\"jvm.memory.pools.PS-Eden-Space.used\":{\"value\":83379408},\"jvm.memory.pools.PS-Eden-Space.used-after-gc\":{\"value\":0},\"jvm.memory.pools.PS-Old-Gen.committed\":{\"value\":1127219200},\"jvm.memory.pools.PS-Old-Gen.init\":{\"value\":1127219200},\"jvm.memory.pools.PS-Old-Gen.max\":{\"value\":2253914112},\"jvm.memory.pools.PS-Old-Gen.usage\":{\"value\":0.014950035505168354},\"jvm.memory.pools.PS-Old-Gen.used\":{\"value\":33696096},\"jvm.memory.pools.PS-Old-Gen.used-after-gc\":{\"value\":23791152},\"jvm.memory.pools.PS-Survivor-Space.committed\":{\"value\":1048576},\"jvm.memory.pools.PS-Survivor-Space.init\":{\"value\":70254592},\"jvm.memory.pools.PS-Survivor-Space.max\":{\"value\":1048576},\"jvm.memory.pools.PS-Survivor-Space.usage\":{\"value\":0.59375},\"jvm.memory.pools.PS-Survivor-Space.used\":{\"value\":622592},\"jvm.memory.pools.PS-Survivor-Space.used-after-gc\":{\"value\":622592},\"jvm.memory.total.committed\":{\"value\":1644412928},\"jvm.memory.total.init\":{\"value\":1697972224},\"jvm.memory.total.max\":{\"value\":3999268864},\"jvm.memory.total.used\":{\"value\":269184904},\"jvm.thread.blocked.count\":{\"value\":0},\"jvm.thread.count\":{\"value\":82},\"jvm.thread.daemon.count\":{\"value\":11},\"jvm.thread.deadlock.count\":{\"value\":0},\"jvm.thread.deadlocks\":{\"value\":[]},\"jvm.thread.new.count\":{\"value\":0},\"jvm.thread.runnable.count\":{\"value\":25},\"jvm.thread.terminated.count\":{\"value\":0},\"jvm.thread.timed_waiting.count\":{\"value\":10},\"jvm.thread.waiting.count\":{\"value\":47}},\"counters\":{},\"histograms\":{},\"meters\":{},\"timers\":{}}\n\n$ curl -H 'Accept: application/prometheus' http://otoroshi-api.oto.tools:8080/metrics\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n# TYPE attr_jvm_cpu_usage gauge\nattr_jvm_cpu_usage 83.0\n# TYPE attr_jvm_heap_size gauge\nattr_jvm_heap_size 1409.0\n# TYPE attr_jvm_heap_used gauge\nattr_jvm_heap_used 220.0\n# TYPE internals_0_concurrent_requests gauge\ninternals_0_concurrent_requests 1.0\n# TYPE internals_global_throttling_quotas gauge\ninternals_global_throttling_quotas 3.0\n# TYPE jvm_attr_uptime gauge\njvm_attr_uptime 2372614.0\n# TYPE jvm_gc_PS_MarkSweep_count gauge\njvm_gc_PS_MarkSweep_count 3.0\n# TYPE jvm_gc_PS_MarkSweep_time gauge\njvm_gc_PS_MarkSweep_time 261.0\n# TYPE jvm_gc_PS_Scavenge_count gauge\njvm_gc_PS_Scavenge_count 12.0\n# TYPE jvm_gc_PS_Scavenge_time gauge\njvm_gc_PS_Scavenge_time 161.0\n# TYPE jvm_memory_heap_committed gauge\njvm_memory_heap_committed 1.477967872E9\n# TYPE jvm_memory_heap_init gauge\njvm_memory_heap_init 1.690304512E9\n# TYPE jvm_memory_heap_max gauge\njvm_memory_heap_max 3.005218816E9\n# TYPE jvm_memory_heap_usage gauge\njvm_memory_heap_usage 0.07680553268571043\n# TYPE jvm_memory_heap_used gauge\njvm_memory_heap_used 2.30817432E8\n# TYPE jvm_memory_non_heap_committed gauge\njvm_memory_non_heap_committed 1.66510592E8\n# TYPE jvm_memory_non_heap_init gauge\njvm_memory_non_heap_init 7667712.0\n# TYPE jvm_memory_non_heap_max gauge\njvm_memory_non_heap_max 9.94050048E8\n# TYPE jvm_memory_non_heap_usage gauge\njvm_memory_non_heap_usage 0.15262878997416435\n# TYPE jvm_memory_non_heap_used gauge\njvm_memory_non_heap_used 1.51720656E8\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__committed gauge\njvm_memory_pools_CodeHeap__non_nmethods__committed 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__init gauge\njvm_memory_pools_CodeHeap__non_nmethods__init 2555904.0\n# TYPE 
jvm_memory_pools_CodeHeap__non_nmethods__max gauge\njvm_memory_pools_CodeHeap__non_nmethods__max 5832704.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__usage gauge\njvm_memory_pools_CodeHeap__non_nmethods__usage 0.28408093398876405\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__used gauge\njvm_memory_pools_CodeHeap__non_nmethods__used 1656960.0\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__committed gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__committed 1.1862016E7\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__init gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__init 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__max gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__max 1.22912768E8\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__usage gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__usage 0.09610562183417755\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__used gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__used 1.1812608E7\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__committed gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__committed 3.735552E7\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__init gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__init 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__max gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__max 1.22912768E8\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__usage gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__usage 0.25493618368435084\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__used gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__used 3.1334912E7\n# TYPE jvm_memory_pools_Compressed_Class_Space_committed gauge\njvm_memory_pools_Compressed_Class_Space_committed 1.4942208E7\n# TYPE jvm_memory_pools_Compressed_Class_Space_init gauge\njvm_memory_pools_Compressed_Class_Space_init 0.0\n# TYPE jvm_memory_pools_Compressed_Class_Space_max gauge\njvm_memory_pools_Compressed_Class_Space_max 3.670016E8\n# TYPE jvm_memory_pools_Compressed_Class_Space_usage gauge\njvm_memory_pools_Compressed_Class_Space_usage 0.03386023385184152\n# TYPE jvm_memory_pools_Compressed_Class_Space_used gauge\njvm_memory_pools_Compressed_Class_Space_used 1.242676E7\n# TYPE jvm_memory_pools_Metaspace_committed gauge\njvm_memory_pools_Metaspace_committed 9.9794944E7\n# TYPE jvm_memory_pools_Metaspace_init gauge\njvm_memory_pools_Metaspace_init 0.0\n# TYPE jvm_memory_pools_Metaspace_max gauge\njvm_memory_pools_Metaspace_max 3.75390208E8\n# TYPE jvm_memory_pools_Metaspace_usage gauge\njvm_memory_pools_Metaspace_usage 0.25170985813247426\n# TYPE jvm_memory_pools_Metaspace_used gauge\njvm_memory_pools_Metaspace_used 9.4489416E7\n# TYPE jvm_memory_pools_PS_Eden_Space_committed gauge\njvm_memory_pools_PS_Eden_Space_committed 3.49700096E8\n# TYPE jvm_memory_pools_PS_Eden_Space_init gauge\njvm_memory_pools_PS_Eden_Space_init 4.22576128E8\n# TYPE jvm_memory_pools_PS_Eden_Space_max gauge\njvm_memory_pools_PS_Eden_Space_max 1.110966272E9\n# TYPE jvm_memory_pools_PS_Eden_Space_usage gauge\njvm_memory_pools_PS_Eden_Space_usage 0.17698545577448457\n# TYPE jvm_memory_pools_PS_Eden_Space_used gauge\njvm_memory_pools_PS_Eden_Space_used 1.96624872E8\n# TYPE jvm_memory_pools_PS_Eden_Space_used_after_gc gauge\njvm_memory_pools_PS_Eden_Space_used_after_gc 0.0\n# TYPE jvm_memory_pools_PS_Old_Gen_committed gauge\njvm_memory_pools_PS_Old_Gen_committed 1.1272192E9\n# TYPE jvm_memory_pools_PS_Old_Gen_init 
gauge\njvm_memory_pools_PS_Old_Gen_init 1.1272192E9\n# TYPE jvm_memory_pools_PS_Old_Gen_max gauge\njvm_memory_pools_PS_Old_Gen_max 2.253914112E9\n# TYPE jvm_memory_pools_PS_Old_Gen_usage gauge\njvm_memory_pools_PS_Old_Gen_usage 0.014950035505168354\n# TYPE jvm_memory_pools_PS_Old_Gen_used gauge\njvm_memory_pools_PS_Old_Gen_used 3.3696096E7\n# TYPE jvm_memory_pools_PS_Old_Gen_used_after_gc gauge\njvm_memory_pools_PS_Old_Gen_used_after_gc 2.3791152E7\n# TYPE jvm_memory_pools_PS_Survivor_Space_committed gauge\njvm_memory_pools_PS_Survivor_Space_committed 1048576.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_init gauge\njvm_memory_pools_PS_Survivor_Space_init 7.0254592E7\n# TYPE jvm_memory_pools_PS_Survivor_Space_max gauge\njvm_memory_pools_PS_Survivor_Space_max 1048576.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_usage gauge\njvm_memory_pools_PS_Survivor_Space_usage 0.59375\n# TYPE jvm_memory_pools_PS_Survivor_Space_used gauge\njvm_memory_pools_PS_Survivor_Space_used 622592.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_used_after_gc gauge\njvm_memory_pools_PS_Survivor_Space_used_after_gc 622592.0\n# TYPE jvm_memory_total_committed gauge\njvm_memory_total_committed 1.644478464E9\n# TYPE jvm_memory_total_init gauge\njvm_memory_total_init 1.697972224E9\n# TYPE jvm_memory_total_max gauge\njvm_memory_total_max 3.999268864E9\n# TYPE jvm_memory_total_used gauge\njvm_memory_total_used 3.82665128E8\n# TYPE jvm_thread_blocked_count gauge\njvm_thread_blocked_count 0.0\n# TYPE jvm_thread_count gauge\njvm_thread_count 82.0\n# TYPE jvm_thread_daemon_count gauge\njvm_thread_daemon_count 11.0\n# TYPE jvm_thread_deadlock_count gauge\njvm_thread_deadlock_count 0.0\n# TYPE jvm_thread_new_count gauge\njvm_thread_new_count 0.0\n# TYPE jvm_thread_runnable_count gauge\njvm_thread_runnable_count 25.0\n# TYPE jvm_thread_terminated_count gauge\njvm_thread_terminated_count 0.0\n# TYPE jvm_thread_timed_waiting_count gauge\njvm_thread_timed_waiting_count 10.0\n# TYPE jvm_thread_waiting_count gauge\njvm_thread_waiting_count 47.0\n```"},{"name":"mtls.md","id":"/topics/mtls.md","url":"/topics/mtls.html","title":"Mutual TLS with Otoroshi","content":"# Mutual TLS with Otoroshi\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated\n@@@\n\nOtoroshi supports mutual TLS out of the box. mTLS from client to Otoroshi and from Otoroshi to targets is supported. In this article we will see how to configure Otoroshi to use end-to-end mTLS. All code and files used in this article can be found on the [Otoroshi github](https://github.com/MAIF/otoroshi/tree/master/demos/mtls)\n\n@@@ note { title=\"Experimental Feature\" }\nDynamic Mutual TLS is an experimental feature. 
It can change until it becomes an official feature\n@@@\n\n## End-to-end mTLS\n\nThe use case is the following:\n\n@@@ div { .centered-img }\n\n@@@\n\nfor this demo you will have to edit your `/etc/hosts` file to add the following entries\n\n```\n127.0.0.1 api.backend.lol api.frontend.lol www.backend.lol www.frontend.lol validation.backend.lol\n```\n\n### Create certificates\n\nBut first we need to generate some certificates to make the demo work\n\n```sh\nmkdir mtls-demo\ncd mtls-demo\nmkdir ca\nmkdir server\nmkdir client\n\n# create a certificate authority key, use password as pass phrase\nopenssl genrsa -out ./ca/ca-backend.key 4096\n# remove pass phrase\nopenssl rsa -in ./ca/ca-backend.key -out ./ca/ca-backend.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key ./ca/ca-backend.key -out ./ca/ca-backend.cer -subj \"/CN=MTLSB\"\n\n\n# create a certificate authority key, use password as pass phrase\nopenssl genrsa -out ./ca/ca-frontend.key 2048\n# remove pass phrase\nopenssl rsa -in ./ca/ca-frontend.key -out ./ca/ca-frontend.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key ./ca/ca-frontend.key -out ./ca/ca-frontend.cer -subj \"/CN=MTLSF\"\n\n\n# now create the backend cert key, use password as pass phrase\nopenssl genrsa -out ./server/_.backend.lol.key 2048\n# remove pass phrase\nopenssl rsa -in ./server/_.backend.lol.key -out ./server/_.backend.lol.key\n# generate the csr for the certificate\nopenssl req -new -key ./server/_.backend.lol.key -sha256 -out ./server/_.backend.lol.csr -subj \"/CN=*.backend.lol\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./server/_.backend.lol.csr -CA ./ca/ca-backend.cer -CAkey ./ca/ca-backend.key -set_serial 1 -out ./server/_.backend.lol.cer\n# verify the certificate, should output './server/_.backend.lol.cer: OK'\nopenssl verify -CAfile ./ca/ca-backend.cer ./server/_.backend.lol.cer\n\n\n# now create the frontend cert key, use password as pass phrase\nopenssl genrsa -out ./server/_.frontend.lol.key 2048\n# remove pass phrase\nopenssl rsa -in ./server/_.frontend.lol.key -out ./server/_.frontend.lol.key\n# generate the csr for the certificate\nopenssl req -new -key ./server/_.frontend.lol.key -sha256 -out ./server/_.frontend.lol.csr -subj \"/CN=*.frontend.lol\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./server/_.frontend.lol.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 1 -out ./server/_.frontend.lol.cer\n# verify the certificate, should output './server/_.frontend.lol.cer: OK'\nopenssl verify -CAfile ./ca/ca-frontend.cer ./server/_.frontend.lol.cer\n\n\n# now create the client cert key for backend, use password as pass phrase\nopenssl genrsa -out ./client/_.backend.lol.key 2048\n# remove pass phrase\nopenssl rsa -in ./client/_.backend.lol.key -out ./client/_.backend.lol.key\n# generate the csr for the certificate\nopenssl req -new -key ./client/_.backend.lol.key -out ./client/_.backend.lol.csr -subj \"/CN=*.backend.lol\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./client/_.backend.lol.csr -CA ./ca/ca-backend.cer -CAkey ./ca/ca-backend.key -set_serial 2 -out ./client/_.backend.lol.cer\n# generate a pkcs12 version of the cert and key, use password as password\nopenssl pkcs12 -export -clcerts -in client/_.backend.lol.cer -inkey client/_.backend.lol.key -out client/_.backend.lol.p12\n\n\n# now create the client cert key for frontend, use password as pass phrase\nopenssl 
genrsa -out ./client/_.frontend.lol.key 2048\n# remove pass phrase\nopenssl rsa -in ./client/_.frontend.lol.key -out ./client/_.frontend.lol.key\n# generate the csr for the certificate\nopenssl req -new -key ./client/_.frontend.lol.key -out ./client/_.frontend.lol.csr -subj \"/CN=*.frontend.lol\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./client/_.frontend.lol.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 2 -out ./client/_.frontend.lol.cer\n# generate a pkcs12 version of the cert and key, use password as password\nopenssl pkcs12 -export -clcerts -in client/_.frontend.lol.cer -inkey client/_.frontend.lol.key -out client/_.frontend.lol.p12\n```\n\nonce it's done, you should have something like\n\n```sh\n$ tree\n.\n├── backend.js\n├── ca\n│ ├── ca-backend.cer\n│ ├── ca-backend.key\n│ ├── ca-frontend.cer\n│ └── ca-frontend.key\n├── client\n│ ├── _.backend.lol.cer\n│ ├── _.backend.lol.csr\n│ ├── _.backend.lol.key\n│ ├── _.backend.lol.p12\n│ ├── _.frontend.lol.cer\n│ ├── _.frontend.lol.csr\n│ ├── _.frontend.lol.key\n│ └── _.frontend.lol.p12\n└── server\n ├── _.backend.lol.cer\n ├── _.backend.lol.csr\n ├── _.backend.lol.key\n ├── _.frontend.lol.cer\n ├── _.frontend.lol.csr\n └── _.frontend.lol.key\n\n3 directories, 18 files\n```\n\n### The backend service \n\nnow, let's create a backend service using nodejs. Create a file named `backend.js`\n\n```sh\ntouch backend.js\n```\n\nand put the following content\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nconst options = { \n key: fs.readFileSync('./server/_.backend.lol.key'), \n cert: fs.readFileSync('./server/_.backend.lol.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n}; \n\nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {\n 'Content-Type': 'application/json'\n }); \n res.end(JSON.stringify({ message: 'Hello World!' }) + \"\\n\"); \n}).listen(8444);\n```\n\nto run the server, just do \n\n```sh\nnode ./backend.js\n```\n\nnow you can try your server with\n\n```sh\ncurl --cacert ./ca/ca-backend.cer https://api.backend.lol:8444/\n# will print {\"message\":\"Hello World!\"}\n```\n\nnow modify your backend server to ensure that the client provides a client certificate like:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nconst options = { \n key: fs.readFileSync('./server/_.backend.lol.key'), \n cert: fs.readFileSync('./server/_.backend.lol.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n requestCert: true, \n rejectUnauthorized: true\n}; \n\nhttps.createServer(options, (req, res) => { \n console.log('Client certificate CN: ', req.socket.getPeerCertificate().subject.CN);\n res.writeHead(200, {\n 'Content-Type': 'application/json'\n }); \n res.end(JSON.stringify({ message: 'Hello World!' 
}) + \"\\n\"); \n}).listen(8444);\n```\n\nyou can test your new server with\n\n```sh\ncurl --cacert ./ca/ca-backend.cer --cert-type pkcs12 --cert ./client/_.backend.lol.p12:password https://api.backend.lol:8444/\n# will print {\"message\":\"Hello World!\"}\n```\n\n### Otoroshi setup\n\nDownload the latest version of the Otoroshi jar and run it like\n\n```sh\njava -jar otoroshi.jar\n\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / xxxxxxxxxxxx\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n[info] otoroshi-env - Generating a self signed SSL certificate for https://*.oto.tools ...\n```\n\nand log into otoroshi with the tuple `admin@otoroshi.io / xxxxxxxxxxxx` displayed in the logs. Once logged in, create a new public service exposed on `http://api.frontend.lol` that targets `ahttps://api.backend.lol:8444/`.\n\n@@@ div { .centered-img }\n\n@@@\n\nand test it\n\n```sh\ncurl http://api.frontend.lol:8080/\n# the following error should be returned: {\"Otoroshi-Error\":\"Something went wrong, you should try later. Thanks for your understanding.\"}\n```\n\n@@@ warning\nAs seen before, the target of the otoroshi service is `ahttps://api.backend.lol:8444/`. `ahttps://` is not a typo and is intended. This tells otoroshi to use its experimental `http client` with dynamic tls support to fetch this resource.\n@@@\n\nyou should get an error due to the fact that Otoroshi doesn't know about the server certificate or the client certificate expected by the server.\n\nWe have to add the client certificate for `https://api.backend.lol` to Otoroshi. Go to http://otoroshi.oto.tools:8080/bo/dashboard/certificates and create a new item. Copy and paste the content of `./client/_.backend.lol.cer` and `./client/_.backend.lol.key` respectively in `Certificate full chain` and `Certificate private key`.\n\n@@@ div { .centered-img }\n\n@@@\n\nand retry the following curl command \n\n```sh\ncurl http://api.frontend.lol:8080/\n# the output should be: {\"message\":\"Hello World!\"}\n```\n\nnow we have to expose `https://api.frontend.lol:8443` using otoroshi. Go to http://otoroshi.oto.tools:8080/bo/dashboard/certificates and create a new item. Copy and paste the content of `./server/_.frontend.lol.cer` and `./server/_.frontend.lol.key` respectively in `Certificate full chain` and `Certificate private key`.\n\nand try the following command\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer https://api.frontend.lol:8443/\n# the output should be: {\"message\":\"Hello World!\"}\n```\n\nnow we have to enforce the fact that we want client certificate for `api.frontend.lol`. To do that, we have to create a `Validation authority` in otoroshi and use it on the `api.frontend.lol` service. Go to http://otoroshi.oto.tools:8080/bo/dashboard/validation-authorities and create a new item. A validation authority is supposed to be a remote service that will say if the client certificate is valid. 
Here we don't really care if the certificate is valid or not, but we want to enforce the fact that there is a client certificate. So just check the `All cert. valid` button.\n\n@@@ div { .centered-img }\n\n@@@\n\nnow go back to your `api.frontend.lol` service, in the `Validation authority` section, and select the authority you just created.\n\n@@@ div { .centered-img }\n\n@@@\n\nnow if you retry\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer https://api.frontend.lol:8443/\n# the output should be: {\"Otoroshi-Error\":\"You're not authorized here !\"}\n```\n\nyou should get an error because no client cert. is passed with the request. But if you pass the `./client/_.frontend.lol.p12` client cert in your curl call\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/_.frontend.lol.p12:password https://api.frontend.lol:8443/\n# the output should be: {\"message\":\"Hello World!\"}\n```\n\n### End to end test\n\nNow we can try to write a small nodejs client that uses our client certificates. Create a `client.js` file with the following code\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nprocess.env['NODE_TLS_REJECT_UNAUTHORIZED'] = 0;\n\nconst options = { \n hostname: 'api.frontend.lol', \n port: 8443, \n path: '/', \n method: 'GET', \n key: fs.readFileSync('./client/_.frontend.lol.key'), \n cert: fs.readFileSync('./client/_.frontend.lol.cer'), \n ca: fs.readFileSync('./ca/ca-frontend.cer'), \n}; \n\nconst req = https.request(options, (res) => { \n console.log('statusCode', res.statusCode);\n console.log('headers', res.headers);\n console.log('body:');\n res.on('data', (data) => { \n process.stdout.write(data); \n }); \n}); \n\nreq.end(); \n\nreq.on('error', (e) => { \n console.error(e); \n});\n```\n\nand run the following command\n\n```sh\n$ node client.js\n# statusCode 200\n# headers { date: 'Mon, 10 Dec 2018 16:01:11 GMT',\n# connection: 'close',\n# 'transfer-encoding': 'chunked',\n# 'content-type': 'application/json' }\n# body:\n# {\"message\":\"Hello World!\"}\n```\n\nAnd that's it!\n\n## Validating client certificates based on user identity\n\n@@@ note { title=\"Experimental Feature\" }\nValidation authorities are an experimental feature. They can change until they become an official feature\n@@@\n\nThe use case is the following:\n\n@@@ div { .centered-img }\n\n@@@\n\nthe idea here is to provide a unique client certificate per device that can access Otoroshi, and use a validation authority to check if the user is allowed to access the underlying app with a specific device.\n\n
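since the device serial number will be carried as the certificate common name, you can always check what a given certificate will present to the validation authority (a quick sketch using the device certificate created in the next section):\n\n```sh\nopenssl x509 -in ./client/device-1 -noout -subject\n# should output something like: subject=CN = mbp-123456789\n```\n\n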
### Generate client certificates for devices\n\nTo do that we are going to create two client certificates, one per device (let's say a laptop and a desktop computer). We are going to use the device serial number as the common name of the certificate, to be able to identify the device behind the certificate.\n\n```sh\nopenssl genrsa -out ./client/device-1.key 2048\nopenssl rsa -in ./client/device-1.key -out ./client/device-1.key\nopenssl req -new -key ./client/device-1.key -out ./client/device-1.csr -subj \"/CN=mbp-123456789\"\nopenssl x509 -req -days 365 -sha256 -in ./client/device-1.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 3 -out ./client/device-1\nopenssl pkcs12 -export -clcerts -in client/device-1 -inkey client/device-1.key -out client/device-1.p12\n\nopenssl genrsa -out ./client/device-2.key 2048\nopenssl rsa -in ./client/device-2.key -out ./client/device-2.key\nopenssl req -new -key ./client/device-2.key -out ./client/device-2.csr -subj \"/CN=nuc-987654321\"\nopenssl x509 -req -days 365 -sha256 -in ./client/device-2.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 4 -out ./client/device-2\nopenssl pkcs12 -export -clcerts -in client/device-2 -inkey client/device-2.key -out client/device-2.p12\n```\n\n### Setup actual validation\n\nnow we are going to write a validation authority (with mTLS too) that is going to respond on `https://validation.backend.lol:8445`. The server has access to a list of apps, users and devices to check if everything is correct. In this implementation, the lists are hardcoded, but you can write your own implementation that will fetch data from your corporate LDAP, CA, etc. Create a `validation.js` file and add the following content. Don't forget to do `yarn add x509` before running the server with `node validation.js`\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \nconst x509 = require('x509');\n\n// list of known apps\nconst apps = [\n {\n \"id\": \"iogOIDH09EktFhydTp8xspGvdaBq961DUDr6MBBNwHO2EiBMlOdafGnImhbRGy8z\",\n \"name\": \"my-web-service\",\n \"description\": \"A service that says hello\",\n \"host\": \"www.frontend.lol\"\n }\n];\n\n// list of known users\nconst users = [\n {\n \"name\": \"Mathieu\",\n \"email\": \"mathieu@oto.tools\",\n \"appRights\": [\n {\n \"id\": \"iogOIDH09EktFhydTp8xspGvdaBq961DUDr6MBBNwHO2EiBMlOdafGnImhbRGy8z\",\n \"profile\": \"user\",\n \"forbidden\": false\n },\n {\n \"id\": \"PqgOIDH09EktFhydTp8xspGvdaBq961DUDr6MBBNwHO2EiBMlOdafGnImhbRGy8z\",\n \"profile\": \"none\",\n \"forbidden\": true\n },\n ],\n \"ownedDevices\": [\n \"mbp-123456789\",\n \"nuc-987654321\",\n ]\n }\n];\n\n// list of known devices\nconst devices = [\n {\n \"serialNumber\": \"mbp-123456789\",\n \"hardware\": \"Macbook Pro 2018 13 inc. 
with TouchBar, 2.6 GHz, 16 Gb\",\n \"acquiredAt\": \"2018-10-01\",\n },\n {\n \"serialNumber\": \"nuc-987654321\",\n \"hardware\": \"Intel NUC i7 3.0 GHz, 32 Gb\",\n \"acquiredAt\": \"2018-09-01\",\n },\n {\n \"serialNumber\": \"iphone-1234\",\n \"hardware\": \"Iphone XS, 256 Gb\",\n \"acquiredAt\": \"2018-12-01\",\n }\n];\n\nconst options = { \n key: fs.readFileSync('./server/_.backend.lol.key'), \n cert: fs.readFileSync('./server/_.backend.lol.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n requestCert: true, \n rejectUnauthorized: true\n}; \n\nfunction readBody(request) {\n return new Promise((success, failure) => {\n const body = [];\n request.on('data', (chunk) => {\n body.push(chunk);\n }).on('end', () => {\n const bodyStr = Buffer.concat(body).toString();\n success(JSON.parse(bodyStr));\n });\n });\n}\n\nfunction chainIsValid(chain) {\n // validate cert dates\n // validate cert against crl\n // validate whatever you want here\n return true;\n}\n\nfunction call(req, res) {\n readBody(req).then(body => {\n const service = body.service;\n const email = (body.user || { email: 'mathieu@oto.tools' }).email; // here, should not be null if used with an otoroshi auth. module\n // common name should be device serial number\n const commonName = x509.getSubject(body.chain).commonName\n // search for a known device\n const device = devices.filter(d => d.serialNumber === commonName)[0];\n // search for a known user\n const user = users.filter(d => d.email === email)[0];\n // search for a known application\n const app = apps.filter(d => d.id === service.id)[0];\n res.writeHead(200, { 'Content-Type': 'application/json' }); \n if (chainIsValid(body.chain.map(x509.parseCert)) && user && device && app) {\n // check if the user actually owns the device\n const userOwnsDevice = user.ownedDevices.filter(d => d === device.serialNumber)[0];\n // check if the user has rights to access the app\n const rights = user.appRights.filter(d => d.id === app.id)[0];\n const hasRightToUseApp = !rights.forbidden\n if (userOwnsDevice && hasRightToUseApp) {\n // yeah !!!!\n console.log(`Call from user \"${user.email}\" with device \"${device.hardware}\" on app \"${app.name}\" with profile \"${rights.profile}\" authorized`)\n res.end(JSON.stringify({ status: 'good', profile: rights.profile }) + \"\\n\"); \n } else {\n // nope !!! 
 nope, nope nope\n console.log(`Call from user \"${user.email}\" with device \"${device.hardware}\" on app \"${app.name}\" unauthorized because the user doesn't own the hardware or has no rights`)\n res.end(JSON.stringify({ status: 'unauthorized' }) + \"\\n\"); \n }\n } else {\n console.log(`Call unauthorized`)\n res.end(JSON.stringify({ status: 'unauthorized' }) + \"\\n\"); \n }\n });\n}\n\nhttps.createServer(options, call).listen(8445);\n```\n\nThe corresponding validation authority can be created in Otoroshi like \n\n```json\n{\n \"id\": \"r7m8j31rh66hhdia3ormfm0wfevu1kvg0zgaxsp3oxb6ivf7fy8kvygmvnrlxv81\",\n \"name\": \"Actual validation authority\",\n \"description\": \"Actual validation authority\",\n \"url\": \"https://validation.backend.lol:8445\",\n \"host\": \"validation.backend.lol\",\n \"goodTtl\": 600000,\n \"badTtl\": 60000,\n \"method\": \"POST\",\n \"path\": \"/certificates/_validate\",\n \"timeout\": 10000,\n \"noCache\": false,\n \"alwaysValid\": false,\n \"headers\": {}\n}\n```\n\nbut you don't need to create it right now.\n\nTypically, a validation authority server is a server with a route on `POST /certificates/_validate` that accepts `application/json` and returns `application/json` with a body like\n\n```json\n{\n \"apikey\": nullable {\n \"clientId\": String,\n \"clientName\": String,\n \"authorizedEntities\": Seq[String],\n \"enabled\": Boolean,\n \"readOnly\": Boolean,\n \"allowClientIdOnly\": Boolean,\n \"throttlingQuota\": Long,\n \"dailyQuota\": Long,\n \"monthlyQuota\": Long,\n \"metadata\": Map[String, String]\n },\n \"user\": nullable {\n \"email\": String,\n \"name\": String,\n },\n \"service\": {\n \"id\": String,\n \"name\": String,\n \"groups\": Seq[String],\n \"domain\": String,\n \"env\": String,\n \"subdomain\": String,\n \"root\": String,\n \"metadata\": String\n },\n \"chain\": PemFormattedCertificateChainString,\n \"fingerprints\": Array[String]\n}\n```\n\n\n### Setup Otoroshi\n\nYou can start Otoroshi and import data from the `state.json` file in the demo folder. The login tuple is `admin@otoroshi.io / password`. The `state.json` file contains everything you need for the demo, like certificates, service descriptors, auth. modules, etc ...\n\n```sh\njava -Dapp.importFrom=$(pwd)/state.json -Dapp.privateapps.port=8080 -jar otoroshi.jar\n\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - Importing from: /pwd/state.json\n[info] play.api.Play - Application started (Prod)\n[info] otoroshi-env - Successful import !\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n```\n\n### Testing \n\nYou can test the service with curl like\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/device-1.p12:password https://www.frontend.lol:8443/\n# output: Hello World !!!
\ncurl --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/device-2.p12:password https://www.frontend.lol:8443/\n# output: Hello World !!!
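\n# optional (not in the original demo): add -v to any of these calls to inspect the mutual TLS handshake\ncurl -v --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/device-1.p12:password https://www.frontend.lol:8443/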
\ncurl --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/_.frontend.lol.p12:password https://www.frontend.lol:8443/\n# output: {\"Otoroshi-Error\":\"You're not authorized here !\"}\n```\n\nAs expected, the first two calls work as their common names are known by the validation server. The last one fails as it is not known.\n\n### Validate user identity\n\nNow let's try to set up Firefox to provide the client certificate. Open the Firefox settings, go to `privacy settings and security` and click on `display certificates` at the bottom of the page. Here you can add the frontend CA (`./ca/ca-frontend.cer`) in the `Authorities` tab, check the 'authorize this CA to identify websites' option, and then, in the `certificates` tab, import one of the devices' `.p12` files (like `./client/device-1.p12`). Firefox will ask for the file's password (it should be `password`).\n\n@@@ div { .centered-img }\n\n@@@\n\nNow restart Firefox.\n\nNext, go to the `my-web-service` service in Otoroshi (log in with `admin@otoroshi.io / password`) and activate `Enforce user login` in the Authentication section. It means that now, you'll have to log in when you go to https://www.frontend.lol:8443. With authentication activated on Otoroshi, the user identity will be sent to the validation authority, so you can change the following line in the file `validation.js`\n\n```js\nconst email = (body.user || { email: 'mathieu@oto.tools' }).email; // here, should not be null if used with an otoroshi auth. module\n```\n\nto\n\n```js\nconst email = body.user.email;\n```\n\nThen, in Firefox, go to https://www.frontend.lol:8443/; Firefox will ask which client certificate to use. Select the one you imported (in the process, Firefox may warn you that the certificate of the site is self-signed, just ignore it and continue ;) )\n\n@@@ div { .centered-img }\n\n@@@\n\nThen, you'll see a login screen from Otoroshi. You can log in with `mathieu@oto.tools / password` and then you should see the hello world message.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Going further with user authentication\n\nFor stronger user authentication, you can try to use an auth. module backed by a Keycloak instance with YubiKey as a strong second authentication factor instead of the basic auth. module we used previously in this article.\n\"},{\"name\":\"plugins.md\",\"id\":\"/topics/plugins.md\",\"url\":\"/topics/plugins.html\",\"title\":\"Otoroshi plugins\",\"content\":\"# Otoroshi plugins\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated\n@@@\n\nWhen everything has failed and you absolutely need a feature in Otoroshi to make your use case work, there is a solution. Plugins are the feature in Otoroshi that lets you code how Otoroshi should behave when receiving, validating and routing an http request. With request plugins, you can change request / response headers and request / response bodies the way you want, provide your own apikey, etc.\n\n## Plugin types\n\nThere are several plugin types:\n\n* `request sinks` plugins: used when no services are matched in otoroshi. Can reply with any content\n* `pre-routes` plugins: used to extract values (like custom apikeys) and provide them to other plugins or the otoroshi engine\n* `access validation` plugins: used to validate if a request can pass or not based on whatever you want\n* `request transformer` plugins: used to transform requests, responses and their bodies.
 Can be used to return arbitrary content\n* `event listener` plugins: any plugin type can listen to otoroshi internal events and react to them\n* `job` plugins: tasks that can run automatically once, be scheduled with a cron expression, or run at a defined interval\n\n## Code and signatures\n\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/requestsink.scala#L11-L16\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/routing.scala#L60-L63\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/accessvalidator.scala#L63-L82\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/script.scala#L314-L455\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/eventlistener.scala#L27-L48\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/job.scala#L74-L81\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/job.scala#L108-L110\n\n\nFor more information about the APIs you can use:\n\n* https://www.playframework.com/documentation/2.6.x/api/scala/index.html#package\n* https://www.playframework.com/documentation/2.6.x/api/scala/index.html#play.api.mvc.Results\n* https://github.com/MAIF/otoroshi\n* https://doc.akka.io/docs/akka/2.5/stream/index.html\n* https://doc.akka.io/api/akka/current/akka/stream/index.html\n* https://doc.akka.io/api/akka/current/akka/stream/scaladsl/Source.html\n\n## Plugin examples\n\nA lot of plugins come with Otoroshi; you can find them on [github](https://github.com/MAIF/otoroshi/tree/master/otoroshi/app/plugins)\n\n## Writing a plugin from Otoroshi UI\n\nLog into Otoroshi and go to `Settings (cog icon) / Plugins`. Here you can create multiple request transformer scripts and associate them with service descriptors later.\n\n@@@ div { .centered-img }\n\n@@@\n\nWhen you write a transformer in the Otoroshi UI, for instance, do the following\n\n```scala\nimport akka.stream.Materializer\nimport env.Env\nimport models.{ApiKey, PrivateAppsUser, ServiceDescriptor}\nimport otoroshi.script._\nimport play.api.Logger\nimport play.api.mvc.{Result, Results}\nimport scala.util._\nimport scala.concurrent.{ExecutionContext, Future}\n\nclass MyTransformer extends RequestTransformer {\n\n val logger = Logger(\"my-transformer\")\n\n // implement the methods you want\n}\n\n// WARN: do not forget this line to provide a working instance of your transformer to Otoroshi\nnew MyTransformer()\n```\n\nYou can use the compile button to check if the script compiles, or code the transformer in your IDE (see next point).\n\nThen go to a service descriptor, scroll to the bottom of the page, and select your transformer in the list\n\n@@@ div { .centered-img }\n\n@@@\n\n## Providing a transformer from Java classpath\n\nYou can write your own transformer using your favorite IDE. Just create an SBT project with the following dependencies.
 It can be quite handy to manage the source code like any other piece of code, and it avoids the compilation time for the script at Otoroshi startup.\n\n```scala\nlazy val root = (project in file(\".\")).\n settings(\n inThisBuild(List(\n organization := \"com.example\",\n scalaVersion := \"2.12.7\",\n version := \"0.1.0-SNAPSHOT\"\n )),\n name := \"request-transformer-example\",\n resolvers += Resolver.bintrayRepo(\"maif\", \"maven\"),\n libraryDependencies += \"fr.maif.otoroshi\" %% \"otoroshi\" % \"1.x.x\"\n )\n```\n\nWhen your code is ready, create a jar file \n\n```\nsbt package\n```\n\nand add the jar file to the Otoroshi classpath\n\n```sh\njava -cp \"/path/to/transformer.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nThen, in your service descriptor, you can choose your transformer in the list. If you want to do it from the API, you have to define the `transformerRef` using the `cp:` prefix like \n\n```json\n{\n \"transformerRef\": \"cp:my.class.package.MyTransformer\"\n}\n```\n\n## Getting custom configuration from the Otoroshi config. file\n\nLet's say you need to provide custom configuration values for a script, then you can customize a configuration file of Otoroshi\n\n```hocon\ninclude \"application.conf\"\n\notoroshi {\n scripts {\n enabled = true\n }\n}\n\nmy-transformer {\n env = \"prod\"\n maxRequestBodySize = 2048\n maxResponseBodySize = 2048\n}\n```\n\nthen start Otoroshi like\n\n```sh\njava -Dconfig.file=/path/to/custom.conf -jar otoroshi.jar\n```\n\nthen, in your transformer, you can write something like \n\n```scala\npackage com.example.otoroshi\n\nimport akka.stream.Materializer\nimport akka.stream.scaladsl._\nimport akka.util.ByteString\nimport env.Env\nimport models.{ApiKey, PrivateAppsUser, ServiceDescriptor}\nimport otoroshi.script._\nimport play.api.Logger\nimport play.api.mvc.{Result, Results}\nimport scala.util._\nimport scala.concurrent.{ExecutionContext, Future}\n\nclass BodyLengthLimiter extends RequestTransformer {\n\n override def transformResponseWithCtx(ctx: TransformerResponseContext)(implicit env: Env, ec: ExecutionContext, mat: Materializer): Source[ByteString, _] = {\n val max = env.configuration.getOptional[Long](\"my-transformer.maxResponseBodySize\").getOrElse(Long.MaxValue)\n ctx.body.limitWeighted(max)(_.size)\n }\n\n override def transformRequestWithCtx(ctx: TransformerRequestContext)(implicit env: Env, ec: ExecutionContext, mat: Materializer): Source[ByteString, _] = {\n val max = env.configuration.getOptional[Long](\"my-transformer.maxRequestBodySize\").getOrElse(Long.MaxValue)\n ctx.body.limitWeighted(max)(_.size)\n }\n}\n```\n\n## Using a library that is not embedded in Otoroshi\n\nJust use the `classpath` option when running Otoroshi\n\n```sh\njava -cp \"/path/to/library.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nBe careful as your library can conflict with other libraries used by Otoroshi and affect its stability\n\n## Enabling plugins\n\nPlugins can be enabled per service from the service settings page or globally from the danger zone in the plugins section.
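\n\nIf you prefer doing this through the admin REST API rather than the UI, the sketch below shows the idea. It is an illustration only: it assumes an admin apikey pair `admin-client-id` / `admin-client-secret`, a service with id `my-service-id`, and that your Otoroshi version accepts JSON-Patch bodies on `PATCH /api/services/:id`; check the embedded swagger descriptor of your instance before relying on it:\n\n```sh\ncurl -X PATCH -H 'Otoroshi-Client-Id: admin-client-id' -H 'Otoroshi-Client-Secret: admin-client-secret' -H 'Content-Type: application/json' -d '[{\"op\":\"replace\",\"path\":\"/transformerRef\",\"value\":\"cp:my.class.package.MyTransformer\"}]' http://otoroshi-api.oto.tools:8080/api/services/my-service-id\n```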
\n\"},{\"name\":\"snow-monkey.md\",\"id\":\"/topics/snow-monkey.md\",\"url\":\"/topics/snow-monkey.html\",\"title\":\"Chaos engineering with the Snow Monkey\",\"content\":\"# Chaos engineering with the Snow Monkey\n\nNihonzaru (the Snow Monkey) is the chaos engineering tool provided by Otoroshi. You can access it at `Settings (cog icon) / Snow Monkey`.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Chaos engineering\n\nOtoroshi offers some tools to introduce [chaos engineering](https://principlesofchaos.org/) in your everyday life. With chaos engineering, you will improve the resilience of your architecture by creating faults in production on running systems. With [Nihonzaru (the snow monkey)](https://en.wikipedia.org/wiki/Japanese_macaque) Otoroshi helps you to create faults on the http requests/responses handled by Otoroshi. \n\n@@@ div { .centered-img }\n\n@@@\n\n## Settings\n\n@@@ div { .centered-img }\n\n@@@\n\nThe snow monkey lets you define a few settings to work properly:\n\n* **Include user facing apps.**: you want to create faults in production, but maybe you don't want your users to enjoy some nice snow monkey generated error pages. This switch lets you include user facing apps (UI apps) or not. Each service descriptor has a `User facing app switch` that will be used by the snow monkey.\n* **Dry run**: when dry run is enabled, outages will be registered and will generate events and alerts (in the otoroshi eventing system) but requests won't be actually impacted. It's a good way to prepare applications for the snow monkey's habits\n* **Outage strategy**: Either `AllServicesPerGroup` or `OneServicePerGroup`. It means that either all services per group or only one service per group will have `n` outages (see next bullet point) during the snow monkey working period\n* **Outages per day**: during the snow monkey working period, each service per group or one service per group will have only `n` outages registered \n* **Working period**: the snow monkey only works during a working period. Here you can define when it starts and when it stops\n* **Outage duration**: here you can define the bounds for the random outage duration when an outage is created on a service\n* **Impacted groups**: here you can define a list of service groups impacted by the snow monkey. If none is specified, then all service groups will be impacted\n\n## Faults\n\nWith the snow monkey, you can generate four types of faults\n\n* **Large request fault**: Add trailing bytes at the end of the request body (if one)\n* **Large response fault**: Add trailing bytes at the end of the response body\n* **Latency injection fault**: Add random response latency between two bounds\n* **Bad response injection fault**: Create predefined responses with custom headers, body and status code\n\nEach fault lets you define a ratio for impacted requests. If you specify a ratio of `0.2`, then 20% of the requests to the impacted service will be impacted by this fault\n\n@@@ div { .centered-img }\n\n@@@\n\nThen you just have to start the snow monkey and enjoy the show ;)\n\n@@@ div { .centered-img }\n\n@@@\n\n## Current outages\n\nIn the last section of the snow monkey page, you can see current outages (per service), when they started, their duration, etc ...\n\n@@@ div { .centered-img }\n\n@@@\"},{\"name\":\"ssl.md\",\"id\":\"/topics/ssl.md\",\"url\":\"/topics/ssl.html\",\"title\":\"SSL/TLS termination with Otoroshi\",\"content\":\"# SSL/TLS termination with Otoroshi\n\nOtoroshi can be used as an SSL/TLS termination layer. It is enabled by default but you can customise the HTTPS port with the `https.port` config. key and the `HTTPS_PORT` env. var. You can create or upload any certificate you want in the Otoroshi UI or using the API. Just go to `settings (cog icon) / SSL/TLS certificates`.\n\n@@@ note { title=\"Experimental Feature\" }\nDynamic SSL/TLS termination is an experimental feature.
 It can change until it becomes an official feature\n@@@\n\n@@@ note { title=\"TLS 1.3 support\" }\nOtoroshi does support TLS 1.3 when used in combination with JDK 11\n\n\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\nHere you can add your own certificates, your own CAs and even create self signed certificates or certificates from CAs. You can enable auto renewal of those self signed or generated certificates. Certificates have to be created with the certificate chain and the private key in PEM format with no password on the private key.\n\nYou can remove the password of a key with the following command\n\n```sh\nopenssl rsa -in keywithpassword.key -out keywithoutpassword.key\n```\n\n@@@ div { .centered-img }\n\n@@@\n\n\"},{\"name\":\"1-groups.md\",\"id\":\"/usage/1-groups.md\",\"url\":\"/usage/1-groups.html\",\"title\":\"Managing service groups\",\"content\":\"# Managing service groups\n\nGo to `settings (cog icon) / All service groups` to access the list of service groups.\n\n@@@ div { .centered-img }\n\n@@@\n\nAnd you should see the list of existing `Service groups`.\n\n@@@ div { .centered-img }\n\n@@@\n\nBut what is a `Service group` anyway ?\n\n## Otoroshi entities\n\nThere are 3 major entities at the core of Otoroshi :\n\n* **service groups**\n* service descriptors\n* api keys\n\n@@@ div { .centered-img }\n\n@@@\n\nA `service group` is just some kind of logical container for `service descriptors`. A `service group` also has some `api keys` assigned that will be used to access all the `service descriptors` contained in the `service group`.\n\n## Create a service group\n\nA `service group` is a really simple structure with an `id`, a name and a description. To create a new one, just click on the `Add item` button.\n\n@@@ div { .centered-img }\n\n@@@\n\nmodify the name and the description of the group\n\n@@@ div { .centered-img }\n\n@@@\n\nand click on `Create group`\n\n@@@ div { .centered-img }\n\n@@@\n\nThen, you should find your brand new `Service group` in the list of `Service groups`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Update a service group\n\nTo update a `Service group`, just click on the edit button of your `Service group`\n\n@@@ div { .centered-img }\n\n@@@\n\nUpdate the name and description of the `Service group` and click on the `Update group` button to validate the update.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Delete a service group\n\nTo delete a `Service group`, just click on the delete button of your `Service group`\n\n@@@ div { .centered-img }\n\n@@@\n\nFinally, confirm the command\n\n@@@ div { .centered-img }\n\n@@@\n\"},{\"name\":\"2-services.md\",\"id\":\"/usage/2-services.md\",\"url\":\"/usage/2-services.html\",\"title\":\"Managing services\",\"content\":\"# Managing services\n\nNow let's create services. Services, or `service descriptors`, let you declare how to proxy a call from a domain name to another domain name (or multiple domain names). Let's say you have an API exposed on `http://192.168.0.42` and you want to expose it on `https://my.api.foo`. Otoroshi will proxy all calls to `https://my.api.foo` and forward them to `http://192.168.0.42`. While doing that, it will also log everything, control accesses, etc.
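\n\nYou can simulate that kind of domain-based routing from your machine without touching DNS. A quick sketch, assuming Otoroshi listens locally on port 8080 and a service is declared for `my.api.foo`:\n\n```sh\n# Otoroshi matches services on the Host header of the incoming request\ncurl -H 'Host: my.api.foo' http://127.0.0.1:8080/\n```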
\n\n## Otoroshi entities\n\nThere are 3 major entities at the core of Otoroshi\n\n* service groups\n* **service descriptors**\n* api keys\n\n@@@ div { .centered-img }\n\n@@@\n\nA `service descriptor` is contained in one or multiple `service group`s and is allowed to be accessed by all the `api key`s authorized on those `service group`s or apikeys directly authorized on the service itself.\n\n## Create a service descriptor\n\nTo create a `service descriptor`, click on `Add service` on the Otoroshi sidebar. Then you will be asked to choose a name for the service and the group of the service. You also have two buttons to create a new group and assign it to the service and create a new group with a name based on the service name.\n\nYou will have a series of toggle buttons to\n\n* activate / deactivate a service\n* display maintenance page for a service\n* display construction page for a service\n* enable otoroshi custom response headers containing request id, latency, etc \n* force https usage on the exposed service\n* enable read only flag : this service will only be used with `HEAD`, `OPTIONS` and `GET` http verbs. You can also activate the same flag on `ApiKey`s to be more specific on who cannot use write http verbs.\n\nThen, you will be able to choose the URL that will be used to reach your new service on Otoroshi.\n\n@@@ div { .centered-img #service-flags }\n\n@@@\n\nIn the `service targets` section, you will be able to choose where the call will be forwarded. You can use multiple targets, in that case, Otoroshi will perform a round robin load balancing between the targets. If the `override Host header` toggle is on, the host header will be changed for the host of the target. For example, if you request `http://www.oto.tools/api` with a target to `http://www-internal.service.local/api`, the target will receive a `Host: www-internal.service.local` instead of `Host: www.oto.tools`.\n\nYou can also specify a target root, if you say that the target root is `/foo/`, then any call to `https://my.api.foo` will call `http://192.168.0.42/foo/` and any call to `https://my.api.foo/bar` will call `http://192.168.0.42/foo/bar`.\n\nIn the URL patterns section, you will be able to choose, URL by URL, which is private and which is public. By default, all services are private and each call must provide an `api key`. But sometimes, you need to access a service publicly. In that case, you can provide patterns (regex) to make some or all URLs public (for example with the pattern `/.*`). You also have a `private pattern` field to restrict public patterns.\n\n@@@ div { .centered-img #targets }\n\n@@@\n\n### Otoroshi exchange protocol\n\n#### V1 challenge\n\nIf you enable secure communication for a given service with `V1 - simple values exchange` activated, you will have to add a filter on the target application that will take the `Otoroshi-State` header and return it in a header named `Otoroshi-State-Resp`. \n\n@@@ div { .centered-img }\n\n@@@\n\n#### V2 challenge\n\nIf you enable secure communication for a given service with `V2 - signed JWT token exchange` activated, you will have to add a filter on the target application that will take the `Otoroshi-State` header value containing a JWT token, verify its signature, then extract a claim named `state` and return a new JWT token in a header named `Otoroshi-State-Resp` with the `state` value in a claim named `state-resp`.
 By default, the signature algorithm is HMAC+SHA512 but you can choose your own. The sent and returned JWT tokens have a short TTL to avoid being replayed. You must validate the tokens' TTL.\n\n@@@ div { .centered-img }\n\n@@@\n\n#### Info. token\n\nOtoroshi is also sending a JWT token in a header named `Otoroshi-Claim` that the target app can validate too.\n\nThe `Otoroshi-Claim` is a JWT token containing some information about the service that is called and the client if available. You can choose between a legacy version of the token and a new one that is more clear and structured.\n\nBy default, the otoroshi jwt token is signed with the `app.claim.sharedKey` config property (or using the `$CLAIM_SHAREDKEY` env. variable) and uses the `HMAC512` signing algorithm. But it is possible to customize how the token is signed from the service descriptor page in the `Otoroshi exchange protocol` section. \n\n@@@ div { .centered-img }\n\n@@@\n\nusing another signing algo.\n\n@@@ div { .centered-img }\n\n@@@\n\nhere you can choose the signing algorithm and the secret/keys used. You can use syntax like `${env.MY_ENV_VAR}` or `${config.my.config.path}` to provide secret/keys values. \n\nFor example, for a service named `my-service` with a signing key `secret` and the `HMAC512` signing algorithm, the basic JWT token that will be sent should look like the following\n\n```\neyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJzdWIiOiItLSIsImF1ZCI6Im15LXNlcnZpY2UiLCJpc3MiOiJPdG9yb3NoaSIsImV4cCI6MTUyMTQ0OTkwNiwiaWF0IjoxNTIxNDQ5ODc2LCJqdGkiOiI3MTAyNWNjMTktMmFjNy00Yjk3LTljYzctMWM0ODEzYmM1OTI0In0.mRcfuFVFPLUV1FWHyL6rLHIJIu0KEpBkKQCk5xh-_cBt9cb6uD6enynDU0H1X2VpW5-bFxWCy4U4V78CbAQv4g\n```\n\nIf you decode it, the payload will look something like\n\n```json\n{\n \"sub\": \"apikey_client_id\",\n \"aud\": \"my-service\",\n \"iss\": \"Otoroshi\",\n \"exp\": 1521449906,\n \"iat\": 1521449876,\n \"jti\": \"71025cc19-2ac7-4b97-9cc7-1c4813bc5924\"\n}\n```\n\nIf you want to validate the `Otoroshi-Claim` on the target app side to ensure that the input requests only come from `Otoroshi`, you will have to write an HTTP filter to do the job. For instance, if you want to write a filter to make sure that requests only come from Otoroshi, you can write something like the following (using playframework 2.6).\n\nScala\n: @@snip [filter.scala](../snippets/filter.scala)\n\nJava\n: @@snip [filter.java](../snippets/filter.java)\n\n\n### Canary mode\n\nOtoroshi provides a feature called `Canary mode`. It lets you define new targets for a service, and route a percentage of the traffic on those targets. It's a good way to test a new version of a service before public release. As any client needs to be routed to the same version of targets every time, Otoroshi will issue a special header and a cookie containing a `session id`. The header is named `Otoroshi-Canary-Id`.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service health check\n\nOtoroshi is also capable of checking the health of a service. You can define a URL that will be tested, and Otoroshi will ping that URL regularly. While doing so, Otoroshi will pass a numeric value in a header named `Otoroshi-Health-Check-Logic-Test`. You can respond with a header named `Otoroshi-Health-Check-Logic-Test-Result` that contains the value of `Otoroshi-Health-Check-Logic-Test` + 42 to indicate that the service is working properly.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service circuit breaker\n\nIn Otoroshi, each service has its own client settings with a circuit breaker and some retry capabilities.
 In the `Client settings` section, you will be able to customize the client's behavior.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service settings\n\nYou can also provide some additional information about a given service, like an `Open API` descriptor, some metadata, a list of whitelisted/blacklisted ip addresses, etc.\n\n@@@ div { .centered-img #service-meta }\n\n@@@\n\n### HTTP Headers\n\nHere you can define some headers that will be added to client requests or responses. \nYou will also be able to define headers to route the call only if the defined header is present on the request.\n\n@@@ div { .centered-img #service-meta }\n\n@@@\n\n### CORS \n\nIf you enable this section, CORS will be automatically supported on the current service provider. The pre-flight request will be handled by Otoroshi. You can customize every CORS header :\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service authentication\n\nSee @ref:[Authentication](./9-auth.md)\n\n### Custom error templates\n\nFinally, you can define custom error templates that will be displayed when an error occurs when Otoroshi tries to reach the target or when Otoroshi itself has an error. You can also define custom templates for maintenance and construction pages.\n\"},{\"name\":\"3-apikeys.md\",\"id\":\"/usage/3-apikeys.md\",\"url\":\"/usage/3-apikeys.html\",\"title\":\"Managing API keys\",\"content\":\"# Managing API keys\n\nNow that you know how to create service groups and service descriptors, we will see how to create API keys.\n\n## Otoroshi entities\n\nThere are 3 major entities at the core of Otoroshi.\n\n* service groups\n* service descriptors\n* **api keys**\n\n@@@ div { .centered-img }\n\n@@@\n\nAn `API key` is linked to one or more `service group`s and `service descriptor`s to allow you to access any `service descriptor` linked or contained in one of the linked `service group`s. You can, of course, create multiple `API key`s for given `service group`s/`service descriptor`s.\n\nIn the Otoroshi admin dashboard, we chose to access `API keys` from `service descriptors` only, but when you access `API keys` for a `service descriptor`, you actually access `API keys` for the `service group` containing the `service descriptor`.\n\n`API keys` can be provided to Otoroshi through :\n\n* `Otoroshi-Authorization: Basic $base64(client_id:client_secret)` header, in that case, the `Otoroshi-Authorization` header will **not** be sent to the target. `Basic ` is optional.\n* `Authorization: Basic $base64(client_id:client_secret)` header, in that case, the `Authorization` header **will** be sent to the target\n* `Otoroshi-Token: Bearer $jwt_token` where the JWT token has been signed with the `API key` client secret, in that case, the `Otoroshi-Token` header will **not** be sent to the target. `Bearer ` is optional.\n* `Authorization: Bearer $jwt_token` where the JWT token has been signed with the `API key` client secret, in that case, the `Authorization` header **will** be sent to the target\n* `Cookie: access_token=$jwt_token;` where the JWT token has been signed with the `API key` client secret, in that case, the cookie named `access_token` **will** be sent to the target\n* `Otoroshi-Client-Id` and `Otoroshi-Client-Secret` headers, in that case the `Otoroshi-Client-Id` and `Otoroshi-Client-Secret` headers will **not** be sent to the target.
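\n\nSeen from curl, the main options look like this (a sketch: the client id/secret pair, the domain and the port are placeholders):\n\n```sh\n# basic auth style header, not forwarded to the target\ncurl -H \"Otoroshi-Authorization: Basic $(echo -n 'client_id:client_secret' | base64)\" http://api.oto.tools:8080/\n# dedicated headers, not forwarded to the target\ncurl -H 'Otoroshi-Client-Id: client_id' -H 'Otoroshi-Client-Secret: client_secret' http://api.oto.tools:8080/\n```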
\n\n## List API keys for a service descriptor\n\nGo to a service descriptor using the `All services` quick link in the sidebar or the search box.\n\n@@@ div { .centered-img }\n\n@@@\n\nSelect a `service descriptor`.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on `API keys` in the sidebar\n\n@@@ div { .centered-img }\n\n@@@\n\nYou should see the list of API keys for that `service descriptor`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Create an API key for a service descriptor\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can add a name for your new API key, and you can also change the client id and client secret. You can also configure the throttling rate of the API key (calls per second), and the authorized number of calls per day and per month. You may also activate or deactivate the api key from that screen.\n\nInformation about current quota usage will be returned in response headers.\n\n* `Otoroshi-Daily-Calls-Remaining` : authorized calls remaining for this day\n* `Otoroshi-Monthly-Calls-Remaining` : authorized calls remaining for this month\n* `Otoroshi-Proxy-Latency` : latency induced by Otoroshi\n* `Otoroshi-Upstream-Latency` : latency between Otoroshi and the target\n\n@@@ div { .centered-img #quotas }\n\n@@@\n\n@@@ warning\nDaily and monthly quotas are based on the following rules :\n\n* daily quota is computed between 00h00:00.000 and 23h59:59.999\n* monthly quota is computed between the first day of the month at 00h00:00.000 and the last day of the month at 23h59:59.999\n@@@\n\n## Update an API key\n\nTo update an `API key`, just click on the edit button of your `API key`\n\n@@@ div { .centered-img }\n\n@@@\n\nUpdate the name, secret, state and quotas (if needed) of the `API key` and click on the `Update API key` button\n\n@@@ div { .centered-img }\n\n@@@\n\n## Delete an API key\n\nTo delete an `API key`, just click on the delete button of your `API key`\n\n@@@ div { .centered-img }\n\n@@@\n\nand confirm the command\n\n@@@ div { .centered-img }\n\n@@@\n\n### Read only\n\nThe read only flag on an `ApiKey` means this apikey can only use allowed services with the `HEAD`, `OPTIONS` and `GET` http verbs.\n\n## Use a JWT token to pass an API key\n\nYou can use a JWT token to pass an API key to Otoroshi. \nYou can use the `Otoroshi-Authorization: Bearer $jwt_token` header, the `Authorization: Bearer $jwt_token` header or a `Cookie: access_token=$jwt_token;` cookie to pass the JWT token.\nYou have to create a JWT token with a signing algorithm that can be `HS256` or `HS512`. Then you have to provide an `iss` claim with the value of your API key `clientId` and sign the JWT token with your API key `clientSecret`.
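\n\nIf you want to forge and use such a token from a terminal, here is a minimal sketch with `openssl` (the apikey values are the ones from the example that follows; the target URL is a placeholder):\n\n```sh\n# base64url-encode helper\nb64() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }\nheader=$(printf '{\"alg\":\"HS256\",\"typ\":\"JWT\"}' | b64)\npayload=$(printf '{\"iss\":\"abcdef\"}' | b64)\n# HMAC-SHA256 signature using the apikey clientSecret\nsig=$(printf '%s.%s' \"$header\" \"$payload\" | openssl dgst -sha256 -hmac '1234456789' -binary | b64)\ncurl -H \"Authorization: Bearer $header.$payload.$sig\" http://api.oto.tools:8080/\n```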
\n\nFor example, with an API key like `clientId=abcdef` and `clientSecret=1234456789`, your JWT token should look like\n\n```json\n{\n \"alg\": \"HS256\",\n \"typ\": \"JWT\"\n}\n{\n \"iss\":\"abcdef\",\n \"name\": \"John Doe\",\n \"admin\": true\n}\n```\n\nIn that case, when you sign the token with the secret of the API key `1234456789`, the signature will be `_eancnYCD3makSSox2v2xErjNYkRtcX558QiJGCbino`, resulting in an encoded JWT token like\n\n```\neyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.\neyJpc3MiOiJhYmNkZWYiLCJuYW1lIjoiSm9obiBEb2UiLCJhZG1pbiI6dHJ1ZX0.\n_eancnYCD3makSSox2v2xErjNYkRtcX558QiJGCbino\n```\n\"},{\"name\":\"4-monitor.md\",\"id\":\"/usage/4-monitor.md\",\"url\":\"/usage/4-monitor.html\",\"title\":\"Monitoring services\",\"content\":\"# Monitoring services\n\nOnce you have declared services, you can monitor them with Otoroshi.\n\n@@@ warning\nYou have to use [Elastic](https://www.elastic.co) to enable analytics features in Otoroshi\n@@@\n\nOnce you have set up @ref:[Otoroshi events push to an elastic cluster](../integrations/analytics.md) (through webhooks, kafka, or the elastic integration) you can set up Otoroshi events read from an elastic cluster. Go to `settings (cog icon) / Danger Zone` and expand the `Analytics: Elastic cluster (write)` section.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Service healthcheck\n\nIf you have defined a health check URL in the service descriptor, you can access the health check page from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Service live stats\n\nYou can also monitor live stats like the total of served requests, average response time, average overhead, etc. The live stats page can be accessed from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Service analytics\n\nYou can also get some aggregated metrics. The analytics page can be accessed from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\"},{\"name\":\"5-sessions.md\",\"id\":\"/usage/5-sessions.md\",\"url\":\"/usage/5-sessions.html\",\"title\":\"Managing sessions\",\"content\":\"# Managing sessions\n\nWith Otoroshi you can manage the sessions of connected users and you can discard sessions whenever you want. Sessions last 24h by default and you can customize them with the `app.backoffice.session.exp` and `app.privateapps.session.exp` @ref:[config keys](../firstrun/configfile.md)\n\n## Admin. sessions\n\nTo see the current admin sessions on Otoroshi from the UI, go to `settings (cog icon) / Admins sessions`. Here you can discard individual sessions or all sessions at once using the `Discard session` and `Discard all sessions` buttons.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Private apps. session\n\nTo see the current private apps sessions on Otoroshi from the UI, go to `settings (cog icon) / Priv. apps sessions`. Here you can discard individual sessions or all sessions at once using the `Discard session` and `Discard all sessions` buttons.\n\n@@@ div { .centered-img }\n\n@@@\n\"},{\"name\":\"6-audit.md\",\"id\":\"/usage/6-audit.md\",\"url\":\"/usage/6-audit.html\",\"title\":\"Auditing Otoroshi\",\"content\":\"# Auditing Otoroshi\n\nWith Otoroshi, any admin action and any suspicious/alert action is recorded. These records are stored in Otoroshi's datastore (only the last n records, defined by the `app.events.maxSize` @ref:[config key](../firstrun/configfile.md)).
 All the records can be sent through the analytics mechanism (WebHook, Kafka, Elastic) for external and/or further usage. We recommend sending those records away for security reasons.\n\n@@@ warning\nYou have to use [Elastic](https://www.elastic.co) to enable analytics features in Otoroshi. See the @ref:[Elastic setup section](../integrations/analytics.md)\n@@@\n\n## Audit trail\n\nTo see the last `app.events.maxSize` admin actions on Otoroshi from the UI, go to `settings (cog icon) / Audit log`.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Alerts\n\nTo see the last `app.events.maxSize` alerts on Otoroshi from the UI, go to `settings (cog icon) / Alerts log`.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also have a look at the payload sent to the Otoroshi server by clicking the `content` button\n\n@@@ div { .centered-img }\n\n@@@\n\n## List of possible alerts\n\n```\nMaxConcurrentRequestReachedAlert\nCircuitBreakerOpenedAlert\nCircuitBreakerClosedAlert\nSessionDiscardedAlert\nSessionsDiscardedAlert\nPanicModeAlert\nOtoroshiExportAlert\nU2FAdminDeletedAlert\nBlackListedBackOfficeUserAlert\nAdminLoggedInAlert\nAdminFirstLogin\nAdminLoggedOutAlert\nDbResetAlert\nDangerZoneAccessAlert\nGlobalConfigModification\nRevokedApiKeyUsageAlert\nServiceGroupCreatedAlert\nServiceGroupUpdatedAlert\nServiceGroupDeletedAlert\nServiceCreatedAlert\nServiceUpdatedAlert\nServiceDeletedAlert\nApiKeyCreatedAlert\nApiKeyUpdatedAlert\nApiKeyDeletedAlert\n```\n\"},{\"name\":\"7-metrics.md\",\"id\":\"/usage/7-metrics.md\",\"url\":\"/usage/7-metrics.html\",\"title\":\"Otoroshi global metrics\",\"content\":\"# Otoroshi global metrics\n\nOtoroshi provides some global metrics about services usage. Go to `settings (cog icon) / Global Analytics`\n\n@@@ warning\nYou have to use [Elastic](https://www.elastic.co) to enable analytics features in Otoroshi. See the @ref:[Elastic setup section](../integrations/analytics.md)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\"},{\"name\":\"8-importsexports.md\",\"id\":\"/usage/8-importsexports.md\",\"url\":\"/usage/8-importsexports.html\",\"title\":\"Import and export\",\"content\":\"# Import and export\n\nWith Otoroshi you can easily save the current state of the proxy and restore it later. Go to `settings (cog icon) / Danger Zone` and scroll to the bottom of the page\n\n## Full export\n\nClick on the `Full export` button.\n\n@@@ div { .centered-img }\n\n@@@\n\nYour browser will start to download a JSON file containing the internal state of your Otoroshi cluster.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Full import\n\nIf you want to restore an export, go to `settings (cog icon) / Danger Zone` and scroll to the bottom of the page. Click on the `Recover from full export file` button\n\n@@@ div { .centered-img }\n\n@@@\n\nChoose the export file on your system.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on the `Flush datastore and import ...` button, confirm and you will be logged out.\n\n@@@ div { .centered-img }\n\n@@@\n\"},{\"name\":\"9-auth.md\",\"id\":\"/usage/9-auth.md\",\"url\":\"/usage/9-auth.html\",\"title\":\"Authentication\",\"content\":\"# Authentication\n\nYou can create auth. configurations in Otoroshi.
 Just go to `settings (cog icon) / Authentication configs`.\n\n## OAuth 2\n\nCreate a new `Generic oauth2 provider` config and customize the following information:\n\n```json\n{\n \"clientId\": \"xxxx\",\n \"clientSecret\": \"xxxx\",\n \"authorizeUrl\": \"http://yourOAuthServer/oauth/authorize\",\n \"tokenUrl\": \"http://yourOAuthServer/oauth/token\",\n \"userInfoUrl\": \"http://yourOAuthServer/userinfo\",\n \"loginUrl\": \"http://yourOAuthServer/login\",\n \"logoutUrl\": \"http://yourOAuthServer/logout?redirectQueryParamName=${redirect}\",\n \"accessTokenField\": \"access_token\",\n \"nameField\": \"name\",\n \"emailField\": \"email\",\n \"callbackUrl\": \"http://privateapps.oto.tools/privateapps/generic/callback\"\n}\n```\n\nIf used for BackOffice authentication, the callback url should be `http://otoroshi.oto.tools/backoffice/auth0/callback`.\n\nFor `logoutUrl`, `redirectQueryParamName` is a parameter with a name specific to your OAuth2 provider (for example, in Auth0, this parameter is called `returnTo`, in Keycloak it is called `redirect_uri`).\n\nIf you are using a [Keycloak](https://www.keycloak.org/) server, you can configure it this way, assuming you are using the master realm and you created a new client with a client secret, with callback urls set to `http://privateapps.oto.tools/*`.\n\n```json\n{\n \"clientId\": \"clientId\",\n \"clientSecret\": \"clientSecret\",\n \"authorizeUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/auth\",\n \"tokenUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/token\",\n \"userInfoUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/userinfo\",\n \"loginUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/auth\",\n \"logoutUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/logout?redirect_uri=${redirect}\",\n \"accessTokenField\": \"access_token\",\n \"nameField\": \"name\",\n \"emailField\": \"email\",\n \"callbackUrl\": \"http://privateapps.oto.tools/privateapps/generic/callback\"\n}\n```\n\n## Ldap\n\nCreate a new `Ldap auth. provider` config and customize the following information:\n\n```json\n{\n \"serverUrl\": \"ldap://ldap.forumsys.com:389\",\n \"searchBase\": \"dc=example,dc=com\",\n \"groupFilter\": \"ou=chemists\",\n \"searchFilter\": \"(mail=${username})\",\n \"adminUsername\": \"cn=read-only-admin,dc=example,dc=com\",\n \"adminPassword\": \"password\",\n \"nameField\": \"cn\",\n \"emailField\": \"mail\"\n}\n```\n\n## In Memory\n\nCreate a new `In memory auth. provider` config and then you will be able to create new users. To set the password, just click on the `Set password` button. It will generate a BCrypt hash of the password you typed.
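\n\nIf you ever need to produce such a hash yourself (for instance to provision users through the admin API), the Apache `htpasswd` tool can generate one. This is a sketch, assuming the tool is installed; note that recent versions emit the `$2y$` BCrypt prefix, which is not always interchangeable with the `$2a$` prefix used by many Java BCrypt implementations:\n\n```sh\n# -B selects bcrypt, -C 10 the cost; '' is the username field, stripped by cut\nhtpasswd -bnBC 10 '' password | cut -d ':' -f 2\n```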
\n\n## Auth0\n\nCreate a new OAuth 2 config and add the following information:\n\n```json\n{\n \"clientId\": \"yourAuth0ClientId\",\n \"clientSecret\": \"yourAuth0ClientSecret\",\n \"authorizeUrl\": \"https://yourAuth0Domain/authorize\",\n \"tokenUrl\": \"https://yourAuth0Domain/oauth/token\",\n \"userInfoUrl\": \"https://yourAuth0Domain/userinfo\",\n \"loginUrl\": \"https://yourAuth0Domain/authorize\",\n \"logoutUrl\": \"https://yourAuth0Domain/v2/logout?returnTo=${redirect}\",\n \"accessTokenField\": \"access_token\",\n \"nameField\": \"name\",\n \"emailField\": \"email\",\n \"otoroshiDataField\": \"app_metadata | otoroshi_data\",\n \"callbackUrl\": \"http://privateapps.oto.tools/privateapps/generic/callback\"\n}\n```\n\nIf you enable the Otoroshi exchange protocol, the JWT will have the following fields (all optional)\n\n* `email`\n* `name`\n* `picture`\n* `user_id`\n* `given_name`\n* `family_name`\n* `gender`\n* `locale`\n* `nickname`\n\nIn Auth0, the metadata is a flat object placed in the `profile / http://yourdomain/app_metadata / otoroshi_data`. You might need to write an Auth0 rule to copy app metadata under `http://yourdomain/app_metadata` (the `http://yourdomain/app_metadata` value is set with the `app.appMeta` config property). The rule could be something like the following\n\n```js\nfunction (user, context, callback) {\n var namespace = 'http://yourdomain/';\n context.idToken[namespace + 'user_id'] = user.user_id;\n context.idToken[namespace + 'user_metadata'] = user.user_metadata;\n context.idToken[namespace + 'app_metadata'] = user.app_metadata;\n callback(null, user, context);\n}\n```\"},{\"name\":\"index.md\",\"id\":\"/usage/index.md\",\"url\":\"/usage/index.html\",\"title\":\"Using Otoroshi\",\"content\":\"# Using Otoroshi\n\nNow we will see how to use Otoroshi for the basic tasks that will be useful for your day to day work with Otoroshi.\n\n@@@ index\n\n* [create group](./1-groups.md)\n* [create service](./2-services.md)\n* [create API Keys](./3-apikeys.md)\n* [monitor service](./4-monitor.md)\n* [sessions management](./5-sessions.md)\n* [Audit trail and alerts](./6-audit.md)\n* [Global metrics](./7-metrics.md)\n* [Exports and imports](./8-importsexports.md)\n* [Authentication](./9-auth.md)\n\n@@@\n\"}]
\ No newline at end of file
+[{\"name\":\"about.md\",\"id\":\"/about.md\",\"url\":\"/about.html\",\"title\":\"About Otoroshi\",\"content\":\"# About Otoroshi\n\nAt the beginning of 2017, we had the need to create a new environment to be able to create new \"digital\" products very quickly in an agile fashion at MAIF. Naturally we turned to PaaS solutions and chose the excellent Clever-Cloud product to run our apps. \n\nWe also chose that every feature team would have the freedom to choose its own technological stack to build its product. It was a nice move but it also introduced some challenges in terms of homogeneity for traceability, security, logging, ... because we did not want to force library usage in the products. We could have used something like the Service Mesh Pattern but the deployment model of Clever-Cloud prevented us from doing it.\n\nThe right solution was to use a reverse proxy or some kind of API Gateway able to provide traceability, logging, security with apikeys, quotas, DNS as a service locator, etc. We needed something easy to use, with a human friendly UI, a nice API to extend its features, true hot reconfiguration, able to generate internal events for third party usage. A couple of solutions were available at that time, but none seemed to fit our needs; there was always something missing, something too complicated for our needs, or something not playing well with the Clever-Cloud deployment model.\n\nAt some point, we tried to write a small prototype to explore what could be our dream reverse proxy. The design was very simple, there were some rough edges but every major feature needed was there waiting to be enhanced.\n\n**Otoroshi** was born and we decided to move ahead with our hairy monster :)\n\n## Philosophy \n\nEvery OSS product built at MAIF, like Izanami, follows a common philosophy. \n\n* the services or API provided should be technology agnostic.\n* http first: http is the right answer to the previous point \n* api first: the UI is just another client of the api. \n* secured: the services exposed need authentication for both humans and machines \n* event based: the services should expose a way to get notified of what happened inside. \n\"},{\"name\":\"api.md\",\"id\":\"/api.md\",\"url\":\"/api.html\",\"title\":\"Admin REST API\",\"content\":\"# Admin REST API\n\nOtoroshi provides a fully featured REST admin API to perform almost every operation possible in the Otoroshi dashboard. The Otoroshi dashboard is just a regular consumer of the admin API.\n\nUsing the admin API, you can do whatever you want and enhance your Otoroshi instances with a lot of features that will fit your needs.\n\nOtoroshi also provides some connectors that use the Otoroshi admin API to automate Otoroshi instances when used with things like container orchestrators. For more information about that, just go to the @ref:[third party integrations chapter](./integrations/index.md)\n\n## Swagger descriptor\n\nThe Otoroshi admin API is described using the OpenAPI format and is available at :\n\nhttps://maif.github.io/otoroshi/manual/code/swagger.json\n\nEvery Otoroshi instance provides its own embedded OpenAPI descriptor at :\n\nhttp://otoroshi.oto.tools:8080/api/swagger.json
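\n\nSince the descriptor is plain JSON, you can also explore the exposed operations from a terminal. A small sketch, assuming `jq` is installed (depending on your setup, the endpoint may require your admin apikey):\n\n```sh\ncurl -s http://otoroshi.oto.tools:8080/api/swagger.json | jq '.paths | keys'\n```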
\n\n## Swagger documentation\n\nYou can read the OpenAPI descriptor in a more human friendly fashion using `Swagger UI`. The Swagger UI documentation of the Otoroshi admin API is available at :\n\nhttps://maif.github.io/otoroshi/swagger-ui/index.html\n\nEvery Otoroshi instance provides its own embedded Swagger UI at :\n\nhttp://otoroshi.oto.tools:8080/api/swagger/ui\n\nYou can also read the Swagger UI documentation of the Otoroshi admin API below :\n\n@@@ div { .swagger-frame }\n\n\n@@@\n\"},{\"name\":\"archi.md\",\"id\":\"/archi.md\",\"url\":\"/archi.html\",\"title\":\"Architecture\",\"content\":\"# Architecture\n\nWhen we started the development of Otoroshi, we had several classical patterns in mind like `Service gateway`, `Service locator`, `Circuit breakers`, etc ...\n\nAt first we thought about providing a bunch of libraries that would be included in each microservice or app to perform these tasks. But the more we thought about it, the more it felt weird and unagile; it also prevented us from using any technical stack we wanted to use. So we decided to change our approach to something more universal.\n\nWe chose to make Otoroshi the central part of our microservices system, something between a reverse-proxy, a service gateway and a service locator where each call to a microservice (even from another microservice) must pass through Otoroshi. There are multiple benefits to doing that: each call can be logged, audited, monitored, integrated with a circuit breaker, etc. without imposing libraries and a technical stack. Any service is exposed through its own domain and we rely only on DNS to handle the service location part. Any access to a service is secured by default with an api key and is supervised by a circuit breaker to avoid cascading failures.\n\n@@@ div { .centered-img }\n\n@@@\n\nOtoroshi tries to embrace our @ref:[global philosophy](./about.md#philosophy) by providing a full featured REST admin api, a gorgeous admin dashboard written in [React](https://reactjs.org/) that uses the api, and by generating traffic events, alert events and audit events that can be consumed by several channels. Otoroshi also supports a bunch of datastores to better match with different use cases.\n\n@@@ div { .centered-img }\n\n@@@\n\"},{\"name\":\"aws-beanstalk.md\",\"id\":\"/deploy/aws-beanstalk.md\",\"url\":\"/deploy/aws-beanstalk.html\",\"title\":\"AWS - Elastic Beanstalk\",\"content\":\"# AWS - Elastic Beanstalk\n\nNow you want to use Otoroshi on AWS. There are multiple options to deploy Otoroshi on AWS, \nfor instance :\n\n* You can deploy the @ref:[Docker image](../getotoroshi/fromdocker.md) on [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html)\n* You can create a basic [Amazon EC2](https://docs.aws.amazon.com/fr_fr/AWSEC2/latest/UserGuide/concepts.html), access it via SSH, then \ndeploy the @ref:[otoroshi.jar](../firstrun/run.md#from-jar-file) \n* Or you can use [AWS Elastic Beanstalk](https://aws.amazon.com/fr/elasticbeanstalk)\n\nIn this section we are going to cover how to deploy Otoroshi on [AWS Elastic Beanstalk](https://aws.amazon.com/fr/elasticbeanstalk). \n\n## AWS Elastic Beanstalk Overview\nUnlike Clever Cloud, to deploy an application on AWS Elastic Beanstalk, you don't link your app to your VCS repository, push your code and expect it to be built and run.\n\nAWS Elastic Beanstalk only does the run part. So you have to handle your own build pipeline and upload a Zip file containing your runnable; AWS Elastic Beanstalk will take it from there. \n \nEg: for apps running on the JVM (Scala/Java/Kotlin) a Zip with the jar inside would suffice; for apps running in a Docker container, a Zip with the Dockerfile would be enough.
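\n\nPackaging such a bundle is a one-liner, a sketch for the jar + conf + Dockerfile layout described in the next section:\n\n```sh\nzip otoroshi-deploy.zip Dockerfile otoroshi.jar otoroshi.conf\n```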
\n\n## Prepare your deployment target\nActually, there are 2 options to build your target. \n\nEither you create a Dockerfile from this @ref:[Docker image](../getotoroshi/fromdocker.md), build a zip, and do all the Otoroshi custom configuration using ENVs.\n\nOr you download the @ref:[otoroshi.jar](../getotoroshi/frombinaries.md), do all the Otoroshi custom configuration using your own otoroshi.conf, and create a Dockerfile that runs the jar using your otoroshi.conf. \n\nFor the second option your Dockerfile would look like this :\n\n```dockerfile\nFROM openjdk:8\nVOLUME /tmp\nEXPOSE 8080\nADD otoroshi.jar otoroshi.jar\nADD otoroshi.conf otoroshi.conf\nRUN sh -c 'touch /otoroshi.jar'\nENV JAVA_OPTS=\"\"\nENTRYPOINT [ \"sh\", \"-c\", \"java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -Dconfig.file=/otoroshi.conf -jar /otoroshi.jar\" ]\n``` \n \nI'd recommend the second option.\n \nNow Zip your target (Jar + Conf + Dockerfile) and get ready for deployment. \n\n## Create an Otoroshi instance on AWS Elastic Beanstalk\nFirst, go to the [AWS Elastic Beanstalk Console](https://eu-west-3.console.aws.amazon.com/elasticbeanstalk/home?region=eu-west-3#/welcome), don't forget to sign in and make sure that you are in the right region (eg : eu-west-3 for Paris).\n\nHit **Get started** \n\n@@@ div { .centered-img }\n\n@@@\n\nSpecify the **Application name** of your application, Otoroshi for example.\n\n@@@ div { .centered-img }\n\n@@@\n \nChoose the **Platform** of the application you want to create, in your case use Docker.\n\nFor **Application code** choose **Upload your code** then hit **Upload**.\n\n@@@ div { .centered-img }\n\n@@@\n\nBrowse the zip created in the [previous section](#prepare-your-deployment-target) from your machine. \n\nAs you can see in the image above, you can also choose an S3 location; you can imagine that at the end of your build pipeline you upload your Zip to S3, and then get it from there (I wouldn't recommend that though).\n \nWhen the upload is done, hit **Configure more options**.\n \n@@@ div { .centered-img }\n\n@@@ \n \nRight now an AWS Elastic Beanstalk application has been created, and by default an environment named Otoroshi-env is being created as well.\n\nAWS Elastic Beanstalk can manage multiple environments of the same application, for instance environments can be (prod, preprod, experiments...). \n\nOtoroshi is a bit particular, it doesn't make much sense to have multiple environments, since Otoroshi will handle all the requests from/to downstream services regardless of the environment. \n \nAs you see in the image above, we are now configuring the Otoroshi-env, the one and only environment of Otoroshi.\n \nFor **Configuration presets**, choose custom configuration; now you have a load balancer for your environment with the capacity of at least one instance and at most four.\nI'd recommend at least 2 instances; to change that, on the **Capacity** card hit **Modify**. \n\n@@@ div { .centered-img }\n\n@@@\n\nChange the **Instances** to min 2, max 4 then hit **Save**. For the **Scaling triggers**, I'd keep the default values, but know that you can edit the capacity config any time you want, it only costs a redeploy, which will be done automatically by the way.\n \nThe instance size is t2.micro by default, which is a bit small for running Otoroshi; I'd recommend a t2.medium.
 \nOn the **Instances** card hit **Modify**.\n\n@@@ div { .centered-img }\n\n@@@\n\nFor **Instance type** choose t2.medium, then hit **Save**; no need to change the volume size, unless you have a lot of http call faults, which means a lot more logs, in which case the default volume size may not be enough.\n\nThe default environment created for Otoroshi, here Otoroshi-env, is a web server environment, which fits your case, but the thing is that on AWS Elastic Beanstalk, by default, a web server environment for a docker-based application runs behind an Nginx proxy.\nWe have to remove that proxy. So on the **Software** card hit **Modify**.\n \n@@@ div { .centered-img }\n\n@@@ \n \nFor **Proxy server** choose None then hit **Save**.\n\nAlso note that you can set Envs for Otoroshi on the same page (see image below). \n\n@@@ div { .centered-img }\n\n@@@ \n\nTo finalise the creation process, hit **Create app** on the bottom right.\n\nThe Otoroshi app is now created and running, which is cool, but we still have neither a **datastore** nor **https**.\n \n## Create an Otoroshi datastore on AWS ElastiCache\n\nBy default Otoroshi uses non persistent memory to store its data, but Otoroshi supports many kinds of datastores. In this section we will be covering the Redis datastore. \n\nBefore starting, using a datastore hosted by AWS is not at all mandatory, feel free to use your own if you like, but if you want to learn more about ElastiCache, this section may interest you, otherwise you can skip it.\n\nGo to [AWS ElastiCache](https://eu-west-3.console.aws.amazon.com/elasticache/home?region=eu-west-3#) and hit **Get Started Now**.\n\n@@@ div { .centered-img }\n\n@@@ \n\nFor **Cluster engine** keep Redis.\n\nChoose a **Name** for your datastore, for instance otoroshi-datastore.\n\nYou can keep all the other default values and hit **Create** on the bottom right of the page.\n\nOnce your Redis Cluster is created, it would look like the image below.\n\n@@@ div { .centered-img }\n\n@@@ \n\n\nFor applications in the same security group as your cluster, the redis cluster is accessible via the **Primary Endpoint**. Don't worry, the default security group is fine, you don't need any configuration to access the cluster from Otoroshi.\n\nTo make Otoroshi use the created cluster, you can either use the Envs `APP_STORAGE=redis`, `REDIS_HOST` and `REDIS_PORT`, or set `app.storage=redis`, `app.redis.host` and `app.redis.port` in your otoroshi.conf.
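\n\nFor example, with the env. variables option, the relevant configuration would look like this (a sketch, where the Redis endpoint is a placeholder):\n\n```sh\nexport APP_STORAGE=redis\nexport REDIS_HOST=otoroshi-datastore.xxxxxx.0001.euw3.cache.amazonaws.com\nexport REDIS_PORT=6379\n```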
\n\n## Create SSL certificate and configure your domain\n\nOtoroshi now has a datastore, but it is not yet ready for use. \n\nIn order to get it ready you need to :\n\n* Configure Otoroshi with your domain \n* Create a wildcard SSL certificate for your domain\n* Configure the Otoroshi AWS Elastic Beanstalk instance with the SSL certificate \n* Configure your DNS to redirect all traffic on your domain to Otoroshi \n \n### Configure Otoroshi with your domain\n\nYou can use ENVs or you can use a custom otoroshi.conf in your Docker container.\n\nFor the second option your otoroshi.conf would look like this :\n\n``` \n include \"application.conf\"\n http.port = 8080\n app {\n env = \"prod\"\n domain = \"mysubdomain.oto.tools\"\n rootScheme = \"https\"\n snowflake {\n seed = 0\n }\n events {\n maxSize = 1000\n }\n backoffice {\n subdomain = \"otoroshi\"\n session {\n exp = 86400000\n }\n }\n \n storage = \"redis\"\n redis {\n host=\"myredishost\"\n port=myredisport\n }\n \n privateapps {\n subdomain = \"privateapps\"\n }\n \n adminapi {\n targetSubdomain = \"otoroshi-admin-internal-api\"\n exposedSubdomain = \"otoroshi-api\"\n defaultValues {\n backOfficeGroupId = \"admin-api-group\"\n backOfficeApiKeyClientId = \"admin-client-id\"\n backOfficeApiKeyClientSecret = \"admin-client-secret\"\n backOfficeServiceId = \"admin-api-service\"\n }\n proxy {\n https = true\n local = false\n }\n }\n claim {\n sharedKey = \"myclaimsharedkey\"\n }\n }\n \n play.http {\n session {\n secure = false\n httpOnly = true\n maxAge = 2147483646\n domain = \".mysubdomain.oto.tools\"\n cookieName = \"oto-sess\"\n }\n }\n``` \n\n### Create a wildcard SSL certificate for your domain\n\nGo to [AWS Certificate Manager](https://eu-west-3.console.aws.amazon.com/acm/home?region=eu-west-3#/firstrun).\n\nBelow **Provision certificates** hit **Get started**.\n\n@@@ div { .centered-img }\n\n@@@ \n \nKeep the default selected value **Request a public certificate** and hit **Request a certificate**.\n \n@@@ div { .centered-img }\n\n@@@ \n\nPut your **Domain name**, use *. for wildcard, for instance *\\*.mysubdomain.oto.tools*, then hit **Next**.\n\n@@@ div { .centered-img }\n\n@@@ \n\nYou can choose between **Email validation** and **DNS validation**; I'd recommend **DNS validation**. Then hit **Review**. \n \n@@@ div { .centered-img }\n\n@@@ \n \nVerify that you put the right **Domain name** then hit **Confirm and request**. \n\n@@@ div { .centered-img }\n\n@@@\n \nAs you see in the image above, to let Amazon do the validation you have to add the `CNAME` record to your DNS configuration. Normally this operation takes around one day.\n \n### Configure the Otoroshi AWS Elastic Beanstalk instance with the SSL certificate \n\nOnce the certificate is validated, you need to modify the configuration of Otoroshi-env to add the SSL certificate for HTTPS. \nFor that you need to go to [AWS Elastic Beanstalk applications](https://eu-west-3.console.aws.amazon.com/elasticbeanstalk/home?region=eu-west-3#/applications),\nhit **Otoroshi-env**, then on the left side hit **Configuration**, then on the **Load balancer** card hit **Modify**.\n\n@@@ div { .centered-img }\n\n@@@\n\nIn the **Application Load Balancer** section hit **Add listener**.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the popup as in the image above, then hit **Add**.
\n\nYou should now see something like this: \n \n@@@ div { .centered-img }\n\n@@@ \n \n \nMake sure that your listener is enabled, and on the bottom right of the page hit **Apply**.\n\nNow you have **https**, so let's use Otoroshi.\n\n### Configure your DNS to redirect all traffic on your domain to Otoroshi\n \nIt's actually pretty simple, you just need to add a `CNAME` record to your DNS configuration that redirects *\*.mysubdomain.oto.tools* to the DNS name of Otoroshi's load balancer.\n\nTo find the DNS name of Otoroshi's load balancer go to [AWS Ec2](https://eu-west-3.console.aws.amazon.com/ec2/v2/home?region=eu-west-3#LoadBalancers:tag:elasticbeanstalk:environment-name=Otoroshi-env;sort=loadBalancerName)\n\nYou should find something like this: \n \n@@@ div { .centered-img }\n\n@@@ \n\nThere is your DNS name, so add your `CNAME` record. \n \nOnce all these steps are done, the AWS Elastic Beanstalk Otoroshi instance will handle all the requests on your domain. ;) \n"},{"name":"clevercloud.md","id":"/deploy/clevercloud.md","url":"/deploy/clevercloud.html","title":"Clever Cloud","content":"# Clever Cloud\n\nNow you want to use Otoroshi on Clever Cloud. Otoroshi has been designed and created to run on Clever Cloud and a lot of choices were made because of how Clever Cloud works.\n\n## Create an Otoroshi instance on CleverCloud\n\nIf you want to customize the configuration, @ref:[use env. variables](../firstrun/env.md); you can use [the example provided below](#example-of-clevercloud-env-variables)\n\nCreate a new CleverCloud app based on a clevercloud git repo (not empty) or a github project of your own (not empty).\n\n@@@ div { .centered-img }\n\n@@@\n\nThen choose what kind of app you want to create; for Otoroshi, choose `Java + Jar`\n\n@@@ div { .centered-img }\n\n@@@\n\nNext, choose the instance size and auto-scaling settings. Otoroshi can run on small instances, especially if you just want to test it.\n\n@@@ div { .centered-img }\n\n@@@\n\nFinally, choose a name for your app\n\n@@@ div { .centered-img }\n\n@@@\n\nNow you just need to customize environment variables\n\nat this point, you can also add other env. variables to configure Otoroshi like in [the example provided below](#example-of-clevercloud-env-variables)\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also use expert mode:\n\n@@@ div { .centered-img }\n\n@@@\n\nNow, your app is ready, don't forget to add a custom domain name on the CleverCloud app matching the Otoroshi app domain. \n\n## Example of CleverCloud env. variables\n\nYou can add more env variables to customize your Otoroshi instance like the following. Use the expert mode to copy/paste all the values in one shot. 
If you want a real datastore, create a Redis addon on Clever Cloud, link it to your otoroshi app and change the `APP_STORAGE` variable to `redis`.\n\n```\nADMIN_API_CLIENT_ID=xxxx\nADMIN_API_CLIENT_SECRET=xxxxx\nADMIN_API_GROUP=xxxxxx\nADMIN_API_SERVICE_ID=xxxxxxx\nCLAIM_SHAREDKEY=xxxxxxx\nOTOROSHI_INITIAL_ADMIN_LOGIN=youremailaddress\nOTOROSHI_INITIAL_ADMIN_PASSWORD=yourpassword\nPLAY_CRYPTO_SECRET=xxxxxx\nSESSION_NAME=oto-session\nAPP_DOMAIN=yourdomain.tech\nAPP_ENV=prod\nAPP_STORAGE=inmemory\nAPP_ROOT_SCHEME=https\nCC_PRE_BUILD_HOOK=curl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/${latest_otoroshi_version}/otoroshi.jar'\nCC_JAR_PATH=./otoroshi.jar\nCC_JAVA_VERSION=11\nPORT=8080\nSESSION_DOMAIN=.yourdomain.tech\nSESSION_MAX_AGE=604800000\nSESSION_SECURE_ONLY=true\nUSER_AGENT=otoroshi\nMAX_EVENTS_SIZE=1\nWEBHOOK_SIZE=100\nAPP_BACKOFFICE_SESSION_EXP=86400000\nAPP_PRIVATEAPPS_SESSION_EXP=86400000\nENABLE_METRICS=true\nOTOROSHI_ANALYTICS_PRESSURE_ENABLED=true\nUSE_CACHE=true\n```\n
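\nFor example, once the Redis addon is linked, the datastore related variables above could become something like the following (a minimal sketch using the same placeholder style as the rest of this page; the actual host and port values are the ones displayed in your addon dashboard):\n\n```\nAPP_STORAGE=redis\nREDIS_HOST=myredishost\nREDIS_PORT=myredisport\n```\n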
"},{"name":"index.md","id":"/deploy/index.md","url":"/deploy/index.html","title":"Deploy to production","content":"# Deploy to production\n\nNow it's time to deploy Otoroshi in production, in this chapter we will see what kind of things you can do.\n\n@@@ index\n\n* [Kubernetes](./kubernetes.md)\n* [Clever Cloud](./clevercloud.md)\n* [AWS - Elastic Beanstalk](./aws-beanstalk.md)\n* [others](./other.md) \n* [Scaling](./scaling.md) \n\n@@@"},{"name":"kubernetes.md","id":"/deploy/kubernetes.md","url":"/deploy/kubernetes.html","title":"Kubernetes","content":"# Kubernetes\n\nStarting at version 1.5.0, Otoroshi provides a native Kubernetes support. Multiple otoroshi jobs (that are actually kubernetes controllers) are provided in order to\n\n- sync kubernetes secrets of type `kubernetes.io/tls` to otoroshi certificates\n- act as a standard ingress controller (supporting `Ingress` objects)\n- provide Custom Resource Definitions (CRDs) to manage Otoroshi entities from Kubernetes and act as an ingress controller with its own resources\n\n## Installing otoroshi on your kubernetes cluster\n\n@@@ warning\nYou need to have cluster admin privileges to install otoroshi and its service account, role mapping and CRDs on a kubernetes cluster. We also advise you to create a dedicated namespace (you can name it `otoroshi` for example) to install otoroshi\n@@@\n\nIf you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.\n\nYou can also create a `kustomization.yaml` file with a remote base\n\n```yaml\nbases:\n- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v1.5.0-alpha.6\n```\n\nThen deploy it with `kubectl apply -k ./overlays/myoverlay`. \n\nYou can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster\n\n```sh\nhelm repo add otoroshi https://maif.github.io/otoroshi/helm\nhelm install my-otoroshi otoroshi/otoroshi\n```\n\nBelow, you will find example of deployment. Do not hesitate to adapt them to your needs. 
Those descriptors have value placeholders that you will need to replace with actual values like \n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: ${domain}\n```\n\nyou will have to edit it to make it look like\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'apis.my.domain'\n```\n\nif you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of otoroshi\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: maif/otoroshi:1.5.0-alpha.6-jdk11\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create a secret for each placeholder, mount them to the otoroshi container, then use their file paths as values\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nyou can use the same trick in the configuration file itself\n\n### Note on bare metal kubernetes cluster installation\n\n@@@ note\nBare metal kubernetes clusters don't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples below.\n@@@\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@\n\n### Common manifests\n\nthe following manifests are always needed. They create otoroshi CRDs, tokens, role, etc. Redis deployment is not mandatory, it's just an example. You can use your own existing setup.\n\nrbac.yaml\n: @@snip [rbac.yaml](../snippets/kubernetes/kustomize/base/rbac.yaml) \n\ncrds.yaml\n: @@snip [crds.yaml](../snippets/kubernetes/kustomize/base/crds.yaml) \n\nredis.yaml\n: @@snip [redis.yaml](../snippets/kubernetes/kustomize/base/redis.yaml) \n\n\n### Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type `LoadBalancer` to expose otoroshi to the rest of the world. 
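\n\nOnce the manifests are applied, you can retrieve the external address allocated to the `LoadBalancer` service with something like the following (a sketch; the service and namespace names depend on how you customized the overlay):\n\n```sh\nkubectl get services -n otoroshi\n# or, to get only the external hostname of a service named otoroshi-service\nkubectl get service otoroshi-service -n otoroshi -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'\n```\n\n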
You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple/dns.example) \n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/dns.example) \n\n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have one otoroshi instance on each kubernetes node (with the `otoroshi-kind: instance` label) with redis persistence. The otoroshi instances are exposed as `hostPort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/dns.example) \n\n### Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas of an otoroshi leader connected to a redis instance and 2 replicas of an otoroshi worker connected to the leader. We use a service of type `LoadBalancer` to expose otoroshi leader/worker to the rest of the world. You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster\n\nHere we have 2 replicas of the otoroshi leader connected to the same redis instance and 2 replicas of the otoroshi worker. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster using DaemonSet\n\nHere we have 1 otoroshi leader instance on each kubernetes node (with the `otoroshi-kind: leader` label) connected to the same redis instance and 1 otoroshi worker instance on each kubernetes node (with the `otoroshi-kind: worker` label). The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": true, // sync ingresses\n \"crds\": false, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": 
{},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nNow you can deploy your first service ;)\n\n### Deploy an ingress route\n\nnow let's say you want to deploy an http service and route to the outside world through otoroshi\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: kennethreitz/httpbin\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 80\n name: \"http\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8080\n targetPort: http\n name: http\n selector:\n run: http-app-deployment\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nonce deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n### Support for Ingress Classes\n\nSince Kubernetes 1.18, you can use `IngressClass` type of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest `kind`. To use it, configure the Ingress job to match your controller\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"otoroshi.io/ingress-controller\"],\n ...\n }\n}\n```\n\nthen you have to deploy an `IngressClass` to declare Otoroshi as an ingress controller\n\n```yaml\napiVersion: \"networking.k8s.io/v1beta1\"\nkind: \"IngressClass\"\nmetadata:\n name: \"otoroshi-ingress-controller\"\nspec:\n controller: \"otoroshi.io/ingress-controller\"\n parameters:\n apiGroup: \"proxy.otoroshi.io/v1alpha\"\n kind: \"IngressParameters\"\n name: \"otoroshi-ingress-controller\"\n```\n\nand use it in your `Ingress`\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\nspec:\n ingressClassName: otoroshi-ingress-controller\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\n### Use multiple ingress controllers\n\nIt is of course possible to use multiple ingress controller at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation `kubernetes.io/ingress.class`. 
By default, otoroshi reacts to the class `otoroshi`, but you can make it the default ingress controller with the following config\n\n```json\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"*\"],\n ...\n }\n}\n```\n\n### Supported annotations\n\nif you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customisation, just go to the CRDs part. The following annotations are supported:\n\n- `ingress.otoroshi.io/groups`\n- `ingress.otoroshi.io/group`\n- `ingress.otoroshi.io/groupId`\n- `ingress.otoroshi.io/name`\n- `ingress.otoroshi.io/targetsLoadBalancing`\n- `ingress.otoroshi.io/stripPath`\n- `ingress.otoroshi.io/enabled`\n- `ingress.otoroshi.io/userFacing`\n- `ingress.otoroshi.io/privateApp`\n- `ingress.otoroshi.io/forceHttps`\n- `ingress.otoroshi.io/maintenanceMode`\n- `ingress.otoroshi.io/buildMode`\n- `ingress.otoroshi.io/strictlyPrivate`\n- `ingress.otoroshi.io/sendOtoroshiHeadersBack`\n- `ingress.otoroshi.io/readOnly`\n- `ingress.otoroshi.io/xForwardedHeaders`\n- `ingress.otoroshi.io/overrideHost`\n- `ingress.otoroshi.io/allowHttp10`\n- `ingress.otoroshi.io/logAnalyticsOnServer`\n- `ingress.otoroshi.io/useAkkaHttpClient`\n- `ingress.otoroshi.io/useNewWSClient`\n- `ingress.otoroshi.io/tcpUdpTunneling`\n- `ingress.otoroshi.io/detectApiKeySooner`\n- `ingress.otoroshi.io/letsEncrypt`\n- `ingress.otoroshi.io/publicPatterns`\n- `ingress.otoroshi.io/privatePatterns`\n- `ingress.otoroshi.io/additionalHeaders`\n- `ingress.otoroshi.io/additionalHeadersOut`\n- `ingress.otoroshi.io/missingOnlyHeadersIn`\n- `ingress.otoroshi.io/missingOnlyHeadersOut`\n- `ingress.otoroshi.io/removeHeadersIn`\n- `ingress.otoroshi.io/removeHeadersOut`\n- `ingress.otoroshi.io/headersVerification`\n- `ingress.otoroshi.io/matchingHeaders`\n- `ingress.otoroshi.io/ipFiltering.whitelist`\n- `ingress.otoroshi.io/ipFiltering.blacklist`\n- `ingress.otoroshi.io/api.exposeApi`\n- `ingress.otoroshi.io/api.openApiDescriptorUrl`\n- `ingress.otoroshi.io/healthCheck.enabled`\n- `ingress.otoroshi.io/healthCheck.url`\n- `ingress.otoroshi.io/jwtVerifier.ids`\n- `ingress.otoroshi.io/jwtVerifier.enabled`\n- `ingress.otoroshi.io/jwtVerifier.excludedPatterns`\n- `ingress.otoroshi.io/authConfigRef`\n- `ingress.otoroshi.io/redirection.enabled`\n- `ingress.otoroshi.io/redirection.code`\n- `ingress.otoroshi.io/redirection.to`\n- `ingress.otoroshi.io/clientValidatorRef`\n- `ingress.otoroshi.io/transformerRefs`\n- `ingress.otoroshi.io/transformerConfig`\n- `ingress.otoroshi.io/accessValidator.enabled`\n- `ingress.otoroshi.io/accessValidator.excludedPatterns`\n- `ingress.otoroshi.io/accessValidator.refs`\n- `ingress.otoroshi.io/accessValidator.config`\n- `ingress.otoroshi.io/preRouting.enabled`\n- `ingress.otoroshi.io/preRouting.excludedPatterns`\n- `ingress.otoroshi.io/preRouting.refs`\n- `ingress.otoroshi.io/preRouting.config`\n- `ingress.otoroshi.io/issueCert`\n- `ingress.otoroshi.io/issueCertCA`\n- `ingress.otoroshi.io/gzip.enabled`\n- `ingress.otoroshi.io/gzip.excludedPatterns`\n- `ingress.otoroshi.io/gzip.whiteList`\n- `ingress.otoroshi.io/gzip.blackList`\n- `ingress.otoroshi.io/gzip.bufferSize`\n- `ingress.otoroshi.io/gzip.chunkedThreshold`\n- `ingress.otoroshi.io/gzip.compressionLevel`\n- `ingress.otoroshi.io/cors.enabled`\n- `ingress.otoroshi.io/cors.allowOrigin`\n- `ingress.otoroshi.io/cors.exposeHeaders`\n- `ingress.otoroshi.io/cors.allowHeaders`\n- `ingress.otoroshi.io/cors.allowMethods`\n- `ingress.otoroshi.io/cors.excludedPatterns`\n- 
`ingress.otoroshi.io/cors.maxAge`\n- `ingress.otoroshi.io/cors.allowCredentials`\n- `ingress.otoroshi.io/clientConfig.useCircuitBreaker`\n- `ingress.otoroshi.io/clientConfig.retries`\n- `ingress.otoroshi.io/clientConfig.maxErrors`\n- `ingress.otoroshi.io/clientConfig.retryInitialDelay`\n- `ingress.otoroshi.io/clientConfig.backoffFactor`\n- `ingress.otoroshi.io/clientConfig.connectionTimeout`\n- `ingress.otoroshi.io/clientConfig.idleTimeout`\n- `ingress.otoroshi.io/clientConfig.callAndStreamTimeout`\n- `ingress.otoroshi.io/clientConfig.callTimeout`\n- `ingress.otoroshi.io/clientConfig.globalTimeout`\n- `ingress.otoroshi.io/clientConfig.sampleInterval`\n- `ingress.otoroshi.io/enforceSecureCommunication`\n- `ingress.otoroshi.io/sendInfoToken`\n- `ingress.otoroshi.io/sendStateChallenge`\n- `ingress.otoroshi.io/secComHeaders.claimRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateResponseName`\n- `ingress.otoroshi.io/secComTtl`\n- `ingress.otoroshi.io/secComVersion`\n- `ingress.otoroshi.io/secComInfoTokenVersion`\n- `ingress.otoroshi.io/secComExcludedPatterns`\n- `ingress.otoroshi.io/secComSettings.size`\n- `ingress.otoroshi.io/secComSettings.secret`\n- `ingress.otoroshi.io/secComSettings.base64`\n- `ingress.otoroshi.io/secComUseSameAlgo`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.size`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64`\n- `ingress.otoroshi.io/secComAlgoInfoToken.size`\n- `ingress.otoroshi.io/secComAlgoInfoToken.secret`\n- `ingress.otoroshi.io/secComAlgoInfoToken.base64`\n- `ingress.otoroshi.io/securityExcludedPatterns`\n\nfor more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n\nwith the previous example, the ingress does not define any apikey, so the route is public. 
If you want to enable apikeys on it, you can deploy the following descriptor\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\n ingress.otoroshi.io/group: http-app-group\n ingress.otoroshi.io/forceHttps: 'true'\n ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'\n ingress.otoroshi.io/overrideHost: 'true'\n ingress.otoroshi.io/allowHttp10: 'false'\n ingress.otoroshi.io/publicPatterns: ''\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nnow you can use an existing apikey in the `http-app-group` to access your app\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1\n```\n\n## Use Otoroshi CRDs for a better/full integration\n\nOtoroshi provides some Custom Resource Definitions for kubernetes in order to manage Otoroshi-related entities in kubernetes\n\n- `service-groups`\n- `service-descriptors`\n- `apikeys`\n- `certificates`\n- `global-configs`\n- `jwt-verifiers`\n- `auth-modules`\n- `scripts`\n- `tcp-services`\n- `data-exporters`\n- `admins`\n- `teams`\n- `organizations`\n\nusing CRDs, you will be able to deploy and manage those entities from kubectl or the kubernetes api like\n\n```sh\nsudo kubectl get apikeys --all-namespaces\nsudo kubectl get service-descriptors --all-namespaces\ncurl -X GET \\\n -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \\\n -H 'Accept: application/json' -k \\\n https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1alpha1/apikeys | jq\n```\n\nYou can see this as better `Ingress` resources. Just as any `Ingress` resource can define which controller it uses (using the `kubernetes.io/ingress.class` annotation), you can choose another kind of resource instead of `Ingress`. With Otoroshi CRDs you can even define resources like `Certificate`, `Apikey`, `AuthModules`, `JwtVerifier`, etc. It will help you to use all the power of Otoroshi while using the deployment model of kubernetes.\n \n@@@ warning\nwhen using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the descriptors deployed will override the one in the Otoroshi datastore each time it's synced. So be careful if you use the Otoroshi UI or the API, some changes in configuration may be overridden by the CRDs sync job.\n@@@\n\n### Resources examples\n\ngroup.yaml\n: @@snip [group.yaml](../snippets/crds/group.yaml) \n\napikey.yaml\n: @@snip [apikey.yaml](../snippets/crds/apikey.yaml) \n\nservice-descriptor.yaml\n: @@snip [service.yaml](../snippets/crds/service-descriptor.yaml) \n\ncertificate.yaml\n: @@snip [cert.yaml](../snippets/crds/certificate.yaml) \n\njwt.yaml\n: @@snip [jwt.yaml](../snippets/crds/jwt.yaml) \n\nauth.yaml\n: @@snip [auth.yaml](../snippets/crds/auth.yaml) \n\norganization.yaml\n: @@snip [orga.yaml](../snippets/crds/organization.yaml) \n\nteam.yaml\n: @@snip [team.yaml](../snippets/crds/team.yaml) \n\n\n### Configuration\n\nTo configure it, just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Otoroshi CRDs Controller`. 
Then add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"crds\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": false, // sync ingresses\n \"crds\": true, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nyou can find a more complete example of the configuration object [here](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/plugins/jobs/kubernetes/config.scala#L134-L163)\n\n### Note about `apikeys` and `certificates` resources\n\nApikeys and Certificates are a little bit different than the other resources. They have the ability to be defined without their secret part, but with an export setting, so otoroshi will generate the secret parts and export the apikey or the certificate to a kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)\n\nIn those resources you can define \n\n```yaml\nexportSecret: true \nsecretName: the-secret-name\n```\n\nand omit `clientSecret` for apikeys, or `publicKey` and `privateKey` for certificates. 
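\n\nFor instance, a minimal apikey resource using this export feature could look like the following sketch (the names and the authorized group are hypothetical, see the full CRD example later in this page):\n\n```yaml\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: my-exported-apikey\nspec:\n # no clientSecret here, otoroshi will generate it\n exportSecret: true\n secretName: my-apikey-secret\n authorizedEntities: \n - group_default\n```\n\n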
For certificates you will have to provide a `csr` in order to generate them\n\n```yaml\ncsr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n - httpapps.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n```\n\nwhen apikeys are exported as kubernetes secrets, they will have the type `otoroshi.io/apikey-secret` with values `clientId` and `clientSecret`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: apikey-1\ntype: otoroshi.io/apikey-secret\ndata:\n clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n```\n\nwhen certificates are exported as kubernetes secrets, they will have the type `kubernetes.io/tls` with the standard values `tls.crt` (the full cert chain) and `tls.key` (the private key). For more convenience, they will also have a `cert.crt` value containing the actual certificate without the ca chain and `ca-chain.crt` containing the ca chain without the certificate.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: certificate-1\ntype: kubernetes.io/tls\ndata:\n tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA== \n```\n\n## Full CRD example\n\nthen you can deploy the previous example with a better configuration level, using mtls, apikeys, etc.\n\nLet's say the app looks like this:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\n// here we read the apikey to access http-app-2 from files mounted from secrets\nconst clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')\nconst clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')\n\nconst backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')\nconst backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')\nconst backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')\n\nconst clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')\nconst clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')\nconst clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')\n\nfunction callApi2() {\n return new Promise((success, failure) => {\n const options = { \n // using the implicit internal name (*.global.otoroshi.mesh) of the other service descriptor passing through otoroshi\n hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh', \n port: 443, \n path: '/', \n method: 'GET',\n headers: {\n 'Accept': 'application/json',\n 'Otoroshi-Client-Id': clientId,\n 'Otoroshi-Client-Secret': clientSecret,\n },\n cert: clientCert,\n key: clientKey,\n ca: clientCa\n }; \n let data = '';\n const req = https.request(options, (res) => { \n res.on('data', (d) => { \n data = data + d.toString('utf8');\n }); \n res.on('end', () => { \n success({ body: 
JSON.parse(data), res });\n }); \n res.on('error', (e) => { \n failure(e);\n }); \n }); \n req.end();\n })\n}\n\nconst options = { \n key: backendKey, \n cert: backendCert, \n ca: backendCa, \n // we want mtls behavior\n requestCert: true, \n rejectUnauthorized: true\n}; \nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {'Content-Type': 'application/json'});\n callApi2().then(resp => {\n res.end(JSON.stringify({ message: `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body })); \n });\n}).listen(443);\n```\n\nthen, the descriptors will be:\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: foo/http-app\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 443\n name: \"https\"\n volumeMounts:\n - name: apikey-volume\n # here you will be able to read apikey from files \n # - /var/run/secrets/kubernetes.io/apikeys/clientId\n # - /var/run/secrets/kubernetes.io/apikeys/clientSecret\n mountPath: \"/var/run/secrets/kubernetes.io/apikeys\"\n readOnly: true\n - name: backend-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/backend/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/backend\"\n readOnly: true\n - name: client-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/client/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/client/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/client\"\n readOnly: true\n volumes:\n - name: apikey-volume\n secret:\n # here we reference the secret name from apikey http-app-2-apikey-1\n secretName: secret-2\n - name: backend-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-backend\n secretName: http-app-certificate-backend-secret\n - name: client-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-client\n secretName: http-app-certificate-client-secret\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8443\n targetPort: https\n name: https\n selector:\n run: http-app-deployment\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ServiceGroup\nmetadata:\n name: http-app-group\n annotations:\n otoroshi.io/id: http-app-group\nspec:\n description: a group to hold services about the http-app\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-apikey-1\n# this apikey can be used to access the app\nspec:\n # a secret named secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-1\n authorizedEntities: \n - group_http-app-group\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-1\n# this apikey can be used to access another app in a different group\nspec:\n # a secret named secret-2 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-2\n authorizedEntities: \n - group_http-app-2-group\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-frontend\nspec:\n description: certificate for the http-app on otoroshi frontend\n 
autoRenew: true\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-backend\nspec:\n description: certificate for the http-app deployed on pods\n autoRenew: true\n # a secret named http-app-certificate-backend-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-backend-secret\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - http-app-service \n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-back, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: Certificate\nmetadata:\n name: http-app-certificate-client\nspec:\n description: certificate for the http-app\n autoRenew: true\n secretName: http-app-certificate-client-secret\n csr:\n issuer: CN=Otoroshi Root\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-client, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ServiceDescriptor\nmetadata:\n name: http-app-service-descriptor\nspec:\n description: the service descriptor for the http app\n groups: \n - http-app-group\n forceHttps: true\n hosts:\n - httpapp.foo.bar # hostname exposed outside of the kubernetes cluster\n # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster \n matchingRoot: /\n targets:\n - url: https://http-app-service:8443\n # alternatively, you can use serviceName and servicePort to use pods ip addresses\n # serviceName: http-app-service\n # servicePort: https\n mtlsConfig:\n # use mtls to contact the backend\n mtls: true\n certs: \n # reference the DN for the client cert\n - UID=httpapp-client, O=OtoroshiApps\n trustedCerts: \n # reference the DN for the CA cert \n - CN=Otoroshi Root\n sendOtoroshiHeadersBack: true\n xForwardedHeaders: true\n overrideHost: true\n allowHttp10: false\n publicPatterns:\n - /health\n additionalHeaders:\n x-foo: bar\n# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc ... for more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n```\n\nnow with this descriptor deployed, you can access your app with a command like \n\n```sh\nCLIENT_ID=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientId}\" | base64 --decode`\nCLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientSecret}\" | base64 --decode`\ncurl -X GET https://httpapp.foo.bar/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n## Expose Otoroshi to outside world\n\nIf you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: `LoadBalancer`). In your DNS settings, you'll need to point any name that should be routed by otoroshi to the loadbalancer endpoint (CNAME or ip addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they will provide external loadbalancers out of the box. 
However, if you use a bare metal kubernetes cluster, it doesn't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples in the installation section.\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@ \n\n## Access a service from inside the k8s cluster\n\n### Using host header overriding\n\nYou can access any service referenced in otoroshi, through otoroshi, from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain like:\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using dedicated services\n\nit's also possible to define services that target the otoroshi deployment (or otoroshi workers deployment) and use them as valid hosts in otoroshi services \n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-awesome-service\nspec:\n selector:\n # run: otoroshi-deployment\n # or in cluster mode\n run: otoroshi-worker-deployment\n ports:\n - port: 8080\n name: \"http\"\n targetPort: \"http\"\n - port: 8443\n name: \"https\"\n targetPort: \"https\"\n```\n\nand access it like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-namespace.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using coredns integration\n\nYou can also enable the coredns integration to simplify the flow. 
You can use the following keys in the plugin config:\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"coreDnsIntegration\": true, // enable coredns integration for intra cluster calls\n \"kubeSystemNamespace\": \"kube-system\", // the namespace where coredns is deployed\n \"corednsConfigMap\": \"coredns\", // the name of the coredns configmap\n \"otoroshiServiceName\": \"otoroshi-service\", // the name of the otoroshi service, could be otoroshi-workers-service\n \"otoroshiNamespace\": \"otoroshi\", // the namespace where otoroshi is deployed\n \"clusterDomain\": \"cluster.local\", // the domain for cluster services\n ...\n }\n}\n```\n\notoroshi will patch the coredns config at startup, then you can call your services like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh` or `${service-name}.${service-namespace}.svc.otoroshi.local`\n\n### Using coredns with manual patching\n\nyou can also patch the coredns config manually\n\n```sh\nkubectl edit configmaps coredns -n kube-system # or your own custom config map\n```\n\nand change the `Corefile` data to add the following snippet in at the end of the file\n\n```yaml\notoroshi.mesh:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n rewrite name regex (.*)\\.otoroshi\\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n```\n\nyou can also define a simpler rewrite if it suits your use case better\n\n```\nrewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n```\n\ndo not hesitate to change `otoroshi-worker-service.otoroshi` according to your own setup. If otoroshi is not in cluster mode, change it to `otoroshi-service.otoroshi`. If otoroshi is not deployed in the `otoroshi` namespace, change it to `otoroshi-service.the-namespace`, etc.\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh`\n\nthen you can call your service like \n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\n\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using old kube-dns system\n\nif you're stuck with an old version of kubernetes that uses kube-dns, which is not supported by otoroshi, you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system. 
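\n\nThe stub domain declaration itself happens in the `kube-dns` configmap. Here is a sketch of what it could look like, assuming your dedicated coredns instance is reachable at the cluster ip `10.96.0.53` (this address is just an illustration, use the cluster ip of your own coredns service):\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: kube-dns\n namespace: kube-system\ndata:\n stubDomains: |\n {\"otoroshi.mesh\": [\"10.96.0.53\"]}\n```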
\n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the kube-dns integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"kubeDnsOperatorIntegration\": true, // enable kube-dns integration for intra cluster calls\n \"kubeDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"kubeDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"kubeDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\n### Using Openshift DNS operator\n\nThe Openshift DNS operator does not allow much customization of the DNS configuration, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the Openshift DNS operator integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"openshiftDnsOperatorIntegration\": true, // enable openshift dns operator integration for intra cluster calls\n \"openshiftDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"openshiftDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"openshiftDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\ndon't forget to update the otoroshi `ClusterRole`\n\n```yaml\n- apiGroups:\n - operator.openshift.io\n resources:\n - dnses\n verbs:\n - get\n - list\n - watch\n - update\n```\n\n## Easier integration with otoroshi-sidecar\n\nOtoroshi can help you to easily use existing services without modifications while getting all the perks of otoroshi like apikeys, mTLS, exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pod of your deployment that will handle calls coming from and going to otoroshi. To enable otoroshi-sidecar, you need to deploy the following admission webhooks\n\nwebhooks.yaml\n: @@snip [webhooks.yaml](../snippets/kubernetes/kustomize/base/webhooks.yaml)\n\nthen it's quite easy to add the sidecar, just add the following label to your pod `otoroshi.io/sidecar: inject` and some annotations to tell otoroshi what certificates and apikeys to use.\n\n```yaml\nannotations:\n otoroshi.io/sidecar-apikey: backend-apikey\n otoroshi.io/sidecar-backend-cert: backend-cert\n otoroshi.io/sidecar-client-cert: oto-client-cert\n otoroshi.io/token-secret: secret\n otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps\n```\n\nnow you can just call your otoroshi handled apis from inside your pod like `curl http://my-service.namespace.otoroshi.mesh/api` without passing any apikey or client certificate and the sidecar will handle everything for you. 
The same goes for calls from otoroshi to your pod: everything will be done in an mTLS fashion, with apikeys and the otoroshi exchange protocol\n\nhere is a full example\n\nsidecar.yaml\n: @@snip [sidecar.yaml](../snippets/kubernetes/kustomize/base/sidecar.yaml)\n\n@@@ warning\nPlease avoid using port `80` for your pod, as it's the default port used to access otoroshi from your pod and calls to it will be redirected to the sidecar via an iptables rule\n@@@\n\n## Daikoku integration\n\nIt is possible to easily integrate daikoku generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and set the integration mode to `Automatic`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen, when a user subscribes to an apikey, they will only see an integration token\n\n@@@ div { .centered-img }\n\n@@@\n\nthen just create an ApiKey manifest with this token and you're good to go \n\n```yaml\napiVersion: proxy.otoroshi.io/v1alpha1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-3\nspec:\n exportSecret: true \n secretName: secret-3\n daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy\n```\n\n"},{"name":"other.md","id":"/deploy/other.md","url":"/deploy/other.html","title":"Others","content":"# Others\n\nOtoroshi can run wherever you want, even on a raspberry pi (Cluster^^) ;)\n\nThis section is not finished yet. So, as Otoroshi is available as a @ref:[Docker image](../getotoroshi/fromdocker.md) that you can run on any Docker compatible cloud, just go ahead and use it on your cloud provider until we have more detailed documentation.\n\n## Running Otoroshi on AWS Elastic Beanstalk\n\nSee the @ref:[dedicated page to run Otoroshi on AWS Elastic Beanstalk](./aws-beanstalk.md)\n\n## Running Otoroshi on Amazon Elastic Container Service\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html)\n\n## Running Otoroshi on GCE\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Google Compute Engine container integration](https://cloud.google.com/compute/docs/containers/deploying-containers)\n\n## Running Otoroshi on Azure\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Azure Container Service](https://azure.microsoft.com/en-us/services/container-service/)\n\n## Running Otoroshi on Heroku\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Docker integration](https://devcenter.heroku.com/articles/container-registry-and-runtime)\n\n## Running Otoroshi on CloudFoundry\n\nDeploy the @ref:[Docker image](../firstrun/run.md#from-docker) using [Docker integration](https://docs.cloudfoundry.org/adminguide/docker.html)\n\n## Running Otoroshi on your own infrastructure\n\nAs Otoroshi is a [Play Framework](https://www.playframework.com) application, you can read the doc about putting a `Play` app in production.\n\nhttps://www.playframework.com/documentation/2.6.x/ProductionConfiguration\n\nDownload the latest @ref:[Otoroshi distribution](../getotoroshi/frombinaries.md), unzip it, customize it and run it.\n"},{"name":"scaling.md","id":"/deploy/scaling.md","url":"/deploy/scaling.html","title":"Scaling Otoroshi","content":"# Scaling Otoroshi\n\n## Using multiple instances with a front load balancer\n\nOtoroshi has been designed to work with multiple instances. 
If you already have an infrastructure using frontal load balancing, you just have to declare Otoroshi instances as the target of all domain names handled by Otoroshi\n\n## Using master / workers mode of Otoroshi\n\nYou can read everything about it in @ref:[the clustering section](../topics/clustering.md) of the documentation.\n\n## Using IPVS\n\nYou can use [IPVS](https://en.wikipedia.org/wiki/IP_Virtual_Server) to load balance layer 4 traffic directly from the Linux Kernel to multiple instances of Otoroshi. You can find an example of configuration [here](http://www.linuxvirtualserver.org/VS-DRouting.html) \n\n## Using DNS Round Robin\n\nYou can use [DNS round robin technique](https://en.wikipedia.org/wiki/Round-robin_DNS) to declare multiple A records under the domain names handled by Otoroshi.\n\n## Using software L4/L7 load balancers\n\nYou can use software L4/L7 load balancers like NGINX or HAProxy to load balance traffic to multiple instances of Otoroshi.\n\nNGINX L7\n: @@snip [nginx-http.conf](../snippets/nginx-http.conf) \n\nNGINX L4\n: @@snip [nginx-tcp.conf](../snippets/nginx-tcp.conf) \n\nHA Proxy L7\n: @@snip [haproxy-http.conf](../snippets/haproxy-http.conf) \n\nHA Proxy L4\n: @@snip [haproxy-tcp.conf](../snippets/haproxy-tcp.conf) \n\n## Using a custom TCP load balancer\n\nYou can also use any other TCP load balancer, from a hardware box to a small js file like\n\ntcp-proxy.js\n: @@snip [tcp-proxy.js](../snippets/tcp-proxy.js) \n\ntcp-proxy.rs\n: @@snip [tcp-proxy.rs](../snippets/proxy.rs) \n\n"},{"name":"dev.md","id":"/dev.md","url":"/dev.html","title":"Developing Otoroshi ","content":"# Developing Otoroshi \n\nIf you want to play with Otoroshi's code, here are some tips\n\n## The tools\n\nYou will need\n\n* git\n* JDK 11\n* SBT 1.3.x\n* Node 13 + yarn 1.x\n\n## Clone the repository\n\n```sh\ngit clone https://github.com/MAIF/otoroshi.git\n```\n\nor fork otoroshi and clone your own repository.\n\n## Run otoroshi in dev mode\n\nto run otoroshi in dev mode, you'll need to run two separate processes to serve the javascript UI and the server part.\n\n### Javascript side\n\njust go to `/otoroshi/javascript` and install the dependencies with\n\n```sh\nyarn install\n# or\nnpm install\n```\n\nthen run the dev server with\n\n```sh\nyarn start\n# or\nnpm run start\n```\n\n### Server side\n\nset up SBT opts with\n\n```sh\nexport SBT_OPTS=\"-Xmx2G -Xss6M\"\n```\n\nthen just go to `/otoroshi` and run the sbt console with \n\n```sh\nsbt\n```\n\nthen in the sbt console run the following command\n\n```sh\n~run -Dapp.storage=file -Dapp.liveJs=true -Dhttps.port=9998 -Dapp.privateapps.port=9999 -Dapp.adminPassword=password -Dapp.domain=oto.tools -Dplay.server.https.engineProvider=ssl.DynamicSSLEngineProvider -Dapp.events.maxSize=0\n```\n\nyou can now access your otoroshi instance at `http://otoroshi.oto.tools:9999`\n\n## Test otoroshi\n\nto run the otoroshi tests, just go to `/otoroshi` and run the main test suite with\n\n```sh\nsbt 'testOnly OtoroshiTests'\n```\n\n## Create a release\n\njust go to `/otoroshi/javascript` and then build the UI\n\n```sh\nyarn install\nyarn build\n```\n\nthen go to `/otoroshi` and build the otoroshi distribution\n\n```sh\nsbt ';clean;compile;dist;assembly'\n```\n\nthe otoroshi build is waiting for you in `/otoroshi/target/scala-2.12/otoroshi.jar` or `/otoroshi/target/universal/otoroshi-1.x.x.zip`\n\n## Build the documentation\n\nfrom the root of your repository run\n\n```sh\nsh ./scripts/doc.sh all\n```\n\n## Format the sources\n\nfrom the 
root of your repository run\n\n```sh\nsh ./scripts/fmt.sh\n```"},{"name":"features.md","id":"/features.md","url":"/features.html","title":"Features ","content":"# Features \n\n@@@ warning\nThis section is under construction\n@@@\n\nAll the features supported by **Otoroshi** are listed below\n\n* Dynamic changes at runtime without full reload \n* Can proxy any HTTP/HTTP2 server (websockets and streamed responses included)\n* Full featured admin REST API to control Otoroshi the way you want, including a Swagger descriptor\n* Gorgeous React Web UI\n* Full end-to-end streaming of HTTP requests and responses\n* Completely non blocking and async internals\n* @ref:[Official Docker image](./getotoroshi/fromdocker.md)\n* @ref:[Multi backend datastore support](./firstrun/datastore.md)\n * Redis\n * In memory\n * Cassandra (experimental support)\n * filedb (not suitable for production usage)\n* Pluggable modules system (plugins) \n * you can create your own modules to change the behavior of Otoroshi per service or globally\n * impacts on access validation, routing, body transformation, apikey extraction\n * listen to internal otoroshi events\n * modules can be written and deployed from the UI\n * lots of modules provided out of the box (see TODO:)\n* Full featured TLS integration\n * @ref:[Dynamic SSL termination](./topics/ssl.md)\n * mTLS support for input and output connections (end-to-end mTLS)\n * extended client certificate validation\n * TLS certificate automation (create, renew, etc) based on a CA certificate\n * ACME/Let's Encrypt support (create, renew)\n * on-the-fly certificate generation based on a CA certificate without request loss\n* Classic features for reverse proxying\n * expose the same service on multiple domain names (including wildcards)\n * support multiple loadbalancing algorithms\n * configurable circuit breaker per service, with timeouts per path and verb\n * @ref:[maintenance page per service](./usage/2-services.md)\n * @ref:[build page per service](./usage/2-services.md)\n * @ref:[force HTTPS usage per service](./usage/2-services.md)\n * @ref:[Add current Api key quotas usage in response headers](./usage/3-apikeys.md)\n * @ref:[Add current latencies in response headers](./usage/3-apikeys.md)\n * headers manipulation\n * routing headers\n * custom html error templates\n * healthcheck per service\n * sink services\n * CORS support\n * GZIP support\n * filtering on http verb and path\n* Api management features\n * throttling / daily quotas / monthly quotas per apikey\n * apikey authorization based on http verb and path\n * global throttling\n * global throttling per ip address\n * global or per service ip address blacklist / whitelist\n * automatic apikey secret rotation\n* Authentication modules\n * LDAP\n * In memory (managed by otoroshi)\n * OAuth2/OIDC\n * modules can be used for admin 
backoffice login\n * WebAuthn support\n * sessions management from UI\n* JWT token utilities\n * validate incoming JWT tokens\n * transform incoming JWT tokens\n * chain multiple validators\n* Analytics / Metrics\n * rich traffic events for each proxied http request\n * @ref:[Live metrics per service and globally](./usage/4-monitor.md) \n * @ref:[Global metrics and analytics (requires elastic server)](./usage/7-metrics.md)\n * @ref:[Traffic events can be sent using webhooks or Kafka topic](./setup/dangerzone.md#analytics-settings)\n * multiple technical metrics exporters (statsd, datadog, prometheus)\n* Audit trail\n * @ref:[Global audit log and alert log on admin actions](./usage/6-audit.md)\n * @ref:[Audit and alerts events can be sent using webhooks or Kafka topic](./setup/dangerzone.md#analytics-settings)\n * @ref:[Alert events can be sent to people by email using an email provider (Mailgun, Mailjet)](./integrations/mailgun.md)\n* Extract information from `User-Agent` headers to enrich traffic events\n* Extract geolocation information (needs an external service) to enrich traffic events\n* Support enterprise http proxies globally and per service\n* TCP proxy with SNI and TLS passthrough support\n* TCP / UDP tunneling\n * add web authentication on top of anything\n * local tunnel client with CLI or UI\n* @ref:[Canary mode per service](./topics/snow-monkey.md)\n* @ref:[Chaos engineering tools with the Snow Monkey](./topics/snow-monkey.md)\n* @ref:[Advanced CleverCloud integration (create services from CleverCloud apps)](./integrations/clevercloud.md) \n"},{"name":"configfile.md","id":"/firstrun/configfile.md","url":"/firstrun/configfile.html","title":"Config. with files","content":"# Config. with files\n\nThere are a lot of things you can configure in Otoroshi. By default, Otoroshi provides a configuration that should be enough for testing purposes. But you'll likely need to update this configuration when you move into production.\n\nIn this page, any configuration property can be set at runtime using a `-D` flag when launching Otoroshi like\n\n```sh\njava -Dhttp.port=8080 -jar otoroshi.jar\n```\n\nor\n\n```sh\n./bin/otoroshi -Dhttp.port=8080 \n```\n\nif you want to define your own config file and use it on an otoroshi instance, use the following flag\n\n```sh\njava -Dconfig.file=/path/to/otoroshi.conf -jar otoroshi.jar\n``` \n\n
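You can also combine both approaches: keep a config file for your static settings and override specific properties with `-D` flags. A sketch, where the file path and domain are placeholders for your own values:\n\n```sh\njava -Dconfig.file=/path/to/otoroshi.conf -Dapp.domain=apis.my.domain -jar otoroshi.jar\n```\n\n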
## Common configuration\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `app.domain` | string | \"oto.tools\" | the domain on which the Otoroshi UI/API is exposed |\n| `app.rootScheme` | string | \"http\" | the scheme on which Otoroshi is exposed, either \"http\" or \"https\" |\n| `app.snowflake.seed` | number | 0 | this number is used to generate unique ids across the cluster. Each Otoroshi instance must have a unique seed. |\n| `app.events.maxSize` | number | 1000 | max number of analytic and alert events stored locally |\n| `app.backoffice.exposed` | boolean | true | does the current Otoroshi instance expose a backoffice UI |\n| `app.backoffice.subdomain` | string | \"otoroshi\" | the subdomain on which the Otoroshi backoffice will be served |\n| `app.backoffice.session.exp` | number | 86400000 | the number of milliseconds before the Otoroshi backoffice session expires |\n| `app.privateapps.subdomain` | string | \"privateapps\" | the subdomain on which private apps UI are served |\n| `app.privateapps.session.exp` | number | 86400000 | the number of milliseconds before the private apps session expires |\n| `app.claim.sharedKey` | string | \"secret\" | the shared secret used for signing the JWT token passed between Otoroshi and backend services |\n| `app.webhooks.size` | number | 100 | number of events sent at most when calling one of the analytics webhooks |\n| `app.throttlingWindow` | number | 10 | time window (in seconds) used to compute throttling quotas for ApiKeys |\n\n## Admin API configuration\n\nWhen Otoroshi starts for the first time, its datastore is empty. As Otoroshi uses Otoroshi to expose its admin REST API, you'll have to provide the details for the admin API exposition. **This part is super important** because if you go to production with the default values, your Otoroshi server won't be secure anymore.\n\n@@@ warning\nYOU HAVE TO CUSTOMIZE THE FOLLOWING VALUES BEFORE GOING TO PRODUCTION !!\n@@@\n\nSome of the following terms will seem obscure to you, but you will learn their meaning in the following chapters :)\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `app.adminapi.exposed` | boolean | true | does the current Otoroshi instance expose an admin API |\n| `app.adminapi.targetSubdomain` | string | \"otoroshi-admin-internal-api\" | the subdomain to which admin API calls will be redirected from `app.adminapi.exposedSubdomain` |\n| `app.adminapi.exposedSubdomain` | string | \"otoroshi-api\" | the subdomain on which the Otoroshi admin API will be exposed |\n| `app.adminapi.defaultValues.backOfficeGroupId` | string | \"admin-api-group\" | the id of the service group that will contain the service descriptors for the Otoroshi admin API |\n| `app.adminapi.defaultValues.backOfficeApiKeyClientId` | string | \"admin-api-apikey-id\" | the client id of the Otoroshi admin API apikey |\n| `app.adminapi.defaultValues.backOfficeApiKeyClientSecret` | string | \"admin-api-apikey-secret\" | the client secret of the Otoroshi admin API apikey |\n| `app.adminapi.defaultValues.backOfficeServiceId` | string | \"admin-api-service\" | the id of the service descriptor for the Otoroshi admin API |\n| `app.adminapi.proxy.https` | boolean | false | whether or not the current Otoroshi instance serves its content over https. This setting is useful for the backoffice UI to access the Otoroshi admin API |\n| `app.adminapi.proxy.local` | boolean | true | whether or not the admin API is accessible through `127.0.0.1`. This setting is useful for the backoffice UI to access the Otoroshi admin API |\n\n
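These default values are used at first startup, when the datastore is still empty, so this is the moment to customize them with `-D` flags. A minimal sketch, where the client id and secret are placeholders you must replace with strong values of your own:\n\n```sh\njava \\\n -Dapp.adminapi.defaultValues.backOfficeApiKeyClientId=my-admin-api-client-id \\\n -Dapp.adminapi.defaultValues.backOfficeApiKeyClientSecret=a-long-random-secret \\\n -jar otoroshi.jar\n```\n\n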
## Secrets config\n\nWhen Otoroshi starts for the first time, its secrets are set to default values. \n\n@@@ warning\nYOU HAVE TO CUSTOMIZE AT LEAST `otoroshi.secret` BEFORE GOING TO PRODUCTION !!\n@@@\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `otoroshi.secret` | string | 'verysecretvaluethatyoumustoverwrite' | default Otoroshi secret. This value is used by default for other secrets |\n| `otoroshi.sessions.secret` | string | `otoroshi.secret` | Secret used to cipher session ids |\n| `play.http.secret.key` | string | `otoroshi.secret` | the secret used to sign the Otoroshi session cookie |\n\n## DB configuration\n\nAs Otoroshi supports multiple datastores, you'll have to provide some details about how to connect to and configure the one you chose.\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `app.storage` | string | \"inmemory\" | what kind of storage engine you want to use. Possible values are `inmemory`, `file`, `redis`, `cassandra` |\n| `app.importFrom` | string | | a file path or a URL to an Otoroshi export file. If the datastore is empty on startup, this file will be used to import data to the empty DB |\n| `app.importFromHeaders` | array | [] | a list of `:` separated headers to use if the `app.importFrom` setting is a URL |\n| `app.initialData` | object | | object representing Otoroshi internal data as exported from the danger zone so you don't need a config file and a data import file |\n| `app.redis.host` | string | \"localhost\" | the host of the redis server |\n| `app.redis.port` | number | 6379 | the port of the redis server |\n| `app.redis.slaves` | array | [] | the list of redis slaves |\n| `app.filedb.path` | string | \"./filedb\" | the path where filedb files will be written |\n| `app.cassandra.hosts` | string | \"127.0.0.1\" | the list of cassandra hosts |\n| `app.cassandra.host` | string | \"127.0.0.1\" | the host of the cassandra server |\n| `app.cassandra.port` | number | 9042 | the port of the cassandra servers |\n| `app.pg.uri` | string | | the uri of your pg database |\n| `app.pg.host` | string | localhost | the host of your pg database |\n| `app.pg.port` | number | 5432 | the port of your pg database |\n| `app.pg.database` | string | otoroshi | the database name |\n| `app.pg.user` | string | otoroshi | the username to connect to your pg database |\n| `app.pg.password` | string | otoroshi | the password to connect to your pg database |\n\n
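For example, to run Otoroshi against a Redis server, you can combine those settings. A minimal sketch, where the host is a placeholder for your own Redis instance:\n\n```sh\njava -Dapp.storage=redis -Dapp.redis.host=redis.my.domain -Dapp.redis.port=6379 -jar otoroshi.jar\n```\n\n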
## Headers configuration\n\nOtoroshi uses a fair amount of http headers in order to work properly. The names of those headers are customizable to fit your needs.\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `otoroshi.headers.trace.label` | string | \"Otoroshi-Viz-From-Label\" | header to pass request tracing information |\n| `otoroshi.headers.trace.from` | string | \"Otoroshi-Viz-From\" | header to pass request tracing information (ip address) |\n| `otoroshi.headers.trace.parent` | string | \"Otoroshi-Parent-Request\" | header to pass request tracing information (parent request id) |\n| `otoroshi.headers.request.adminprofile` | string | \"Otoroshi-Admin-Profile\" | header to pass the admin name when the admin API is called from the Otoroshi backoffice |\n| `otoroshi.headers.request.clientid` | string | \"Otoroshi-Client-Id\" | header to pass the apikey client id |\n| `otoroshi.headers.request.clientsecret` | string | \"Otoroshi-Client-Secret\" | header to pass the apikey client secret |\n| `otoroshi.headers.request.id` | string | \"Otoroshi-Request-Id\" | header containing the id of the current request |\n| `otoroshi.headers.response.proxyhost` | string | \"Otoroshi-Proxied-Host\" | header containing the proxied host |\n| `otoroshi.headers.response.error` | string | \"Otoroshi-Error\" | header containing whether or not the request generated an error |\n| `otoroshi.headers.response.errormsg` | string | \"Otoroshi-Error-Msg\" | header containing the error message, if any |\n| `otoroshi.headers.response.proxylatency` | string | \"Otoroshi-Proxy-Latency\" | header containing the current latency induced by Otoroshi |\n| `otoroshi.headers.response.upstreamlatency` | string | \"Otoroshi-Upstream-Latency\" | header containing the current latency from Otoroshi to the service backend |\n| `otoroshi.headers.response.dailyquota` | string | \"Otoroshi-Daily-Calls-Remaining\" | header containing the number of remaining daily calls (apikey) |\n| `otoroshi.headers.response.monthlyquota` | string | \"Otoroshi-Monthly-Calls-Remaining\" | header containing the number of remaining monthly calls (apikey) |\n| `otoroshi.headers.comm.state` | string | \"Otoroshi-State\" | header containing a random value for secured mode |\n| `otoroshi.headers.comm.stateresp` | string | \"Otoroshi-State-Resp\" | header containing a random value for secured mode |\n| `otoroshi.headers.comm.claim` | string | \"Otoroshi-Claim\" | header containing a JWT token for secured mode |\n| `otoroshi.headers.healthcheck.test` | string | \"Otoroshi-Health-Check-Logic-Test\" | header containing a logic test for healthcheck |\n| `otoroshi.headers.healthcheck.testresult` | string | \"Otoroshi-Health-Check-Logic-Test-Result\" | header containing the result of a logic test for healthcheck |\n| `otoroshi.headers.jwt.issuer` | string | \"Otoroshi\" | the name of the issuer for the JWT token |\n| `otoroshi.headers.canary.tracker` | string | \"Otoroshi-Canary-Id\" | header containing the ID of the canary session if enabled |\n\n
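For instance, to rename the apikey headers, a custom config file could contain something like the following sketch (the header names are placeholders, pick whatever fits your conventions):\n\n```conf\notoroshi {\n headers {\n request {\n clientid = \"X-My-Client-Id\"\n clientsecret = \"X-My-Client-Secret\"\n }\n }\n}\n```\n\n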
## Play specific configuration\n\nAs Otoroshi is a [Play app](https://www.playframework.com/), you should take a look at the [Play configuration documentation](https://www.playframework.com/documentation/2.6.x/Configuration) to tune its internal configuration\n\n| name | type | default value | description |\n| ---- |:----:| -------------- | ----- |\n| `http.port` | number | 8080 | the http port used by Otoroshi. You can use 'disabled' as value if you don't want to use http |\n| `https.port` | number | disabled | the https port used by Otoroshi. You can use 'disabled' as value if you don't want to use https |\n| `http2.enabled` | boolean | false | whether or not http2 is enabled on the Otoroshi server. You need to configure https (listed below) to be able to use it |\n| `play.http.secret.key` | string | \"secret\" | the secret used to sign the Otoroshi session cookie |\n| `play.http.session.secure` | boolean | false | whether or not the Otoroshi backoffice session will be served over https only |\n| `play.http.session.httpOnly` | boolean | true | whether or not the Otoroshi backoffice session cookie will be hidden from Javascript (`httpOnly` flag) |\n| `play.http.session.maxAge` | number | 259200000 | the number of milliseconds before the Otoroshi backoffice session expires |\n| `play.http.session.domain` | string | \".oto.tools\" | the domain on which the Otoroshi backoffice session is authorized |\n| `play.http.session.cookieName` | string | \"otoroshi-session\" | the name of the Otoroshi backoffice session cookie |\n| `play.ws.useragent` | string | \"Otoroshi\" | the user agent sent by Otoroshi if not present on the original http request |\n| `play.server.https.keyStore.path` | string | | the path to the keystore containing the private key and certificate; if not provided, Otoroshi generates a keystore for you |\n| `play.server.https.keyStore.type` | string | JKS | the key store type, defaults to JKS |\n| `play.server.https.keyStore.password` | string | '' | the password, defaults to a blank password |\n| `play.server.https.keyStore.algorithm` | string | | the key store algorithm, defaults to the platform's default algorithm |\n\n## More config. options\n\nSee https://github.com/MAIF/otoroshi/blob/master/otoroshi/conf/base.conf and https://github.com/MAIF/otoroshi/blob/master/otoroshi/conf/application.conf\n\nif you want to configure https on your Otoroshi server, just read the [PlayFramework documentation about it](https://www.playframework.com/documentation/2.6.x/ConfiguringHttps)\n\n
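For instance, to expose Otoroshi over https with your own keystore, here is a sketch using the settings above, where the path and password are placeholders for your own values:\n\n```sh\njava \\\n -Dhttps.port=8443 \\\n -Dplay.server.https.keyStore.path=/path/to/keystore.jks \\\n -Dplay.server.https.keyStore.password=changeit \\\n -jar otoroshi.jar\n```\n\n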
## Example of a custom configuration file\n\n```conf\ninclude \"application.conf\"\n\nhttp.port = 8080\n\napp {\n storage = \"file\"\n importFrom = \"./my-state.json\"\n env = \"prod\"\n domain = \"oto.tools\"\n rootScheme = \"http\"\n snowflake {\n seed = 0\n }\n events {\n maxSize = 1000\n }\n backoffice {\n subdomain = \"otoroshi\"\n session {\n exp = 86400000\n }\n }\n privateapps {\n subdomain = \"privateapps\"\n session {\n exp = 86400000\n }\n }\n adminapi {\n targetSubdomain = \"otoroshi-admin-internal-api\"\n exposedSubdomain = \"otoroshi-api\"\n defaultValues {\n backOfficeGroupId = \"admin-api-group\"\n backOfficeApiKeyClientId = \"admin-api-apikey-id\"\n backOfficeApiKeyClientSecret = \"admin-api-apikey-secret\"\n backOfficeServiceId = \"admin-api-service\"\n }\n }\n claim {\n sharedKey = \"mysecret\"\n }\n filedb {\n path = \"./filedb/state.ndjson\"\n }\n}\n\nplay.http {\n session {\n secure = false\n httpOnly = true\n maxAge = 2592000000\n domain = \".oto.tools\"\n cookieName = \"oto-sess\"\n }\n}\n```\n\n## Reference configuration\n\n@@snip [reference.conf](../snippets/reference.conf) "},{"name":"datastore.md","id":"/firstrun/datastore.md","url":"/firstrun/datastore.html","title":"Choose your datastore","content":"# Choose your datastore\n\nRight now, Otoroshi supports multiple datastores.\n\nYou can choose one datastore over another depending on your use case.\n\nAvailable datastores are the following:\n\n* in memory\n* redis\n* cassandra (experimental support, should be used in cluster mode for leaders)\n* postgresql or any postgresql compatible database like cockroachdb for instance (experimental support, should be used in cluster mode for leaders)\n* filedb (not suitable for production usage)\n\nThe **filedb** datastore is pretty handy for testing purposes, but is not supposed to be used in production mode.\n\nThe **in-memory** datastore is kind of interesting... It can be used for testing purposes, but it is also a good candidate for production because of its speed. You can check the clustering documentation to find out more about it.\n\nThe **redis** datastore is quite nice when you want to easily deploy several Otoroshi instances.\n\nIf you need a datastore more scalable than redis, then you can use the **postgresql** or **cassandra** datastore.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"env.md","id":"/firstrun/env.md","url":"/firstrun/env.html","title":"Config. with ENVs","content":"# Config. with ENVs\n\nNow that you know @ref:[how to configure Otoroshi with the config. file](./configfile.md), every property in the following block can be overridden by an environment variable (an env. variable is written like `${?ENV_VARIABLE}`).\n\n## Reference configuration for env. variables\n\n@@snip [reference-env.conf](../snippets/reference-env.conf) \n"},{"name":"host.md","id":"/firstrun/host.md","url":"/firstrun/host.html","title":"Setup your hosts","content":"# Setup your hosts\n\nBy default, Otoroshi starts with domain `oto.tools` that targets `127.0.0.1`.\n\n
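If `oto.tools` does not resolve from your machine, a sketch of the matching `/etc/hosts` entries for the default settings would be:\n\n```conf\n127.0.0.1 otoroshi.oto.tools otoroshi-api.oto.tools privateapps.oto.tools otoroshi-admin-internal-api.oto.tools\n```\n\n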
Of course you can change the domain; you then have to add the matching values in your `/etc/hosts` file according to the settings you put in the Otoroshi configuration\n\n* `app.domain` => `oto.tools`\n* `app.backoffice.subdomain` => `otoroshi`\n* `app.privateapps.subdomain` => `privateapps`\n* `app.adminapi.exposedSubdomain` => `otoroshi-api`\n* `app.adminapi.targetSubdomain` => `otoroshi-admin-internal-api`\n\nfor instance, if you want to change the default domain and use something like `otoroshi.mydomain.org`, then start otoroshi like \n\n```sh\njava -Dapp.domain=mydomain.org -jar otoroshi.jar\n```\n\n@@@ warning\nOtoroshi cannot be accessed using `http://127.0.0.1:8080` or `http://localhost:8080` because Otoroshi uses Otoroshi to serve its own UI and API. When otoroshi starts with an empty database, it will create a service descriptor for that using `app.domain` and the settings listed on this page and in the @ref:[Config. with files page](./configfile.md) that serve the Otoroshi API and UI on `http://otoroshi-api.${app.domain}` and `http://otoroshi.${app.domain}`.\nOnce the descriptor is saved in the database, if you want to change `app.domain`, you'll have to edit the descriptor in the database or restart Otoroshi with an empty database.\n@@@\n"},{"name":"index.md","id":"/firstrun/index.md","url":"/firstrun/index.html","title":"First run","content":"# First run\n\nNow that you have your own distro of Otoroshi, it's time to run it. \n\nBut before doing so, you'll have to make some choices about some essential stuff in order to have your own customized version of Otoroshi.\n\nLet's start with the datastore\n\n\n@@@ index\n\n* [choose a datastore](./datastore.md)\n* [use custom config file](./configfile.md)\n* [use ENV](./env.md)\n* [initial state](./initialstate.md)\n* [Hosts](./host.md)\n* [Run](./run.md)\n\n@@@"},{"name":"initialstate.md","id":"/firstrun/initialstate.md","url":"/firstrun/initialstate.html","title":"Import initial state","content":"# Import initial state\n\nNow you are almost ready to run Otoroshi for the first time, but maybe you want to import data from a previous Otoroshi installation into your current datastore.\n\nTo do that, you need to add the `app.importFrom` setting to the Otoroshi configuration (or the `$APP_IMPORT_FROM` env. variable).\n\nIt can be a file path or a URL.\n\n
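For instance, here is a sketch of both flavors, the paths being placeholders for your own export file:\n\n```sh\njava -Dapp.importFrom=/home/user/otoroshi-export.json -jar otoroshi.jar\n# or, from a URL\njava -Dapp.importFrom='https://my.config.server/otoroshi-export.json' -jar otoroshi.jar\n```\n\n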
## Example of export\n\n```json\n{\n \"config\": {\n \"lines\": [\"prod\"], \n \"limitConcurrentRequests\": true,\n \"maxConcurrentRequests\": 500,\n \"useCircuitBreakers\": true,\n \"apiReadOnly\": false,\n \"registerFromCleverHook\": false,\n \"u2fLoginOnly\": true,\n \"ipFiltering\": {\n \"whitelist\": [],\n \"blacklist\": []\n },\n \"throttlingQuota\": 100000,\n \"perIpThrottlingQuota\": 500,\n \"analyticsEventsUrl\": null,\n \"analyticsWebhooks\": [],\n \"alertsWebhooks\": [],\n \"alertsEmails\": [],\n \"endlessIpAddresses\": []\n },\n \"admins\": [],\n \"simpleAdmins\": [\n {\n \"username\": \"admin@otoroshi.io\",\n \"password\": \"xxxxxxxxxxxxxxxxx\",\n \"label\": \"Otoroshi Admin\",\n \"createdAt\": 1493971715708\n }\n ],\n \"serviceGroups\": [\n {\n \"id\": \"default\",\n \"name\": \"default-group\",\n \"description\": \"The default group\"\n },\n {\n \"id\": \"admin-api-group\",\n \"name\": \"Otoroshi Admin Api group\",\n \"description\": \"No description\"\n }\n ],\n \"apiKeys\": [\n {\n \"clientId\": \"admin-api-apikey-id\",\n \"clientSecret\": \"admin-api-apikey-secret\",\n \"clientName\": \"Otoroshi Backoffice ApiKey\",\n \"authorizedEntities\": [\"group_admin-api-group\"],\n \"enabled\": true,\n \"throttlingQuota\": 10000000,\n \"dailyQuota\": 10000000,\n \"monthlyQuota\": 10000000,\n \"metadata\": {}\n }\n ],\n \"serviceDescriptors\": [\n {\n \"id\": \"admin-api-service\",\n \"groupId\": \"admin-api-group\",\n \"name\": \"otoroshi-admin-api\",\n \"env\": \"prod\",\n \"domain\": \"oto.tools\",\n \"subdomain\": \"otoroshi-api\",\n \"targets\": [\n {\n \"host\": \"localhost:8080\",\n \"scheme\": \"http\"\n }\n ],\n \"root\": \"/\",\n \"enabled\": true,\n \"privateApp\": false,\n \"forceHttps\": false,\n \"maintenanceMode\": false,\n \"buildMode\": false,\n \"enforceSecureCommunication\": true,\n \"publicPatterns\": [],\n \"privatePatterns\": [],\n \"additionalHeaders\": {\n \"Host\": \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"matchingHeaders\": {},\n \"ipFiltering\": {\n \"whitelist\": [],\n \"blacklist\": []\n },\n \"api\": {\n \"exposeApi\": false\n },\n \"healthCheck\": {\n \"enabled\": false,\n \"url\": \"/\"\n },\n \"metadata\": {}\n }\n ],\n \"errorTemplates\": []\n}\n```\n"},{"name":"run.md","id":"/firstrun/run.md","url":"/firstrun/run.html","title":"Run Otoroshi","content":"# Run Otoroshi\n\nNow you are ready to run Otoroshi. You can run the following command with some tweaks depending on the way you want to configure Otoroshi. If you want to pass a custom configuration file, use the `-Dconfig.file=/path/to/file.conf` flag in the following commands.\n\n## From .zip file\n\n```sh\nunzip otoroshi-dist.zip\ncd otoroshi-vx.x.x\n./bin/otoroshi\n```\n\n## From .jar file\n\nFor Java 8 & Java 11\n\n```sh\njava -jar otoroshi.jar\n```\n\n## From docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi\n```\n\nYou can also pass useful args like:\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi -Dconfig.file=/usr/app/otoroshi/conf/otoroshi.conf -Dlogger.file=/usr/app/otoroshi/conf/otoroshi.xml\n```\n\nIf you want to provide your own config file, you can read @ref:[the documentation about config files](../firstrun/configfile.md).\n\nYou can also provide some ENV variables using the `--env` flag to customize your Otoroshi instance.\n\nThe list of possible env variables is available @ref:[here](../firstrun/env.md).\n\nYou can use a volume to provide configuration like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd):/usr/app/otoroshi/conf\" maif/otoroshi\n```\n\nYou can also use a volume if you choose to use the `filedb` datastore like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd)/filedb:/usr/app/otoroshi/filedb\" maif/otoroshi -Dapp.storage=file\n```\n\nYou can also use a volume if you choose to use export files like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd):/usr/app/otoroshi/imports\" maif/otoroshi -Dapp.importFrom=/usr/app/otoroshi/imports/export.json\n```\n\n## Run examples\n\n```sh\n$ java \\\n -Xms2G \\\n -Xmx8G \\\n -Dhttp.port=8080 \\\n -Dapp.importFrom=/home/user/otoroshi.json \\\n -Dconfig.file=/home/user/otoroshi.conf \\\n -jar ./otoroshi.jar\n\n[warn] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[warn] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[warn] otoroshi-env - Importing from: /home/user/otoroshi.json\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n```\n\nIf you choose to start Otoroshi without importing existing data, Otoroshi will create a new admin user and print the login details in the log. 
When you log into the admin dashboard, Otoroshi will ask you to create another account to avoid security issues.\n\n```sh\n$ java \\\n -Xms2G \\\n -Xmx8G \\\n -Dhttp.port=8080 \\\n -jar otoroshi.jar\n\n[warn] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[warn] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[warn] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / HHUsiF2UC3OPdmg0lGngEv3RrbIwWV5W\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n```\n"},{"name":"frombinaries.md","id":"/getotoroshi/frombinaries.md","url":"/getotoroshi/frombinaries.html","title":"From binaries","content":"# From binaries\n\nIf you want to download the latest version of Otoroshi and its CLI, you can grab them from the release page of the Otoroshi github repository:\n\nGo to https://github.com/MAIF/otoroshi/releases and get the latest version of the `otoroshi-dist.zip` file or `otoroshi.jar` file\n"},{"name":"fromdocker.md","id":"/getotoroshi/fromdocker.md","url":"/getotoroshi/fromdocker.html","title":"From docker","content":"# From docker\n\nIf you're a Docker aficionado, Otoroshi is provided as a Docker image that you can pull directly from Official repos.\n\nfirst, fetch the latest Docker image of Otoroshi:\n\n```sh\ndocker pull maif/otoroshi:1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:latest\n# or \ndocker pull maif/otoroshi:jdk8-1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:jdk11-1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:jdk12-1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:jdk13-1.5.0-alpha.6\n# or \ndocker pull maif/otoroshi:jdk14-1.5.0-alpha.6\n```"},{"name":"fromsources.md","id":"/getotoroshi/fromsources.md","url":"/getotoroshi/fromsources.html","title":"From sources","content":"# From sources\n\nto build Otoroshi from sources, you need the following tools:\n\n* git\n* JDK 8\n* SBT\n* node\n* yarn\n\nOnce you've installed all those tools, go to the [Otoroshi github page](https://github.com/MAIF/otoroshi) and clone the sources:\n\n```sh\ngit clone https://github.com/MAIF/otoroshi.git --depth=1\n```\n\nthen you need to run the `build.sh` script to build the documentation, the React UI and the server:\n\n```sh\nsh ./scripts/build.sh\n```\n\nand that's all, you can grab your Otoroshi package at `otoroshi/target/scala-2.12/otoroshi` or `otoroshi/target/universal/`.\n\nFor those who want to build only parts of Otoroshi, read the following.\n\n## Build the documentation only\n\nGo to the `manual` folder and run:\n\n```sh\nsbt ';clean;paradox'\n```\n\nThe documentation is located at `manual/target/paradox/site/main/`\n\n## Build the React UI\n\nGo to the `otoroshi/javascript` folder and run:\n\n```sh\nyarn install\nyarn build\n```\n\nYou will find the JS bundle at `otoroshi/public/javascripts/bundle/bundle.js`.\n\n## Build the Otoroshi server\n\nGo to the `otoroshi` folder and run:\n\n```sh\nexport SBT_OPTS=\"-Xmx2G -Xss6M\"\nsbt ';clean;compile;dist;assembly'\n```\n\nYou will find your Otoroshi package at `otoroshi/target/scala-2.12/otoroshi` or `otoroshi/target/universal/`.\n"},{"name":"index.md","id":"/getotoroshi/index.md","url":"/getotoroshi/index.html","title":"Get Otoroshi","content":"# Get Otoroshi\n\nThere are several ways to get Otoroshi and run it on your system.\n\nLet's start with a good old build from sources :)\n\n@@@ index\n\n* [from sources](./fromsources.md)\n* [from 
binaries](./frombinaries.md)\n* [from docker](./fromdocker.md)\n\n@@@"},{"name":"index.md","id":"/index.md","url":"/index.html","title":"Otoroshi","content":"# Otoroshi\n\n**Otoroshi** is a layer of lightweight api management on top of a modern http reverse proxy written in Scala and developed by the MAIF OSS team that can handle all the calls to and between your microservices without a service locator and lets you change configuration dynamically at runtime.\n\n\n> *The Otoroshi is a large hairy monster that tends to lurk on the top of the torii gate in front of Shinto shrines. It's a hostile creature, but also said to be the guardian of the shrine and is said to leap down from the top of the gate to devour those who approach the shrine for only self-serving purposes.*\n\n@@@ div { .centered-img }\n[![Build Status](https://travis-ci.org/MAIF/otoroshi.svg?branch=master)](https://travis-ci.org/MAIF/otoroshi) [![Join the chat at https://gitter.im/MAIF/otoroshi](https://badges.gitter.im/MAIF/otoroshi.svg)](https://gitter.im/MAIF/otoroshi?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [ ![Download](https://img.shields.io/github/release/MAIF/otoroshi.svg) ](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi.jar)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n## Installation\n\nYou can download the latest build of Otoroshi as a [fat jar](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi.jar), as a [zip package](https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi-dist.zip) or as a @ref:[docker image](./getotoroshi/fromdocker.md).\n\nYou can install and run Otoroshi with this little bash snippet\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi.jar'\njava -jar otoroshi.jar\n```\n\nor using docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi:1.5.0-alpha.6\n```\n\nnow open your browser to http://otoroshi.oto.tools:8080/, **log in with the credentials generated in the logs** and explore by yourself, if you want better instructions, just go to the @ref:[Quick Start](./quickstart.md) or directly to the @ref:[installation instructions](./getotoroshi/index.md)\n\n## Documentation\n\n* @ref:[About Otoroshi](./about.md)\n* @ref:[Architecture](./archi.md)\n* @ref:[Features](./features.md)\n* @ref:[Try Otoroshi in 5 minutes](./quickstart.md)\n* @ref:[Get Otoroshi](./getotoroshi/index.md)\n* @ref:[First run](./firstrun/index.md)\n* @ref:[Setup Otoroshi](./setup/index.md)\n* @ref:[Using Otoroshi](./usage/index.md)\n* @ref:[Third party Integrations](./integrations/index.md)\n* @ref:[Detailed topics](./topics/index.md)\n* @ref:[Admin REST API](./api.md)\n* @ref:[Deploy to production](./deploy/index.md)\n* @ref:[Developing Otoroshi](./dev.md)\n\n## Discussion\n\nJoin the [Otoroshi](https://gitter.im/MAIF/otoroshi) channel on the [MAIF Gitter](https://gitter.im/MAIF)\n\n## Sources\n\nThe sources of Otoroshi are available on [Github](https://github.com/MAIF/otoroshi).\n\n## Logo\n\nYou can find the official Otoroshi logo [on GitHub](https://github.com/MAIF/otoroshi/blob/master/resources/otoroshi-logo.png). 
The Otoroshi logo has been created by François Galioto ([@fgalioto](https://twitter.com/fgalioto))\n\n## Changelog\n\nEvery release, along with the migration instructions, is documented on the [Github Releases](https://github.com/MAIF/otoroshi/releases) page.\n\n## Patrons\n\nThe work on Otoroshi was funded by MAIF with the help of the community.\n\n## Licence\n\nOtoroshi is Open Source and available under the [Apache 2 License](https://opensource.org/licenses/Apache-2.0)\n\n@@@ index\n\n* [About Otoroshi](about.md)\n* [Architecture](archi.md)\n* [Features](features.md)\n* [Quickstart](quickstart.md)\n* [Get otoroshi](getotoroshi/index.md)\n* [First run](firstrun/index.md)\n* [Setup](setup/index.md)\n* [Using Otoroshi](usage/index.md)\n* [Integrations](integrations/index.md)\n* [Detailed topics](topics/index.md)\n* [Admin REST API](api.md)\n* [Deploy to production](deploy/index.md)\n* [Developing Otoroshi](./dev.md)\n\n@@@\n"},{"name":"analytics.md","id":"/integrations/analytics.md","url":"/integrations/analytics.html","title":"Analytics","content":"# Analytics\n\nEach action and request on Otoroshi creates events that can be sent outside of Otoroshi for further usage. Those events can be sent using a webhook and/or through a Kafka topic.\n\n## Push events to Elasticsearch\n\n@@@ warning\nOtoroshi supports only Elasticsearch versions under 7.0\n@@@\n\nYou can use elastic search to store otoroshi events. To do this you have to configure the access to elasticsearch from `settings (cog icon) / Danger Zone` and expand the `Analytics: Elastic cluster (write)` section.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Read events from Elasticsearch\n\nYou can also read otoroshi events back from elastic search. To do this you have to configure the access to elasticsearch from `settings (cog icon) / Danger Zone` and expand the `Analytics: Elastic dashboard datasource (read)` section.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Push events to WebHooks\n\nGo to `settings (cog icon) / Danger Zone` and expand the `Analytics: Webhooks` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nHere you can configure the URL of the webhook and its headers if needed.\n\n## Push events to Kafka\n\nEvents can also be sent through a Kafka topic. Go to `settings (cog icon) / Danger Zone` and expand the `Analytics: Kafka` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the form; the default values for topic names are:\n\n* `otoroshi-alerts`\n* `otoroshi-analytics`\n* `otoroshi-audits`\n\n@@@ warning\nIf you use a truststore/keystore to access your kafka instances, the paths should be absolute and refer to host paths. 
You can also choose a client certificate from otoroshi for client authentication.\n@@@\n"},{"name":"clevercloud.md","id":"/integrations/clevercloud.md","url":"/integrations/clevercloud.html","title":"Clever Cloud","content":"# Clever Cloud\n\nOtoroshi provides an integration with Clever Cloud to easily create services based on applications deployed on your Clever Cloud account.\nGo to `settings (cog icon) / Danger Zone` and expand the `CleverCloud settings` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the form with your CleverCloud credentials (https://www.clever-cloud.com/doc/clever-cloud-apis/cc-api/) and your CleverCloud `organization id`.\n\nOnce it's done, you will see a new menu in the side bar.\n\n@@@ div { .centered-img }\n\n@@@\n\nIf you click on it, you'll see a page listing all your apps deployed on Clever Cloud with buttons to create new services with the app as the target.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou will also see a new button in the `Target` section of services to attach Clever Cloud applications as targets for a service.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"index.md","id":"/integrations/index.md","url":"/integrations/index.html","title":"Third party Integrations","content":"# Third party Integrations\n\nOtoroshi provides some settings to interact with some third party systems.\n\n@@@ index\n\n* [Analytics](./analytics.md)\n* [Mailgun / Mailjet](./mailgun.md)\n* [StatsD / Datadog](./statsd.md)\n* [clevercloud](./clevercloud.md)\n\n@@@\n"},{"name":"mailgun.md","id":"/integrations/mailgun.md","url":"/integrations/mailgun.html","title":"Mailgun","content":"# Mailgun\n\nIf you want to receive Otoroshi alerts by email, you have to configure Otoroshi with your Mailgun credentials. Go to `settings (cog icon) / Danger Zone` and expand the `Mailgun settings` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the form with the information provided on the `domain information` page on Mailgun, located at https://app.mailgun.com/app/domains/my.domain.\n\nThen, expand the `Alert settings` section and add email addresses separated by commas in the `Alert emails` field. **Don't forget to save.**\n\n@@@ div { .centered-img }\n\n@@@\n\n# Mailjet\n\nOtoroshi also supports Mailjet. 
Just select `Mailjet` in `Mailer settings type` and fill the requested fields."},{"name":"statsd.md","id":"/integrations/statsd.md","url":"/integrations/statsd.html","title":"StatsD / Datadog","content":"# StatsD / Datadog\n\nOtoroshi provides a StatsD integration to monitor some technical metrics across all your Otoroshi instances.\nGo to `settings (cog icon) / Danger Zone` and expand the `Statsd settings` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nAdd the host and port of the Statsd agent on your system.\nIf you're using Datadog, don't forget to check the `Datadog` switch.\n"},{"name":"quickstart.md","id":"/quickstart.md","url":"/quickstart.html","title":"Try Otoroshi in 5 minutes","content":"# Try Otoroshi in 5 minutes\n\nwhat you will need:\n\n* JDK 11\n* curl\n* jq\n* 5 minutes of free time\n\n## The elevator pitch\n\nOtoroshi is an awesome reverse proxy built with Scala that handles all the calls to and between your microservices without a service locator and lets you change configuration dynamically at runtime.\n\n## Download otoroshi\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.0-alpha.6/otoroshi.jar'\n```\n\nIf you don’t/can’t have these tools on your machine, you can start a sandboxed environment with the following command\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi\n```\n\n## Start otoroshi\n\nto start otoroshi, just run the following command \n\n```sh\njava -jar otoroshi.jar\n```\n\nthis will start an in-memory otoroshi instance with a generated password that will be printed in the logs. You can set the password with the following flags\n\n```sh\njava -Dapp.adminLogin=admin@foo.bar -Dapp.adminPassword=password -jar otoroshi.jar\n```\n\nif you want to have otoroshi content persisted between launches without having to setup a datastore, just use the following flag\n\n```sh\njava -Dapp.storage=file -jar otoroshi.jar\n```\n\nas a result, you will see something like\n\n```log\n$ java -jar otoroshi.jar\n\n[info] otoroshi-env - Otoroshi version 1.5.0-alpha.6\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[warn] otoroshi-env - Scripting is enabled on this Otoroshi instance !\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / xol1Kwjzqe9OXjqDxxPPbPb9p0BPjhCO\n[info] play.api.Play - Application started (Prod)\n[info] otoroshi-script-manager - Compiling and starting scripts ...\n[info] otoroshi-script-manager - Finding and starting plugins ...\n[info] otoroshi-script-manager - Compiling and starting scripts done in 18 ms.\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n[info] otoroshi-script-manager - Finding and starting plugins done in 4681 ms.\n[info] otoroshi-env - Generating CA certificate for Otoroshi self signed certificates ...\n[info] otoroshi-env - Generating a self signed SSL certificate for https://*.oto.tools ...\n```\n\n## Log into the admin UI\n\njust go to http://otoroshi.oto.tools:8080 and log in with the credentials printed in the logs\n\n## Create your first service\n\nto create your first service you can either do it using the admin UI or using the admin API. 
Let's use the admin API.\n\nBy default, otoroshi registers an admin apikey with the `admin-api-apikey-id:admin-api-apikey-secret` value (those values can be tuned at first startup). Of course you can create your own with\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/apikeys/_template \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n -d '{\n \"clientId\": \"quickstart\",\n \"clientSecret\": \"secret\",\n \"clientName\": \"quickstart-apikey\",\n \"authorizedEntities\": [\"group_admin-api-group\"]\n}' | jq\n```\n\nnow let's create a new service to proxy `https://maif.github.io` on domain `maif.oto.tools`. This service will be public and will not require an apikey to pass\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/_template \\\n -u quickstart:secret \\\n -d '{\n \"name\": \"quickstart-service\", \n \"hosts\": [\"maif.oto.tools\"], \n \"targets\": [{ \"host\": \"maif.github.io\", \"scheme\": \"https\" }], \n \"publicPatterns\": [\"/.*\"]\n}' | jq\n```\n\nnow just go to `http://maif.oto.tools:8080` to check if it works\n\n## Create a service to proxy an api\n\nnow we will proxy the api at `https://aws.random.cat/meow` that returns random cat pictures and make it use apikeys.\n\n```sh\n$ curl https://aws.random.cat/meow | jq\n\n{\n \"file\": \"https://purr.objects-us-east-1.dream.io/i/20161003_163413.jpg\"\n}\n```\n\nFirst let's create the service \n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/_template \\\n -u quickstart:secret \\\n -d '{\n \"id\": \"cats-api\",\n \"name\": \"cats-api\", \n \"hosts\": [\"cats.oto.tools\"], \n \"targets\": [{ \"host\": \"aws.random.cat\", \"scheme\": \"https\" }],\n \"root\": \"/meow\"\n}' | jq\n```\n\nbut if you try to use it, you will get something like:\n\n```sh\n$ curl http://cats.oto.tools:8080 | jq\n\n{\n \"Otoroshi-Error\": \"No ApiKey provided\"\n}\n```\n\nthat's because the api is not public and needs apikeys to access it. So let's create an apikey\n\n```sh\ncurl -X POST -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/apikeys/_template \\\n -u quickstart:secret \\\n -d '{\n \"clientId\": \"apikey1\",\n \"clientSecret\": \"secret\",\n \"clientName\": \"quickstart-apikey-1\",\n \"authorizedEntities\": [\"group_default\"]\n}' | jq\n``` \n\nand try again\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret | jq\n\n{\n \"file\": \"https://purr.objects-us-east-1.dream.io/i/vICG4.gif\"\n}\n```\n\nnow let's try to play with quotas. 
First, we need to know the current state of the apikey quotas, so let's enable the otoroshi headers about consumption\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/sendOtoroshiHeadersBack\", \"value\": true }\n]' | jq\n```\n\nand retry the call with \n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 200 OK\nDate: Tue, 10 Mar 2020 12:56:08 GMT\nServer: Apache\nExpires: Mon, 26 Jul 1997 05:00:00 GMT\nCache-Control: no-cache, must-revalidate\nOtoroshi-Request-Id: 1237361356529729796\nOtoroshi-Proxy-Latency: 79\nOtoroshi-Upstream-Latency: 416\nOtoroshi-Request-Timestamp: 2020-03-10T13:55:11.195+01:00\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET\nOtoroshi-Daily-Calls-Remaining: 9999998\nOtoroshi-Monthly-Calls-Remaining: 9999998\nContent-Type: application/json\nContent-Length: 71\n\n{\"file\":\"https:\\/\\/purr.objects-us-east-1.dream.io\\/i\\/beerandcat.jpg\"}\n```\n\nnow let's try to allow only 10 requests per day on the apikey\n\n```sh\ncurl -X PATCH -H 'Content-Type: application/json' \\\n http://otoroshi-api.oto.tools:8080/api/services/cats-api/apikeys/apikey1 \\\n -u quickstart:secret \\\n -d '[\n { \"op\": \"replace\", \"path\": \"/dailyQuota\", \"value\": 10 }\n]' | jq\n```\n\nthen try to call your api again\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 200 OK\nDate: Tue, 10 Mar 2020 13:00:01 GMT\nServer: Apache\nExpires: Mon, 26 Jul 1997 05:00:00 GMT\nCache-Control: no-cache, must-revalidate\nOtoroshi-Request-Id: 1237362334930829633\nOtoroshi-Proxy-Latency: 71\nOtoroshi-Upstream-Latency: 92\nOtoroshi-Request-Timestamp: 2020-03-10T13:59:04.456+01:00\nAccess-Control-Allow-Origin: *\nAccess-Control-Allow-Methods: GET\nOtoroshi-Daily-Calls-Remaining: 7\nOtoroshi-Monthly-Calls-Remaining: 9999997\nContent-Type: application/json\nContent-Length: 66\n\n{\"file\":\"https:\\/\\/purr.objects-us-east-1.dream.io\\/i\\/C1XNK.jpg\"}\n```\n\neventually you will get something like\n\n```sh\n$ curl http://cats.oto.tools:8080 -u apikey1:secret --include\n\nHTTP/1.1 429 Too Many Requests\nOtoroshi-Error: true\nOtoroshi-Error-Msg: You performed too much requests\nOtoroshi-State-Resp: --\nDate: Tue, 10 Mar 2020 12:59:11 GMT\nContent-Type: application/json\nContent-Length: 52\n\n{\"Otoroshi-Error\":\"You performed too much requests\"}\n```"},{"name":"admin.md","id":"/setup/admin.md","url":"/setup/admin.html","title":"Manage admin users","content":"# Manage admin users\n\n@@@ warning\nThis section is under rewrite. 
The following content is deprecated and the UI may have changed\n@@@\n\n## Create an admin user after the first run\n\nClick on the `Create an admin user` warning popup, or go to `settings (cog icon) / Admins`.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou will see the list of registered admin users.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on `Register admin`.\n\n@@@ div { .centered-img }\n\n@@@\n\nNow, enter information about the new admin you want to create.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on `Register Admin`.\n\n@@@ div { .centered-img }\n\n@@@\n\nNow, you can discard the generated admin, confirm, then log out, log in with the admin user you have just created and the danger popup will go away\n\n@@@ div { .centered-img }\n\n@@@\n\n## Create an admin user with U2F device login\n\nGo to `settings (cog icon) / Admins`, click on `Register Admin`.\n\n@@@ div { .centered-img }\n\n@@@\n\nEnter information about the new admin you want to create.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on `Register Admin with WebAuthn`.\n\nOtoroshi will ask you to plug in your FIDO U2F device and touch it to complete the registration.\n\n@@@ div { .centered-img }\n\n@@@\n\n@@@ warning\nTo be able to use FIDO U2F devices, Otoroshi must be served over https\n@@@\n\n## Discard admin user\n\nGo to `settings (cog icon) / Admins`, at the bottom of the page, you will see a list of admin users that you can discard. Just click on the `Discard User` button on the right side of the row and confirm that you actually want to discard an admin user.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Admin sessions management\n\nGo to `settings (cog icon) / Admins sessions`, you will see a list of active admin user sessions\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can either discard single sessions one by one using the `Discard Session` button on each targeted row of the list or discard all active sessions using the `Discard all sessions` button at the top of the page.\n"},{"name":"dangerzone.md","id":"/setup/dangerzone.md","url":"/setup/dangerzone.html","title":"Configure the Danger zone","content":"# Configure the Danger zone\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated and the UI may have changed\n@@@\n\nNow that you have an actual admin account, go to `setting (cog icon) / Danger Zone` in order to configure your Otoroshi instance.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Commons settings\n\nThis part allows you to configure various things:\n\n* `No Auth0 login` => allows you to disable Auth0 login to the Otoroshi admin dashboard\n* `API read only` => disables `writes` on the Otoroshi admin api\n* `Use HTTP streaming` => use http streaming for each response. It should always be true\n* `Auto link default` => when no group is specified on a service, it will be assigned to the default one\n* `Use circuit breakers` => allow usage of circuit breakers for each service\n* `Log analytics on servers` => all analytics will be logged on the servers\n* `Use new http client as the default Http client` => all http calls will use the new http client by default\n* `Enable live metrics` => enable live metrics in the Otoroshi cluster. 
Performs a lot of writes in the datastore\n* `Digitus medius` => change the character of endless HTTP responses from `0` to `🖕`\n* `Limit concurrent requests` => allows you to specify a max number of concurrent requests on an Otoroshi instance to avoid overloading\n* `Max concurrent requests` => max allowed number of concurrent requests on an Otoroshi instance to avoid overloading\n* `Max HTTP/1.0 response size` => max size of HTTP/1.0 responses, because they are memory mapped\n* `Max local events` => number of events stored locally (alerts and audits)\n* `lines` => at least one (`prod`). For other lines, it will allow you to declare urls like `service.line.domain.tld`. For prod it will be `service.domain.tld`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Whitelist / blacklist settings\n\nOtoroshi is capable of filtering requests by ip address, allowing or blocking requests.\n\nOtoroshi also provides a fun feature called `Endless HTTP responses`. If you put an ip address in that field, then, for any http request on Otoroshi, every response will be 128 GB of `0`.\n\n@@@ div { .centered-img }\n\n@@@\n\n@@@ note\nNote that you may provide ip addresses with wildcards like the following `42.42.*.42` or `42.42.42.*` or `42.42.*.*`\n@@@\n\n## Global throttling settings\n\nOtoroshi is capable of managing throttling at a global level. Here you can configure the number of authorized requests per second on a single Otoroshi instance and the number of authorized requests per second for a unique ip address.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Analytics settings\n\nOne of the major features of Otoroshi is its ability to generate internal events. Those events are not stored in Otoroshi's datastore but can be sent using `WebHooks`. You can configure those `WebHooks` from the `Danger Zone`.\n\nOtoroshi is also capable of reading and displaying analytics from another MAIF product called `Omoïkane`. As Omoïkane is not publicly available yet, Otoroshi is also capable of storing events in an [Elastic](https://www.elastic.co/) cluster. For more information about analytics and what it does, just go to the @ref:[detailed chapter](../integrations/analytics.md)\n\n## Kafka settings\n\nOne of the major features of Otoroshi is its ability to generate internal events. These events are not stored in Otoroshi's datastore but can be sent using a [Kafka message broker](https://kafka.apache.org/). You can configure Kafka access from the `Danger Zone`.\n\nBy default, Otoroshi's alert events will be sent on the `otoroshi-alerts` topic, Otoroshi's audit events will be sent on the `otoroshi-audits` topic and Otoroshi's traffic events will be sent on the `otoroshi-analytics` topic.\n\n@@@ warning\nKeystore and truststore paths are optional local paths on the server hosting Otoroshi\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\nFor more information about Kafka integration and what it does, just go to the @ref:[detailed chapter](../integrations/analytics.md)\n\n## Alerts settings\n\nEach time a dangerous action or something unusual is performed on Otoroshi, it will create an alert and store it. You can be notified for each of these alerts using `WebHooks` or emails. To do so, just add the `WebHook` URL and optional headers in the `Danger Zone`, or any email address you want (you can add more than one email address).\n\nYou can enable mutual authentication via the `Use mTLS` button and add your certificates. 
The `TLS loose` option will block all untrustful ssl configs, while the `TrustAll` option allows any server certificate, even self-signed ones.\n\n@@@ div { .centered-img }\n\n@@@\n\n## StatsD settings\n\nOtoroshi is capable of sending internal metrics to a StatsD agent. Just put the host and port of your StatsD agent in the `Danger Zone` to collect these metrics. If you're using [Datadog](https://www.datadoghq.com), don't forget to check the dedicated button :)\n\n@@@ div { .centered-img }\n\n@@@\n\nFor more information about StatsD integration and what it does, just go to the @ref:[detailed chapter](../integrations/statsd.md)\n\n## Mailer settings\n\nIf you want to send emails for every alert generated by Otoroshi, you need to configure your Mailgun credentials in the `Danger Zone`. These parameters are provided in your Mailgun domain dashboard (i.e. https://app.mailgun.com/app/domains/my.domain.oto.tools) in the information section.\n\n@@@ div { .centered-img }\n\n@@@\n\nFor more information about Mailgun integration and what it does, just go to the @ref:[detailed chapter](../integrations/mailgun.md)\n\n## CleverCloud settings\n\nAs we built our products to run on Clever-Cloud, Otoroshi has a close integration with Clever-Cloud. In this section of the `Danger Zone` you can configure how to access the Clever-Cloud API.\n\nTo generate the needed values, please refer to the [Clever-Cloud documentation](https://www.clever-cloud.com/doc/clever-cloud-apis/cc-api/)\n\n@@@ div { .centered-img }\n\n@@@\n\nFor more information about Clever-Cloud integration and what it does, just go to the @ref:[detailed chapter](../integrations/clevercloud.md)\n\n## Import / exports and panic mode\n\nFor more details about imports and exports, please go to the @ref:[dedicated chapter](../usage/8-importsexports.md)\n\nAbout panic mode, it's an unusual feature that allows you to discard all current admin sessions, allows only admin users with U2F devices to log back in, and puts the API in read-only mode. Only a person who has access to Otoroshi's datastore will be able to turn it back on.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"index.md","id":"/setup/index.md","url":"/setup/index.html","title":"Setup Otoroshi","content":"# Setup Otoroshi\n\nNow that Otoroshi is running, you are ready to log into the Otoroshi admin dashboard and setup your instance. Just go to:\n\nhttp://otoroshi.oto.tools:8080\n\nand you will see the login page\n\n@@@ div { .centered-img }\n\n@@@\n\n@@@ warning\nUse the credentials generated in Otoroshi **logs** during the **first run**.\n@@@\n\n@@@ div { .centered-img #first-login-example }\n\n@@@\n\n(of course, you can change this url depending on the configuration you provided to Otoroshi).\n\nOnce logged in, the first screen you'll see should look like:\n\n@@@ div { .centered-img #first-login }\n\n@@@\n\nAs you can see, Otoroshi is not really happy about you being logged in with a generated admin account.\n\nBut we will fix that in the next chapter\n\n@@@ index\n\n* [create admins](./admin.md)\n* [configure danger zone](./dangerzone.md)\n\n@@@\n"},{"name":"clustering.md","id":"/topics/clustering.md","url":"/topics/clustering.html","title":"Otoroshi clustering","content":"# Otoroshi clustering\n\nOtoroshi can work as a cluster by default as you can spin up many Otoroshi servers using the same datastore or datastore cluster. In that case any instance is capable of serving services, the Otoroshi admin UI, the Otoroshi admin API, etc.\n\nBut sometimes, this is not enough. 
So Otoroshi provides an additional clustering model named `Leader / Workers`. The leader cluster ([control plane](https://en.wikipedia.org/wiki/Control_plane)) is composed of Otoroshi instances backed by a datastore like Redis, PostgreSQL or Cassandra and is in charge of all `writes` to the datastore through the Otoroshi admin UI and API. The worker cluster ([data plane](https://en.wikipedia.org/wiki/Forwarding_plane)) is composed of horizontally scalable Otoroshi instances, backed by a super fast in-memory datastore, with the sole purpose of routing traffic to your services based on data synced from the leader cluster. With this distributed Otoroshi version, you can reach your goals of high availability, scalability and security.\n\nOtoroshi clustering only uses http internally (right now) for communication between leader and worker instances, so it is fully compatible with PaaS providers like [Clever-Cloud](https://www.clever-cloud.com/en/) that only provide one external port for http traffic.\n\n@@@ div { .centered-img }\n\n\n*Fig. 1: Simplified view*\n@@@\n\n@@@ div { .centered-img }\n\n\n*Fig. 2: Deployment view*\n@@@\n\n## Cluster configuration\n\n```hocon\notoroshi {\n cluster {\n mode = \"leader\" # can be \"off\", \"leader\", \"worker\"\n compression = 4 # compression of the data sent between leader cluster and worker cluster. From -1 (disabled) to 9\n leader {\n name = ${?CLUSTER_LEADER_NAME} # name of the instance, if none, it will be generated\n urls = [\"http://127.0.0.1:8080\"] # urls to contact the leader cluster\n host = \"otoroshi-api.oto.tools\" # host of the otoroshi api in the leader cluster\n clientId = \"apikey-id\" # otoroshi api client id\n clientSecret = \"secret\" # otoroshi api client secret\n cacheStateFor = 4000 # state is cached during (ms)\n }\n worker {\n name = ${?CLUSTER_WORKER_NAME} # name of the instance, if none, it will be generated\n retries = 3 # number of retries when calling leader cluster\n timeout = 2000 # timeout when calling leader cluster\n state {\n retries = ${otoroshi.cluster.worker.retries} # number of retries when calling leader cluster on state sync\n pollEvery = 10000 # interval of time (ms) between 2 state sync\n timeout = ${otoroshi.cluster.worker.timeout} # timeout when calling leader cluster on state sync\n }\n quotas {\n retries = ${otoroshi.cluster.worker.retries} # number of retries when calling leader cluster on quotas sync\n pushEvery = 2000 # interval of time (ms) between 2 quotas sync\n timeout = ${otoroshi.cluster.worker.timeout} # timeout when calling leader cluster on quotas sync\n }\n }\n }\n}\n```\n\nyou can also use many env. 
variables to configure the Otoroshi cluster\n\n```hocon\notoroshi {\n cluster {\n mode = ${?CLUSTER_MODE}\n compression = ${?CLUSTER_COMPRESSION}\n leader {\n name = ${?CLUSTER_LEADER_NAME}\n host = ${?CLUSTER_LEADER_HOST}\n url = ${?CLUSTER_LEADER_URL}\n clientId = ${?CLUSTER_LEADER_CLIENT_ID}\n clientSecret = ${?CLUSTER_LEADER_CLIENT_SECRET}\n groupingBy = ${?CLUSTER_LEADER_GROUP_BY}\n cacheStateFor = ${?CLUSTER_LEADER_CACHE_STATE_FOR}\n stateDumpPath = ${?CLUSTER_LEADER_DUMP_PATH}\n }\n worker {\n name = ${?CLUSTER_WORKER_NAME}\n retries = ${?CLUSTER_WORKER_RETRIES}\n timeout = ${?CLUSTER_WORKER_TIMEOUT}\n state {\n retries = ${?CLUSTER_WORKER_STATE_RETRIES}\n pollEvery = ${?CLUSTER_WORKER_POLL_EVERY}\n timeout = ${?CLUSTER_WORKER_POLL_TIMEOUT}\n }\n quotas {\n retries = ${?CLUSTER_WORKER_QUOTAS_RETRIES}\n pushEvery = ${?CLUSTER_WORKER_PUSH_EVERY}\n timeout = ${?CLUSTER_WORKER_PUSH_TIMEOUT}\n }\n }\n }\n}\n```\n\n@@@ warning\nYou **should** use HTTPS exposition for the Otoroshi API that will be used for data sync, as sensitive information is exchanged between the control plane and the data plane.\n@@@\n\n@@@ warning\nYou **must** have the same cluster configuration on every Otoroshi instance (worker/leader) with only names and mode changed for each instance. Some things in leader/worker are computed using the configuration of their counterpart worker/leader.\n@@@\n\n## Cluster UI\n\nOnce an Otoroshi instance is launched as a cluster leader, a new row of live metrics tiles will be available on the home page of the Otoroshi admin UI.\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can also access a more detailed view of the cluster at `Settings (cog icon) / Cluster View`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Run examples\n\nfor the leader \n\n```sh\njava -Dhttp.port=8091 -Dhttps.port=9091 -Dotoroshi.cluster.mode=leader -jar otoroshi.jar\n```\n\nfor a worker\n\n```sh\njava -Dhttp.port=8092 -Dhttps.port=9092 -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0=http://127.0.0.1:8091 -jar otoroshi.jar\n```\n"},{"name":"index.md","id":"/topics/index.md","url":"/topics/index.html","title":"Detailed topics","content":"# Detailed topics\n\nIn this section, you will find information about various topics supported by Otoroshi\n\n@@@ index\n\n* [Chaos engineering with the Snow Monkey](./snow-monkey.md)\n* [JWT Tokens verification](./jwt-verifications.md)\n* [SSL/TLS termination with Otoroshi](./ssl.md)\n* [Mutual TLS with Otoroshi](./mtls.md)\n* [Otoroshi clustering](./clustering.md)\n* [Otoroshi plugins](./plugins.md)\n* [Otoroshi monitoring](./monitoring.md)\n\n@@@\n"},{"name":"jwt-verifications.md","id":"/topics/jwt-verifications.md","url":"/topics/jwt-verifications.html","title":"JWT Tokens verification","content":"# JWT Tokens verification\n\nSometimes, it can be pretty useful to verify Jwt tokens coming from other providers on some services. Otoroshi provides a tool to do that per service. In the Service descriptor page, you can find a `Jwt token Verification` section dedicated to this topic.\n\n## Service descriptor local verifications\n\n@@@ div { .centered-img }\n\n@@@\n\nIn this section you can select the type of verification: you can choose whether the verifier is local to the `Service descriptor` or references a global one.\n\nYou can also enable/disable jwt verification and activate strict mode. In strict mode, requests will be rejected if the jwt token is not found.\n\n### Jwt token location\n\nYou can use the `Source` selector to specify where the Jwt token can be found. 
\n\n* in a query string param\n\n@@@ div { .centered-img }\n\n@@@\n\n* in a header\n\n@@@ div { .centered-img }\n\n@@@\n\n* in a cookie\n\n@@@ div { .centered-img }\n\n@@@\n\n### Jwt signing\n\nYou can use the `Algo.` selector to specify the signing algorithm to use to verify the token\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can choose between\n\n* Hmac + SHA256\n* Hmac + SHA384\n* Hmac + SHA512\n* RSA + SHA256\n* RSA + SHA384\n* RSA + SHA512\n* Elliptic Curve + SHA256\n* Elliptic Curve + SHA384\n* Elliptic Curve + SHA512\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can use syntax like `${env.MY_ENV_VAR}` or `${config.my.config.path}` to provide secret/keys values. \n\n\n### Just verify signature and fields value\n\nUsing the `Verif. strategy` selector, you can choose `Verify jwt token`. This will verify that the token is signed using the settings from the `jwt signing` section and the value of the fields provided in `Verify token fields`. Then the token will be sent to the target just like that.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Re-sign the token\n\nUsing the `Verif. strategy` selector, you can choose `Verify and re-sign jwt token`. This will verify that the token is signed using the settings from the `jwt signing` section and the value of the fields provided in `Verify token fields`. Then the token will be re-signed using the settings provided in `Re-sign algo` and will be sent to the target.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Transform the token\n\nUsing the `Verif. strategy` selector, you can choose `Verify, re-sign and transform jwt token`. This will verify that the token is signed using the settings from the `jwt signing` section and the value of the fields provided in `Verify token fields`. Then the token will be re-signed using the settings provided in `Re-sign algo`. You can also change the location of the token using `Token location`, remove fields using `Remove token fields`, set fields value using `Set token fields` and even rename fields using `Rename token fields`.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also use a mini expression language in `Set token fields`. You just have to add expressions in values like `${expression}`. Supported expressions are the following:\n\n* `${date}` => set the current date\n* `${date.format('dd/MM/yyyy')}` => set the current date formatted with the format you want\n* `${token.fieldName}` => get the value of the field named `fieldName`\n* `${token.fieldName.replace('a', 'b')}` => get the value of the field named `fieldName` and replace `a` with `b`\n* `${token.fieldName.replaceAll('[0-9]', '-')}` => get the value of the field named `fieldName` and replace digits with `-`\n\nyou can of course use multiple expressions in one field like `my-value-is-${date}-with${token.user}`
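\n\nTo make these semantics concrete, here is a purely hypothetical illustration (not Otoroshi code, and the rendering of `${date}` below is illustrative) of a token payload before and after such a transformation:\n\n```js\n// hypothetical payload of the incoming token\nconst input = { user: 'bob', fieldName: 'abc123' };\n\n// with `Set token fields` entries such as:\n// stamp -> ${date}\n// masked -> ${token.fieldName.replaceAll('[0-9]', '-')}\n// greet -> my-value-is-${date}-with${token.user}\n// the payload of the re-signed token could look like:\nconst output = {\n user: 'bob',\n fieldName: 'abc123',\n stamp: '2019-12-01T10:00:00.000+01:00', // illustrative rendering of ${date}\n masked: 'abc---', // digits replaced with '-'\n greet: 'my-value-is-2019-12-01T10:00:00.000+01:00-withbob'\n};\n```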
\n\n## Global verifications\n\nYou can create global jwt verifiers and reference them in your services (from the `Type` selector). When you set the type of verification to `Reference to a global definition`, you can choose an existing global jwt verifier\n\n@@@ div { .centered-img }\n\n@@@\n\nTo create a global verifier, go to `Settings (cog icon) / Global Jwt Verifiers` and it will display the list of global verifiers.\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can then create, edit or delete verifiers\n\n@@@ div { .centered-img }\n\n@@@\n\n"},{"name":"monitoring.md","id":"/topics/monitoring.md","url":"/topics/monitoring.html","title":"Monitoring Otoroshi","content":"# Monitoring Otoroshi\n\nThe Otoroshi API exposes two endpoints for monitoring:\n\n* `/health`: the health of the Otoroshi instance\n* `/metrics`: the metrics of the Otoroshi instance, either in JSON or Prometheus format using the `Accept` header (with `application/json` / `application/prometheus` values) or the `format` query param (with `json` or `prometheus` values)\n\n## Endpoints security\n\nThe two endpoints are exposed publicly on the Otoroshi admin api. But you can remove the corresponding public pattern and query the endpoints using standard apikeys. If you don't want to use apikeys but don't want to expose the endpoints publicly, you can define two config. variables (`app.health.accessKey` or `HEALTH_ACCESS_KEY` and `otoroshi.metrics.accessKey` or `OTOROSHI_METRICS_ACCESS_KEY`) that will hold an access key for the endpoints. Then you can call the endpoints with an `access_key` query param with the value defined in the config. If you don't define `otoroshi.metrics.accessKey` but define `app.health.accessKey`, `otoroshi.metrics.accessKey` will have the value of `app.health.accessKey`.\n \n## Examples\n\nLet's say `app.health.accessKey` has the value `MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY`\n\n```sh\n$ curl http://otoroshi-api.oto.tools:8080/health\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n{\"otoroshi\":\"healthy\",\"datastore\":\"healthy\"}\n\n$ curl -H 'Accept: application/json' 
http://otoroshi-api.oto.tools:8080/metrics\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n{\"version\":\"4.0.0\",\"gauges\":{\"attr.app.commit\":{\"value\":\"xxxx\"},\"attr.app.id\":{\"value\":\"xxxx\"},\"attr.cluster.mode\":{\"value\":\"Leader\"},\"attr.cluster.name\":{\"value\":\"otoroshi-leader-0\"},\"attr.instance.env\":{\"value\":\"prod\"},\"attr.instance.id\":{\"value\":\"xxxx\"},\"attr.instance.number\":{\"value\":\"0\"},\"attr.jvm.cpu.usage\":{\"value\":136},\"attr.jvm.heap.size\":{\"value\":1409},\"attr.jvm.heap.used\":{\"value\":112},\"internals.0.concurrent-requests\":{\"value\":1},\"internals.global.throttling-quotas\":{\"value\":2},\"jvm.attr.name\":{\"value\":\"2085@xxxx\"},\"jvm.attr.uptime\":{\"value\":2296900},\"jvm.attr.vendor\":{\"value\":\"JDK11\"},\"jvm.gc.PS-MarkSweep.count\":{\"value\":3},\"jvm.gc.PS-MarkSweep.time\":{\"value\":261},\"jvm.gc.PS-Scavenge.count\":{\"value\":12},\"jvm.gc.PS-Scavenge.time\":{\"value\":161},\"jvm.memory.heap.committed\":{\"value\":1477967872},\"jvm.memory.heap.init\":{\"value\":1690304512},\"jvm.memory.heap.max\":{\"value\":3005218816},\"jvm.memory.heap.usage\":{\"value\":0.03916456777568639},\"jvm.memory.heap.used\":{\"value\":117698096},\"jvm.memory.non-heap.committed\":{\"value\":166445056},\"jvm.memory.non-heap.init\":{\"value\":7667712},\"jvm.memory.non-heap.max\":{\"value\":994050048},\"jvm.memory.non-heap.usage\":{\"value\":0.1523920694986979},\"jvm.memory.non-heap.used\":{\"value\":151485344},\"jvm.memory.pools.CodeHeap-'non-nmethods'.committed\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-nmethods'.max\":{\"value\":5832704},\"jvm.memory.pools.CodeHeap-'non-nmethods'.usage\":{\"value\":0.28408093398876405},\"jvm.memory.pools.CodeHeap-'non-nmethods'.used\":{\"value\":1656960},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.committed\":{\"value\":11796480},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.max\":{\"value\":122912768},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.usage\":{\"value\":0.09536102872567315},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.used\":{\"value\":11721088},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.committed\":{\"value\":37355520},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.max\":{\"value\":122912768},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.usage\":{\"value\":0.2538573047187417},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.used\":{\"value\":31202304},\"jvm.memory.pools.Compressed-Class-Space.committed\":{\"value\":14942208},\"jvm.memory.pools.Compressed-Class-Space.init\":{\"value\":0},\"jvm.memory.pools.Compressed-Class-Space.max\":{\"value\":367001600},\"jvm.memory.pools.Compressed-Class-Space.usage\":{\"value\":0.033858838762555805},\"jvm.memory.pools.Compressed-Class-Space.used\":{\"value\":12426248},\"jvm.memory.pools.Metaspace.committed\":{\"value\":99794944},\"jvm.memory.pools.Metaspace.init\":{\"value\":0},\"jvm.memory.pools.Metaspace.max\":{\"value\":375390208},\"jvm.memory.pools.Metaspace.usage\":{\"value\":0.25168142904782426},\"jvm.memory.pools.Metaspace.used\":{\"value\":94478744},\"jvm.memory.pools.PS-Eden-Space.committed\":{\"value\":349700096},\"jvm.memory.pools.PS-Eden-Space.init\":{\"value\":422576128},\"jvm.memory.pools.PS-Eden-Space.max\":{\"value\":1110966272},\"jvm.memory.pools.P
S-Eden-Space.usage\":{\"value\":0.07505125052077188},\"jvm.memory.pools.PS-Eden-Space.used\":{\"value\":83379408},\"jvm.memory.pools.PS-Eden-Space.used-after-gc\":{\"value\":0},\"jvm.memory.pools.PS-Old-Gen.committed\":{\"value\":1127219200},\"jvm.memory.pools.PS-Old-Gen.init\":{\"value\":1127219200},\"jvm.memory.pools.PS-Old-Gen.max\":{\"value\":2253914112},\"jvm.memory.pools.PS-Old-Gen.usage\":{\"value\":0.014950035505168354},\"jvm.memory.pools.PS-Old-Gen.used\":{\"value\":33696096},\"jvm.memory.pools.PS-Old-Gen.used-after-gc\":{\"value\":23791152},\"jvm.memory.pools.PS-Survivor-Space.committed\":{\"value\":1048576},\"jvm.memory.pools.PS-Survivor-Space.init\":{\"value\":70254592},\"jvm.memory.pools.PS-Survivor-Space.max\":{\"value\":1048576},\"jvm.memory.pools.PS-Survivor-Space.usage\":{\"value\":0.59375},\"jvm.memory.pools.PS-Survivor-Space.used\":{\"value\":622592},\"jvm.memory.pools.PS-Survivor-Space.used-after-gc\":{\"value\":622592},\"jvm.memory.total.committed\":{\"value\":1644412928},\"jvm.memory.total.init\":{\"value\":1697972224},\"jvm.memory.total.max\":{\"value\":3999268864},\"jvm.memory.total.used\":{\"value\":269184904},\"jvm.thread.blocked.count\":{\"value\":0},\"jvm.thread.count\":{\"value\":82},\"jvm.thread.daemon.count\":{\"value\":11},\"jvm.thread.deadlock.count\":{\"value\":0},\"jvm.thread.deadlocks\":{\"value\":[]},\"jvm.thread.new.count\":{\"value\":0},\"jvm.thread.runnable.count\":{\"value\":25},\"jvm.thread.terminated.count\":{\"value\":0},\"jvm.thread.timed_waiting.count\":{\"value\":10},\"jvm.thread.waiting.count\":{\"value\":47}},\"counters\":{},\"histograms\":{},\"meters\":{},\"timers\":{}}\n\n$ curl -H 'Accept: application/prometheus' http://otoroshi-api.oto.tools:8080/metrics\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n# TYPE attr_jvm_cpu_usage gauge\nattr_jvm_cpu_usage 83.0\n# TYPE attr_jvm_heap_size gauge\nattr_jvm_heap_size 1409.0\n# TYPE attr_jvm_heap_used gauge\nattr_jvm_heap_used 220.0\n# TYPE internals_0_concurrent_requests gauge\ninternals_0_concurrent_requests 1.0\n# TYPE internals_global_throttling_quotas gauge\ninternals_global_throttling_quotas 3.0\n# TYPE jvm_attr_uptime gauge\njvm_attr_uptime 2372614.0\n# TYPE jvm_gc_PS_MarkSweep_count gauge\njvm_gc_PS_MarkSweep_count 3.0\n# TYPE jvm_gc_PS_MarkSweep_time gauge\njvm_gc_PS_MarkSweep_time 261.0\n# TYPE jvm_gc_PS_Scavenge_count gauge\njvm_gc_PS_Scavenge_count 12.0\n# TYPE jvm_gc_PS_Scavenge_time gauge\njvm_gc_PS_Scavenge_time 161.0\n# TYPE jvm_memory_heap_committed gauge\njvm_memory_heap_committed 1.477967872E9\n# TYPE jvm_memory_heap_init gauge\njvm_memory_heap_init 1.690304512E9\n# TYPE jvm_memory_heap_max gauge\njvm_memory_heap_max 3.005218816E9\n# TYPE jvm_memory_heap_usage gauge\njvm_memory_heap_usage 0.07680553268571043\n# TYPE jvm_memory_heap_used gauge\njvm_memory_heap_used 2.30817432E8\n# TYPE jvm_memory_non_heap_committed gauge\njvm_memory_non_heap_committed 1.66510592E8\n# TYPE jvm_memory_non_heap_init gauge\njvm_memory_non_heap_init 7667712.0\n# TYPE jvm_memory_non_heap_max gauge\njvm_memory_non_heap_max 9.94050048E8\n# TYPE jvm_memory_non_heap_usage gauge\njvm_memory_non_heap_usage 0.15262878997416435\n# TYPE jvm_memory_non_heap_used gauge\njvm_memory_non_heap_used 1.51720656E8\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__committed gauge\njvm_memory_pools_CodeHeap__non_nmethods__committed 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__init gauge\njvm_memory_pools_CodeHeap__non_nmethods__init 2555904.0\n# TYPE 
jvm_memory_pools_CodeHeap__non_nmethods__max gauge\njvm_memory_pools_CodeHeap__non_nmethods__max 5832704.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__usage gauge\njvm_memory_pools_CodeHeap__non_nmethods__usage 0.28408093398876405\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__used gauge\njvm_memory_pools_CodeHeap__non_nmethods__used 1656960.0\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__committed gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__committed 1.1862016E7\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__init gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__init 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__max gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__max 1.22912768E8\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__usage gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__usage 0.09610562183417755\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__used gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__used 1.1812608E7\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__committed gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__committed 3.735552E7\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__init gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__init 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__max gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__max 1.22912768E8\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__usage gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__usage 0.25493618368435084\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__used gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__used 3.1334912E7\n# TYPE jvm_memory_pools_Compressed_Class_Space_committed gauge\njvm_memory_pools_Compressed_Class_Space_committed 1.4942208E7\n# TYPE jvm_memory_pools_Compressed_Class_Space_init gauge\njvm_memory_pools_Compressed_Class_Space_init 0.0\n# TYPE jvm_memory_pools_Compressed_Class_Space_max gauge\njvm_memory_pools_Compressed_Class_Space_max 3.670016E8\n# TYPE jvm_memory_pools_Compressed_Class_Space_usage gauge\njvm_memory_pools_Compressed_Class_Space_usage 0.03386023385184152\n# TYPE jvm_memory_pools_Compressed_Class_Space_used gauge\njvm_memory_pools_Compressed_Class_Space_used 1.242676E7\n# TYPE jvm_memory_pools_Metaspace_committed gauge\njvm_memory_pools_Metaspace_committed 9.9794944E7\n# TYPE jvm_memory_pools_Metaspace_init gauge\njvm_memory_pools_Metaspace_init 0.0\n# TYPE jvm_memory_pools_Metaspace_max gauge\njvm_memory_pools_Metaspace_max 3.75390208E8\n# TYPE jvm_memory_pools_Metaspace_usage gauge\njvm_memory_pools_Metaspace_usage 0.25170985813247426\n# TYPE jvm_memory_pools_Metaspace_used gauge\njvm_memory_pools_Metaspace_used 9.4489416E7\n# TYPE jvm_memory_pools_PS_Eden_Space_committed gauge\njvm_memory_pools_PS_Eden_Space_committed 3.49700096E8\n# TYPE jvm_memory_pools_PS_Eden_Space_init gauge\njvm_memory_pools_PS_Eden_Space_init 4.22576128E8\n# TYPE jvm_memory_pools_PS_Eden_Space_max gauge\njvm_memory_pools_PS_Eden_Space_max 1.110966272E9\n# TYPE jvm_memory_pools_PS_Eden_Space_usage gauge\njvm_memory_pools_PS_Eden_Space_usage 0.17698545577448457\n# TYPE jvm_memory_pools_PS_Eden_Space_used gauge\njvm_memory_pools_PS_Eden_Space_used 1.96624872E8\n# TYPE jvm_memory_pools_PS_Eden_Space_used_after_gc gauge\njvm_memory_pools_PS_Eden_Space_used_after_gc 0.0\n# TYPE jvm_memory_pools_PS_Old_Gen_committed gauge\njvm_memory_pools_PS_Old_Gen_committed 1.1272192E9\n# TYPE jvm_memory_pools_PS_Old_Gen_init 
gauge\njvm_memory_pools_PS_Old_Gen_init 1.1272192E9\n# TYPE jvm_memory_pools_PS_Old_Gen_max gauge\njvm_memory_pools_PS_Old_Gen_max 2.253914112E9\n# TYPE jvm_memory_pools_PS_Old_Gen_usage gauge\njvm_memory_pools_PS_Old_Gen_usage 0.014950035505168354\n# TYPE jvm_memory_pools_PS_Old_Gen_used gauge\njvm_memory_pools_PS_Old_Gen_used 3.3696096E7\n# TYPE jvm_memory_pools_PS_Old_Gen_used_after_gc gauge\njvm_memory_pools_PS_Old_Gen_used_after_gc 2.3791152E7\n# TYPE jvm_memory_pools_PS_Survivor_Space_committed gauge\njvm_memory_pools_PS_Survivor_Space_committed 1048576.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_init gauge\njvm_memory_pools_PS_Survivor_Space_init 7.0254592E7\n# TYPE jvm_memory_pools_PS_Survivor_Space_max gauge\njvm_memory_pools_PS_Survivor_Space_max 1048576.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_usage gauge\njvm_memory_pools_PS_Survivor_Space_usage 0.59375\n# TYPE jvm_memory_pools_PS_Survivor_Space_used gauge\njvm_memory_pools_PS_Survivor_Space_used 622592.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_used_after_gc gauge\njvm_memory_pools_PS_Survivor_Space_used_after_gc 622592.0\n# TYPE jvm_memory_total_committed gauge\njvm_memory_total_committed 1.644478464E9\n# TYPE jvm_memory_total_init gauge\njvm_memory_total_init 1.697972224E9\n# TYPE jvm_memory_total_max gauge\njvm_memory_total_max 3.999268864E9\n# TYPE jvm_memory_total_used gauge\njvm_memory_total_used 3.82665128E8\n# TYPE jvm_thread_blocked_count gauge\njvm_thread_blocked_count 0.0\n# TYPE jvm_thread_count gauge\njvm_thread_count 82.0\n# TYPE jvm_thread_daemon_count gauge\njvm_thread_daemon_count 11.0\n# TYPE jvm_thread_deadlock_count gauge\njvm_thread_deadlock_count 0.0\n# TYPE jvm_thread_new_count gauge\njvm_thread_new_count 0.0\n# TYPE jvm_thread_runnable_count gauge\njvm_thread_runnable_count 25.0\n# TYPE jvm_thread_terminated_count gauge\njvm_thread_terminated_count 0.0\n# TYPE jvm_thread_timed_waiting_count gauge\njvm_thread_timed_waiting_count 10.0\n# TYPE jvm_thread_waiting_count gauge\njvm_thread_waiting_count 47.0\n```"},{"name":"mtls.md","id":"/topics/mtls.md","url":"/topics/mtls.html","title":"Mutual TLS with Otoroshi","content":"# Mutual TLS with Otoroshi\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated\n@@@\n\nOtoroshi supports mutual TLS out of the box. mTLS from client to Otoroshi and from Otoroshi to targets is supported. In this article we will see how to configure Otoroshi to use end-to-end mTLS. All code and files used in this article can be found on the [Otoroshi github](https://github.com/MAIF/otoroshi/tree/master/demos/mtls)\n\n@@@ note { title=\"Experimental Feature\" }\nDynamic Mutual TLS is an experimental feature. 
It can change until it becomes an official feature\n@@@\n\n## End-to-end mTLS\n\nThe use case is the following:\n\n@@@ div { .centered-img }\n\n@@@\n\nfor this demo you will have to edit your `/etc/hosts` file to add the following entries\n\n```\n127.0.0.1 api.backend.lol api.frontend.lol www.backend.lol www.frontend.lol validation.backend.lol\n```\n\n### Create certificates\n\nBut first we need to generate some certificates to make the demo work\n\n```sh\nmkdir mtls-demo\ncd mtls-demo\nmkdir ca\nmkdir server\nmkdir client\n\n# create a certificate authority key, use password as pass phrase\nopenssl genrsa -out ./ca/ca-backend.key 4096\n# remove pass phrase\nopenssl rsa -in ./ca/ca-backend.key -out ./ca/ca-backend.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key ./ca/ca-backend.key -out ./ca/ca-backend.cer -subj \"/CN=MTLSB\"\n\n\n# create a certificate authority key, use password as pass phrase\nopenssl genrsa -out ./ca/ca-frontend.key 2048\n# remove pass phrase\nopenssl rsa -in ./ca/ca-frontend.key -out ./ca/ca-frontend.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key ./ca/ca-frontend.key -out ./ca/ca-frontend.cer -subj \"/CN=MTLSF\"\n\n\n# now create the backend cert key, use password as pass phrase\nopenssl genrsa -out ./server/_.backend.lol.key 2048\n# remove pass phrase\nopenssl rsa -in ./server/_.backend.lol.key -out ./server/_.backend.lol.key\n# generate the csr for the certificate\nopenssl req -new -key ./server/_.backend.lol.key -sha256 -out ./server/_.backend.lol.csr -subj \"/CN=*.backend.lol\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./server/_.backend.lol.csr -CA ./ca/ca-backend.cer -CAkey ./ca/ca-backend.key -set_serial 1 -out ./server/_.backend.lol.cer\n# verify the certificate, should output './server/_.backend.lol.cer: OK'\nopenssl verify -CAfile ./ca/ca-backend.cer ./server/_.backend.lol.cer\n\n\n# now create the frontend cert key, use password as pass phrase\nopenssl genrsa -out ./server/_.frontend.lol.key 2048\n# remove pass phrase\nopenssl rsa -in ./server/_.frontend.lol.key -out ./server/_.frontend.lol.key\n# generate the csr for the certificate\nopenssl req -new -key ./server/_.frontend.lol.key -sha256 -out ./server/_.frontend.lol.csr -subj \"/CN=*.frontend.lol\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./server/_.frontend.lol.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 1 -out ./server/_.frontend.lol.cer\n# verify the certificate, should output './server/_.frontend.lol.cer: OK'\nopenssl verify -CAfile ./ca/ca-frontend.cer ./server/_.frontend.lol.cer\n\n\n# now create the client cert key for backend, use password as pass phrase\nopenssl genrsa -out ./client/_.backend.lol.key 2048\n# remove pass phrase\nopenssl rsa -in ./client/_.backend.lol.key -out ./client/_.backend.lol.key\n# generate the csr for the certificate\nopenssl req -new -key ./client/_.backend.lol.key -out ./client/_.backend.lol.csr -subj \"/CN=*.backend.lol\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./client/_.backend.lol.csr -CA ./ca/ca-backend.cer -CAkey ./ca/ca-backend.key -set_serial 2 -out ./client/_.backend.lol.cer\n# generate a pkcs12 version of the cert and key, use password as password\nopenssl pkcs12 -export -clcerts -in client/_.backend.lol.cer -inkey client/_.backend.lol.key -out client/_.backend.lol.p12\n\n\n# now create the client cert key for frontend, use password as pass phrase\nopenssl 
genrsa -out ./client/_.frontend.lol.key 2048\n# remove pass phrase\nopenssl rsa -in ./client/_.frontend.lol.key -out ./client/_.frontend.lol.key\n# generate the csr for the certificate\nopenssl req -new -key ./client/_.frontend.lol.key -out ./client/_.frontend.lol.csr -subj \"/CN=*.frontend.lol\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./client/_.frontend.lol.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 2 -out ./client/_.frontend.lol.cer\n# generate a pkcs12 version of the cert and key, use password as password\nopenssl pkcs12 -export -clcerts -in client/_.frontend.lol.cer -inkey client/_.frontend.lol.key -out client/_.frontend.lol.p12\n```\n\nonce it's done, you should have something like\n\n```sh\n$ tree\n.\n├── backend.js\n├── ca\n│ ├── ca-backend.cer\n│ ├── ca-backend.key\n│ ├── ca-frontend.cer\n│ └── ca-frontend.key\n├── client\n│ ├── _.backend.lol.cer\n│ ├── _.backend.lol.csr\n│ ├── _.backend.lol.key\n│ ├── _.backend.lol.p12\n│ ├── _.frontend.lol.cer\n│ ├── _.frontend.lol.csr\n│ ├── _.frontend.lol.key\n│ └── _.frontend.lol.p12\n└── server\n ├── _.backend.lol.cer\n ├── _.backend.lol.csr\n ├── _.backend.lol.key\n ├── _.frontend.lol.cer\n ├── _.frontend.lol.csr\n └── _.frontend.lol.key\n\n3 directories, 18 files\n```\n\n### The backend service \n\nnow, let's create a backend service using nodejs. Create a file named `backend.js`\n\n```sh\ntouch backend.js\n```\n\nand put the following content\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nconst options = { \n key: fs.readFileSync('./server/_.backend.lol.key'), \n cert: fs.readFileSync('./server/_.backend.lol.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n}; \n\nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {\n 'Content-Type': 'application/json'\n }); \n res.end(JSON.stringify({ message: 'Hello World!' }) + \"\\n\"); \n}).listen(8444);\n```\n\nto run the server, just do \n\n```sh\nnode ./backend.js\n```\n\nnow you can try your server with\n\n```sh\ncurl --cacert ./ca/ca-backend.cer https://api.backend.lol:8444/\n# will print {\"message\":\"Hello World!\"}\n```\n\nnow modify your backend server to ensure that the client provides a client certificate like:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nconst options = { \n key: fs.readFileSync('./server/_.backend.lol.key'), \n cert: fs.readFileSync('./server/_.backend.lol.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n requestCert: true, \n rejectUnauthorized: true\n}; \n\nhttps.createServer(options, (req, res) => { \n console.log('Client certificate CN: ', req.socket.getPeerCertificate().subject.CN);\n res.writeHead(200, {\n 'Content-Type': 'application/json'\n }); \n res.end(JSON.stringify({ message: 'Hello World!' 
}) + \"\\n\"); \n}).listen(8444);\n```\n\nyou can test your new server with\n\n```sh\ncurl --cacert ./ca/ca-backend.cer --cert-type pkcs12 --cert ./client/_.backend.lol.p12:password https://api.backend.lol:8444/\n# will print {\"message\":\"Hello World!\"}\n```\n\n### Otoroshi setup\n\nDownload the latest version of the Otoroshi jar and run it like\n\n```sh\njava -jar otoroshi.jar\n\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / xxxxxxxxxxxx\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n[info] otoroshi-env - Generating a self signed SSL certificate for https://*.oto.tools ...\n```\n\nand log into otoroshi with the tuple `admin@otoroshi.io / xxxxxxxxxxxx` displayed in the logs. Once logged in, create a new public service exposed on `http://api.frontend.lol` that targets `ahttps://api.backend.lol:8444/`.\n\n@@@ div { .centered-img }\n\n@@@\n\nand test it\n\n```sh\ncurl http://api.frontend.lol:8080/\n# the following error should be returned: {\"Otoroshi-Error\":\"Something went wrong, you should try later. Thanks for your understanding.\"}\n```\n\n@@@ warning\nAs seen before, the target of the otoroshi service is `ahttps://api.backend.lol:8444/`. `ahttps://` is not a typo and is intended. This tells otoroshi to use its experimental `http client` with dynamic tls support to fetch this resource.\n@@@\n\nyou should get an error because Otoroshi doesn't know about the server certificate or the client certificate expected by the server.\n\nWe have to add the client certificate for `https://api.backend.lol` to Otoroshi. Go to http://otoroshi.oto.tools:8080/bo/dashboard/certificates and create a new item. Copy and paste the content of `./client/_.backend.lol.cer` and `./client/_.backend.lol.key` respectively in `Certificate full chain` and `Certificate private key`.\n\n@@@ div { .centered-img }\n\n@@@\n\nand retry the following curl command \n\n```sh\ncurl http://api.frontend.lol:8080/\n# the output should be: {\"message\":\"Hello World!\"}\n```\n\nnow we have to expose `https://api.frontend.lol:8443` using otoroshi. Go to http://otoroshi.oto.tools:8080/bo/dashboard/certificates and create a new item. Copy and paste the content of `./server/_.frontend.lol.cer` and `./server/_.frontend.lol.key` respectively in `Certificate full chain` and `Certificate private key`.\n\nand try the following command\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer https://api.frontend.lol:8443/\n# the output should be: {\"message\":\"Hello World!\"}\n```\n\nnow we have to enforce the fact that we want client certificates for `api.frontend.lol`. To do that, we have to create a `Validation authority` in otoroshi and use it on the `api.frontend.lol` service. Go to http://otoroshi.oto.tools:8080/bo/dashboard/validation-authorities and create a new item. A validation authority is supposed to be a remote service that will say if the client certificate is valid. 
Here we don't really care if the certificate is valid or not, but we want to enforce the fact that there is a client certificate. So just check the `All cert. valid` button.\n\n@@@ div { .centered-img }\n\n@@@\n\nnow go back to your `api.frontend.lol` service, in the `Validation authority` section and select the authority you just created.\n\n@@@ div { .centered-img }\n\n@@@\n\nnow if you retry \n\n```sh\ncurl --cacert ./ca/ca-frontend.cer https://api.frontend.lol:8443/\n# the output should be: {\"Otoroshi-Error\":\"You're not authorized here !\"}\n```\n\nyou should get an error because no client cert. is passed with the request. But if you pass the `./client/_.frontend.lol.p12` client cert in your curl call\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/_.frontend.lol.p12:password https://api.frontend.lol:8443/\n# the output should be: {\"message\":\"Hello World!\"}\n```\n\n### End to end test\n\nNow we can try to write a small nodejs client that uses our client certificates. Create a `client.js` file with the following code\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nprocess.env['NODE_TLS_REJECT_UNAUTHORIZED'] = 0;\n\nconst options = { \n hostname: 'api.frontend.lol', \n port: 8443, \n path: '/', \n method: 'GET', \n key: fs.readFileSync('./client/_.frontend.lol.key'), \n cert: fs.readFileSync('./client/_.frontend.lol.cer'), \n ca: fs.readFileSync('./ca/ca-frontend.cer'), \n}; \n\nconst req = https.request(options, (res) => { \n console.log('statusCode', res.statusCode);\n console.log('headers', res.headers);\n console.log('body:');\n res.on('data', (data) => { \n process.stdout.write(data); \n }); \n}); \n\nreq.end(); \n\nreq.on('error', (e) => { \n console.error(e); \n});\n```\n\nand run the following command\n\n```sh\n$ node client.js\n# statusCode 200\n# headers { date: 'Mon, 10 Dec 2018 16:01:11 GMT',\n# connection: 'close',\n# 'transfer-encoding': 'chunked',\n# 'content-type': 'application/json' }\n# body:\n# {\"message\":\"Hello World!\"}\n```\n\nAnd that's it \n\n## Validating client certificates based on user identity\n\n@@@ note { title=\"Experimental Feature\" }\nThe validation authorities feature is experimental. It can change until it becomes an official feature\n@@@\n\nThe use case is the following:\n\n@@@ div { .centered-img }\n\n@@@\n\nthe idea here is to provide a unique client certificate per device that can access Otoroshi and use a validation authority to check if the user is allowed to access the underlying app with a specific device.\n\n### Generate client certificates for devices\n\nTo do that we are going to create two client certificates, one per device (let's say a laptop and a desktop computer). 
We are going to use the device serial number as the common name of the certificate to be able to identify the device behind the certificate.\n\n```sh\nopenssl genrsa -out ./client/device-1.key 2048\nopenssl rsa -in ./client/device-1.key -out ./client/device-1.key\nopenssl req -new -key ./client/device-1.key -out ./client/device-1.csr -subj \"/CN=mbp-123456789\"\nopenssl x509 -req -days 365 -sha256 -in ./client/device-1.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 3 -out ./client/device-1\nopenssl pkcs12 -export -clcerts -in client/device-1 -inkey client/device-1.key -out client/device-1.p12\n\nopenssl genrsa -out ./client/device-2.key 2048\nopenssl rsa -in ./client/device-2.key -out ./client/device-2.key\nopenssl req -new -key ./client/device-2.key -out ./client/device-2.csr -subj \"/CN=nuc-987654321\"\nopenssl x509 -req -days 365 -sha256 -in ./client/device-2.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 4 -out ./client/device-2\nopenssl pkcs12 -export -clcerts -in client/device-2 -inkey client/device-2.key -out client/device-2.p12\n```\n\n### Setup actual validation\n\nnow we are going to write a validation authority (with mTLS too) that is going to respond on `https://validation.backend.lol:8445`. The server has access to a list of apps, users and devices to check if everything is correct. In this implementation, the lists are hardcoded, but you can write your own implementation that will fetch data from your corporate LDAP, CA, etc. Create a `validation.js` file and add the following content. Don't forget to do `yarn add x509` before running the server with `node validation.js`\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \nconst x509 = require('x509');\n\n// list of known apps\nconst apps = [\n {\n \"id\": \"iogOIDH09EktFhydTp8xspGvdaBq961DUDr6MBBNwHO2EiBMlOdafGnImhbRGy8z\",\n \"name\": \"my-web-service\",\n \"description\": \"A service that says hello\",\n \"host\": \"www.frontend.lol\"\n }\n];\n\n// list of known users\nconst users = [\n {\n \"name\": \"Mathieu\",\n \"email\": \"mathieu@oto.tools\",\n \"appRights\": [\n {\n \"id\": \"iogOIDH09EktFhydTp8xspGvdaBq961DUDr6MBBNwHO2EiBMlOdafGnImhbRGy8z\",\n \"profile\": \"user\",\n \"forbidden\": false\n },\n {\n \"id\": \"PqgOIDH09EktFhydTp8xspGvdaBq961DUDr6MBBNwHO2EiBMlOdafGnImhbRGy8z\",\n \"profile\": \"none\",\n \"forbidden\": true\n },\n ],\n \"ownedDevices\": [\n \"mbp-123456789\",\n \"nuc-987654321\",\n ]\n }\n];\n\n// list of known devices\nconst devices = [\n {\n \"serialNumber\": \"mbp-123456789\",\n \"hardware\": \"Macbook Pro 2018 13 inc. 
with TouchBar, 2.6 GHz, 16 Gb\",\n \"acquiredAt\": \"2018-10-01\",\n },\n {\n \"serialNumber\": \"nuc-987654321\",\n \"hardware\": \"Intel NUC i7 3.0 GHz, 32 Gb\",\n \"acquiredAt\": \"2018-09-01\",\n },\n {\n \"serialNumber\": \"iphone-1234\",\n \"hardware\": \"Iphone XS, 256 Gb\",\n \"acquiredAt\": \"2018-12-01\",\n }\n];\n\nconst options = { \n key: fs.readFileSync('./server/_.backend.lol.key'), \n cert: fs.readFileSync('./server/_.backend.lol.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n requestCert: true, \n rejectUnauthorized: true\n}; \n\nfunction readBody(request) {\n return new Promise((success, failure) => {\n const body = [];\n request.on('data', (chunk) => {\n body.push(chunk);\n }).on('end', () => {\n const bodyStr = Buffer.concat(body).toString();\n success(JSON.parse(bodyStr));\n });\n });\n}\n\nfunction chainIsValid(chain) {\n // validate cert dates\n // validate cert against CRL\n // validate whatever you want here\n return true;\n}\n\nfunction call(req, res) {\n readBody(req).then(body => {\n const service = body.service;\n const email = (body.user || { email: 'mathieu@oto.tools' }).email; // here, should not be null if used with an otoroshi auth. module\n // common name should be the device serial number\n const commonName = x509.getSubject(body.chain).commonName\n // search for a known device\n const device = devices.filter(d => d.serialNumber === commonName)[0];\n // search for a known user\n const user = users.filter(d => d.email === email)[0];\n // search for a known application\n const app = apps.filter(d => d.id === service.id)[0];\n res.writeHead(200, { 'Content-Type': 'application/json' }); \n if (chainIsValid(body.chain.map(x509.parseCert)) && user && device && app) {\n // check if the user actually owns the device\n const userOwnsDevice = user.ownedDevices.filter(d => d === device.serialNumber)[0];\n // check if the user has rights to access the app\n const rights = user.appRights.filter(d => d.id === app.id)[0];\n const hasRightToUseApp = !rights.forbidden\n if (userOwnsDevice && hasRightToUseApp) {\n // yeah !!!!\n console.log(`Call from user \"${user.email}\" with device \"${device.hardware}\" on app \"${app.name}\" with profile \"${rights.profile}\" authorized`)\n res.end(JSON.stringify({ status: 'good', profile: rights.profile }) + \"\\n\"); \n } else {\n // nope !!! 
nope, nope nope\n console.log(`Call from user \"${user.email}\" with device \"${device.hardware}\" on app \"${app.name}\" unauthorized because the user doesn't own the hardware or has no rights`)\n res.end(JSON.stringify({ status: 'unauthorized' }) + \"\\n\"); \n }\n } else {\n console.log(`Call unauthorized`)\n res.end(JSON.stringify({ status: 'unauthorized' }) + \"\\n\"); \n }\n });\n}\n\nhttps.createServer(options, call).listen(8445);\n```\n\nthe corresponding validation authority can be created in Otoroshi like \n\n```json\n{\n \"id\": \"r7m8j31rh66hhdia3ormfm0wfevu1kvg0zgaxsp3oxb6ivf7fy8kvygmvnrlxv81\",\n \"name\": \"Actual validation authority\",\n \"description\": \"Actual validation authority\",\n \"url\": \"ahttps://validation.backend.lol:8445\",\n \"host\": \"validation.backend.lol\",\n \"goodTtl\": 600000,\n \"badTtl\": 60000,\n \"method\": \"POST\",\n \"path\": \"/certificates/_validate\",\n \"timeout\": 10000,\n \"noCache\": false,\n \"alwaysValid\": false,\n \"headers\": {}\n}\n```\n\nbut you don't need to create it right now.\n\nTypically, a validation authority server is a server with a route on `POST /certificates/_validate` that accepts `application/json` and returns `application/json` with a body like\n\n```json\n{\n \"apikey\": nullable {\n \"clientId\": String,\n \"clientName\": String,\n \"authorizedEntities\": Seq[String],\n \"enabled\": Boolean,\n \"readOnly\": Boolean,\n \"allowClientIdOnly\": Boolean,\n \"throttlingQuota\": Long,\n \"dailyQuota\": Long,\n \"monthlyQuota\": Long,\n \"metadata\": Map[String, String]\n },\n \"user\": nullable {\n \"email\": String,\n \"name\": String,\n },\n \"service\": {\n \"id\": String,\n \"name\": String,\n \"groups\": Seq[String],\n \"domain\": String,\n \"env\": String,\n \"subdomain\": String,\n \"root\": String,\n \"metadata\": String\n },\n \"chain\": PemFormattedCertificateChainString,\n \"fingerprints\": Array[String]\n}\n```\n\n\n### Setup Otoroshi\n\nYou can start Otoroshi and import data from the `state.json` file in the demo folder. The login tuple is `admin@otoroshi.io / password`. The `state.json` file contains everything you need for the demo, like certificates, service descriptors, auth. modules, etc ...\n\n```sh\njava -Dapp.importFrom=$(pwd)/state.json -Dapp.privateapps.port=8080 -jar otoroshi.jar\n\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - Importing from: /pwd/state.json\n[info] play.api.Play - Application started (Prod)\n[info] otoroshi-env - Successful import !\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n```\n\n### Testing \n\nYou can test the service with curl like\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/device-1.p12:password https://www.frontend.lol:8443/\n# output: Hello World !!!
\ncurl --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/device-2.p12:password https://www.frontend.lol:8443/\n# output: Hello World !!!
\ncurl --cacert ./ca/ca-frontend.cer --cert-type pkcs12 --cert ./client/_.frontend.lol.p12:password https://www.frontend.lol:8443/\n# output: {\"Otoroshi-Error\":\"You're not authorized here !\"}\n```\n\nas expected, the first two calls work as their common names are known by the validation server. The last one fails as it is not known.\n\n### Validate user identity\n\nNow let's try to set up firefox to provide the client certificate. Open firefox settings, go to `privacy settings and security` and click on `display certificates` at the bottom of the page. Here you can add the frontend CA (`./ca/ca-frontend.cer`) in the `Authorities` tab, check the 'authorize this CA to identify websites', and then in the `certificates` tab, import one of the device `.p12` files (like `./client/device-1.p12`). Firefox will ask for the file's password (it should be `password`).\n\n@@@ div { .centered-img }\n\n@@@\n\nNow restart firefox.\n\nNext, go to the `my-web-service` service in otoroshi (log in with `admin@otoroshi.io / password`) and activate `Enforce user login` in the Authentication section. It means that now, you'll have to log in when you go to https://www.frontend.lol:8443. With authentication activated on otoroshi, the user identity will be sent to the validation authority, so you can change the following line in the file `validation.js`\n\n```js\nconst email = (body.user || { email: 'mathieu@oto.tools' }).email; // here, should not be null if used with an otoroshi auth. module\n```\n\nto\n\n```js\nconst email = body.user.email;\n```\n\nThen, in Firefox, go to https://www.frontend.lol:8443/, firefox will ask which client certificate to use. Select the one you imported (in the process, maybe firefox will warn you that the certificate of the site is self-signed, just ignore it and continue ;) )\n\n@@@ div { .centered-img }\n\n@@@\n\nthen, you'll see a login screen from otoroshi. You can log in with `mathieu@oto.tools / password` and then you should see the hello world message.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Going further with user authentication\n\nFor stronger user authentication, you can try to use an auth. module backed by a keycloak instance with a yubikey as a strong second factor authentication instead of the basic auth. module we used previously in this article.\n"},{"name":"plugins.md","id":"/topics/plugins.md","url":"/topics/plugins.html","title":"Otoroshi plugins","content":"# Otoroshi plugins\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated\n@@@\n\nWhen everything has failed and you absolutely need a feature in Otoroshi to make your use case work, there is a solution. Plugins are the Otoroshi feature that lets you code how Otoroshi should behave when receiving, validating and routing an http request. With request plugins, you can change request / response headers and request / response bodies the way you want, provide your own apikey, etc.\n\n## Plugin types\n\nthere are many plugin types\n\n* `request sinks` plugins: used when no services are matched in otoroshi. Can reply with any content\n* `pre-routes` plugins: used to extract values (like custom apikeys) and provide them to other plugins or the otoroshi engine\n* `access validation` plugins: used to validate if a request can pass or not based on whatever you want\n* `request transformer` plugins: used to transform requests, responses and their bodies. 
Can be used to return arbitrary content\n* `event listener` plugins: any plugin type can listen to otoroshi internal events and react to them\n* `job` plugins: tasks that can run automatically once, be scheduled with a cron expression, or run at a defined interval\n\n## Code and signatures\n\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/requestsink.scala#L11-L16\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/routing.scala#L60-L63\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/accessvalidator.scala#L63-L82\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/script.scala#L314-L455\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/eventlistener.scala#L27-L48\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/job.scala#L74-L81\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/job.scala#L108-L110\n\n\nfor more information about APIs you can use\n\n* https://www.playframework.com/documentation/2.6.x/api/scala/index.html#package\n* https://www.playframework.com/documentation/2.6.x/api/scala/index.html#play.api.mvc.Results\n* https://github.com/MAIF/otoroshi\n* https://doc.akka.io/docs/akka/2.5/stream/index.html\n* https://doc.akka.io/api/akka/current/akka/stream/index.html\n* https://doc.akka.io/api/akka/current/akka/stream/scaladsl/Source.html\n\n## Plugin examples\n\nA lot of plugins come with otoroshi, you can find them on [github](https://github.com/MAIF/otoroshi/tree/master/otoroshi/app/plugins)\n\n## Writing a plugin from Otoroshi UI\n\nLog into Otoroshi and go to `Settings (cog icon) / Plugins`. Here you can create multiple request transformer scripts and associate them with service descriptors later.\n\n@@@ div { .centered-img }\n\n@@@\n\nwhen you write for instance a transformer in the Otoroshi UI, do the following\n\n```scala\nimport akka.stream.Materializer\nimport env.Env\nimport models.{ApiKey, PrivateAppsUser, ServiceDescriptor}\nimport otoroshi.script._\nimport play.api.Logger\nimport play.api.mvc.{Result, Results}\nimport scala.util._\nimport scala.concurrent.{ExecutionContext, Future}\n\nclass MyTransformer extends RequestTransformer {\n\n val logger = Logger(\"my-transformer\")\n\n // implement the methods you want\n}\n\n// WARN: do not forget this line to provide a working instance of your transformer to Otoroshi\nnew MyTransformer()\n```\n\nYou can use the compile button to check if the script compiles, or code the transformer in your IDE (see next point).\n\nThen go to a service descriptor, scroll to the bottom of the page, and select your transformer in the list\n\n@@@ div { .centered-img }\n\n@@@\n\n## Providing a transformer from Java classpath\n\nYou can write your own transformer using your favorite IDE. Just create an SBT project with the following dependencies. 
It can be quite handy to manage the source code like any other piece of code, and it avoids the compilation time for the script at Otoroshi startup.\n\n```scala\nlazy val root = (project in file(\".\")).\n settings(\n inThisBuild(List(\n organization := \"com.example\",\n scalaVersion := \"2.12.7\",\n version := \"0.1.0-SNAPSHOT\"\n )),\n name := \"request-transformer-example\",\n resolvers += Resolver.bintrayRepo(\"maif\", \"maven\"),\n libraryDependencies += \"fr.maif.otoroshi\" %% \"otoroshi\" % \"1.x.x\"\n )\n```\n\nWhen your code is ready, create a jar file \n\n```\nsbt package\n```\n\nand add the jar file to the Otoroshi classpath\n\n```sh\njava -cp \"/path/to/transformer.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nthen, in your service descriptor, you can choose your transformer in the list. If you want to do it from the API, you have to define the transformerRef using the `cp:` prefix like \n\n```json\n{\n \"transformerRef\": \"cp:my.class.package.MyTransformer\"\n}\n```\n\n## Getting custom configuration from the Otoroshi config. file\n\nLet's say you need to provide custom configuration values for a script, then you can customize a configuration file of Otoroshi\n\n```hocon\ninclude \"application.conf\"\n\notoroshi {\n scripts {\n enabled = true\n }\n}\n\nmy-transformer {\n env = \"prod\"\n maxRequestBodySize = 2048\n maxResponseBodySize = 2048\n}\n```\n\nthen start Otoroshi like\n\n```sh\njava -Dconfig.file=/path/to/custom.conf -jar otoroshi.jar\n```\n\nthen, in your transformer, you can write something like \n\n```scala\npackage com.example.otoroshi\n\nimport akka.stream.Materializer\nimport akka.stream.scaladsl._\nimport akka.util.ByteString\nimport env.Env\nimport models.{ApiKey, PrivateAppsUser, ServiceDescriptor}\nimport otoroshi.script._\nimport play.api.Logger\nimport play.api.mvc.{Result, Results}\nimport scala.util._\nimport scala.concurrent.{ExecutionContext, Future}\n\nclass BodyLengthLimiter extends RequestTransformer {\n\n override def transformResponseWithCtx(ctx: TransformerResponseContext)(implicit env: Env, ec: ExecutionContext, mat: Materializer): Source[ByteString, _] = {\n val max = env.configuration.getOptional[Long](\"my-transformer.maxResponseBodySize\").getOrElse(Long.MaxValue)\n ctx.body.limitWeighted(max)(_.size)\n }\n\n override def transformRequestWithCtx(ctx: TransformerRequestContext)(implicit env: Env, ec: ExecutionContext, mat: Materializer): Source[ByteString, _] = {\n val max = env.configuration.getOptional[Long](\"my-transformer.maxRequestBodySize\").getOrElse(Long.MaxValue)\n ctx.body.limitWeighted(max)(_.size)\n }\n}\n```\n\n## Using a library that is not embedded in Otoroshi\n\nJust use the `classpath` option when running Otoroshi\n\n```sh\njava -cp \"/path/to/library.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nBe careful as your library can conflict with other libraries used by Otoroshi and affect its stability\n\n## Enabling plugins\n\nplugins can be enabled per service from the service settings page or globally from the danger zone in the plugins section.\n"},{"name":"snow-monkey.md","id":"/topics/snow-monkey.md","url":"/topics/snow-monkey.html","title":"Chaos engineering with the Snow Monkey","content":"# Chaos engineering with the Snow Monkey\n\nNihonzaru (the Snow Monkey) is the chaos engineering tool provided by Otoroshi. 
You can access it at `Settings (cog icon) / Snow Monkey`.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Chaos engineering\n\nOtoroshi offers some tools to introduce [chaos engineering](https://principlesofchaos.org/) in your everyday life. With chaos engineering, you will improve the resilience of your architecture by creating faults in production on running systems. With [Nihonzaru (the snow monkey)](https://en.wikipedia.org/wiki/Japanese_macaque) Otoroshi helps you to create faults on http requests/responses handled by Otoroshi. \n\n@@@ div { .centered-img }\n\n@@@\n\n## Settings\n\n@@@ div { .centered-img }\n\n@@@\n\nThe snow monkey lets you define a few settings to work properly:\n\n* **Include user facing apps.**: you want to create faults in production, but maybe you don't want your users to enjoy some nice snow monkey generated error pages. This switch lets you choose whether or not to include user facing apps (UI apps). Each service descriptor has a `User facing app switch` that will be used by the snow monkey.\n* **Dry run**: when dry run is enabled, outages will be registered and will generate events and alerts (in the otoroshi eventing system) but requests won't actually be impacted. It's a good way to prepare applications for the snow monkey's habits\n* **Outage strategy**: Either `AllServicesPerGroup` or `OneServicePerGroup`. It means that only one service per group or all services per group will have `n` outages (see next bullet point) during the snow monkey working period\n* **Outages per day**: during the snow monkey working period, each service per group or one service per group will have only `n` outages registered \n* **Working period**: the snow monkey only works during a working period. Here you can define when it starts and when it stops\n* **Outage duration**: here you can define the bounds for the random outage duration when an outage is created on a service\n* **Impacted groups**: here you can define a list of service groups impacted by the snow monkey. If none is specified, then all service groups will be impacted\n\n## Faults\n\nWith the snow monkey, you can generate four types of faults\n\n* **Large request fault**: Add trailing bytes at the end of the request body (if one)\n* **Large response fault**: Add trailing bytes at the end of the response body\n* **Latency injection fault**: Add random response latency between two bounds\n* **Bad response injection fault**: Create predefined responses with custom headers, body and status code\n\nEach fault lets you define a ratio for impacted requests. If you specify a ratio of `0.2`, then 20% of the requests for the impacted service will be impacted by this fault\n\n@@@ div { .centered-img }\n\n@@@\n\nThen you just have to start the snow monkey and enjoy the show ;)\n\n@@@ div { .centered-img }\n\n@@@\n\n## Current outages\n\nIn the last section of the snow monkey page, you can see current outages (per service), when they started, their duration, etc ...\n\n@@@ div { .centered-img }\n\n@@@"},{"name":"ssl.md","id":"/topics/ssl.md","url":"/topics/ssl.html","title":"SSL/TLS termination with Otoroshi","content":"# SSL/TLS termination with Otoroshi\n\nOtoroshi can be used as an SSL/TLS termination layer. It is enabled by default but you can customise the HTTPS port with the `https.port` config. and the env. var `HTTPS_PORT`. You can create or upload any certificate you want in the Otoroshi UI or using the API. Just go to `settings (cog icon) / SSL/TLS certificates`.\n\n@@@ note { title=\"Experimental Feature\" }\nDynamic SSL/TLS termination is an experimental feature. 
@@@ note { title=\"Experimental Feature\" }\nDynamic SSL/TLS termination is an experimental feature. It can change until it becomes an official feature\n@@@\n\n@@@ note { title=\"TLS 1.3 support\" }\nOtoroshi does support TLS 1.3 when used in combination with JDK 11\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\nHere you can add your own certificates, your own CA and even create self-signed certificates or certificates from CAs. You can enable auto renewal of those self-signed or generated certificates. Certificates have to be created with the certificate chain and the private key in PEM format, with no password on the private key.\n\nYou can remove the password of a key with the following command\n\n```sh\nopenssl rsa -in keywithpassword.key -out keywithoutpassword.key\n```\n\n@@@ div { .centered-img }\n\n@@@\n\n"},{"name":"1-groups.md","id":"/usage/1-groups.md","url":"/usage/1-groups.html","title":"Managing service groups","content":"# Managing service groups\n\nGo to `settings (cog icon) / All service groups` to access the list of service groups.\n\n@@@ div { .centered-img }\n\n@@@\n\nAnd you should see the list of existing `Service groups`.\n\n@@@ div { .centered-img }\n\n@@@\n\nBut what is a `Service group` anyway?\n\n## Otoroshi entities\n\nThere are 3 major entities at the core of Otoroshi:\n\n* **service groups**\n* service descriptors\n* api keys\n\n@@@ div { .centered-img }\n\n@@@\n\nA `service group` is just some kind of logical container for `service descriptors`. A `service group` also has some `api keys` assigned that will be used to access all the `service descriptors` contained in the `service group`.\n\n## Create a service group\n\nA `service group` is a really simple structure with an `id`, a name and a description. To create a new one, just click on the `Add item` button.\n\n@@@ div { .centered-img }\n\n@@@\n\nmodify the name and the description of the group\n\n@@@ div { .centered-img }\n\n@@@\n\nand click on `Create group`\n\n@@@ div { .centered-img }\n\n@@@\n\nThen, you should find your brand new `Service group` in the list of `Service groups`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Update a service group\n\nTo update a `Service group`, just click on the edit button of your `Service group`\n\n@@@ div { .centered-img }\n\n@@@\n\nUpdate the name and description of the `Service group` and click on the `Update group` button to validate the update.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Delete a service group\n\nTo delete a `Service group`, just click on the delete button of your `Service group`\n\n@@@ div { .centered-img }\n\n@@@\n\nFinally confirm the command\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"2-services.md","id":"/usage/2-services.md","url":"/usage/2-services.html","title":"Managing services","content":"# Managing services\n\nNow let's create services. A service, or `service descriptor`, lets you declare how to proxy a call from a domain name to another domain name (or multiple domain names). Let's say you have an API exposed on `http://192.168.0.42` and you want to expose it on `https://my.api.foo`. Otoroshi will proxy all calls to `https://my.api.foo` and forward them to `http://192.168.0.42`. While doing that, it will also log everything, control accesses, etc.\n\n
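For instance, assuming such a service is declared and Otoroshi listens on port 8080 (both the address and the port are placeholders for your own setup), you can check the routing with a simple call where the `Host` header selects the service\n\n```sh\ncurl http://127.0.0.1:8080/ -H 'Host: my.api.foo'\n```\n\n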
## Otoroshi entities\n\nThere are 3 major entities at the core of Otoroshi:\n\n* service groups\n* **service descriptors**\n* api keys\n\n@@@ div { .centered-img }\n\n@@@\n\nA `service descriptor` is contained in one or multiple `service group`s and is allowed to be accessed by all the `api key`s authorized on those `service group`s or apikeys directly authorized on the service itself.\n\n## Create a service descriptor\n\nTo create a `service descriptor`, click on `Add service` on the Otoroshi sidebar. Then you will be asked to choose a name for the service and the group of the service. You also have two buttons: one to create a new group and assign it to the service, and one to create a new group with a name based on the service name.\n\nYou will have a series of toggle buttons to\n\n* activate / deactivate a service\n* display a maintenance page for a service\n* display a construction page for a service\n* enable otoroshi custom response headers containing request id, latency, etc.\n* force https usage on the exposed service\n* enable the read only flag: this service will only accept the `HEAD`, `OPTIONS` and `GET` http verbs. You can also activate the same flag on `ApiKey`s to be more specific about who cannot use write http verbs.\n\nThen, you will be able to choose the URL that will be used to reach your new service on Otoroshi.\n\n@@@ div { .centered-img #service-flags }\n\n@@@\n\nIn the `service targets` section, you will be able to choose where the call will be forwarded. You can use multiple targets, in that case, Otoroshi will perform a round robin load balancing between the targets. If the `override Host header` toggle is on, the host header will be changed for the host of the target. For example, if you request `http://www.oto.tools/api` with a target to `http://www-internal.service.local/api`, the target will receive a `Host: www-internal.service.local` instead of `Host: www.oto.tools`.\n\nYou can also specify a target root: if the target root is `/foo/`, then any call to `https://my.api.foo` will call `http://192.168.0.42/foo/` and any call to `https://my.api.foo/bar` will call `http://192.168.0.42/foo/bar`.\n\nIn the URL patterns section, you will be able to choose, URL by URL, which is private and which is public. By default, all services are private and each call must provide an `api key`. But sometimes, you need to access a service publicly. In that case, you can provide patterns (regex) to make some or all URLs public (for example with the pattern `/.*`). You also have a `private pattern` field to restrict public patterns.\n\n@@@ div { .centered-img #targets }\n\n@@@\n\n### Otoroshi exchange protocol\n\n#### V1 challenge\n\nIf you enable secure communication for a given service with `V1 - simple values exchange` activated, you will have to add a filter on the target application that will take the `Otoroshi-State` header and return it in a header named `Otoroshi-State-Resp`.\n\n
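You can quickly verify that your target application implements the challenge correctly by calling it directly (the URL below reuses the hypothetical target host from the example above; the filter must echo the state value back)\n\n```sh\ncurl -s -D - -o /dev/null -H 'Otoroshi-State: test-123' http://www-internal.service.local/api | grep -i 'otoroshi-state-resp'\n```\n\n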
@@@ div { .centered-img }\n\n@@@\n\n#### V2 challenge\n\nIf you enable secure communication for a given service with `V2 - signed JWT token exchange` activated, you will have to add a filter on the target application that will take the `Otoroshi-State` header value containing a JWT token, verify its signature, then extract a claim named `state` and return a new JWT token in a header named `Otoroshi-State-Resp` with the `state` value in a claim named `state-resp`. By default, the signature algorithm is HMAC+SHA512 but you can choose your own. The sent and returned JWT tokens have a short TTL to avoid replay attacks, so you must validate the tokens' TTL.\n\n@@@ div { .centered-img }\n\n@@@\n\n#### Info. token\n\nOtoroshi also sends a JWT token in a header named `Otoroshi-Claim` that the target app can validate too.\n\nThe `Otoroshi-Claim` is a JWT token containing some information about the service that is called and the client if available. You can choose between a legacy version of the token and a new one that is clearer and more structured.\n\nBy default, the otoroshi jwt token is signed with the `app.claim.sharedKey` config property (or using the `$CLAIM_SHAREDKEY` env. variable) and uses the `HMAC512` signing algorithm. But it is possible to customize how the token is signed from the service descriptor page in the `Otoroshi exchange protocol` section.\n\n@@@ div { .centered-img }\n\n@@@\n\nusing another signing algo.\n\n@@@ div { .centered-img }\n\n@@@\n\nhere you can choose the signing algorithm and the secret/keys used. You can use syntax like `${env.MY_ENV_VAR}` or `${config.my.config.path}` to provide secret/keys values.\n\nFor example, for a service named `my-service` with a signing key `secret` and the `HMAC512` signing algorithm, the basic JWT token that will be sent should look like the following\n\n```\neyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJzdWIiOiItLSIsImF1ZCI6Im15LXNlcnZpY2UiLCJpc3MiOiJPdG9yb3NoaSIsImV4cCI6MTUyMTQ0OTkwNiwiaWF0IjoxNTIxNDQ5ODc2LCJqdGkiOiI3MTAyNWNjMTktMmFjNy00Yjk3LTljYzctMWM0ODEzYmM1OTI0In0.mRcfuFVFPLUV1FWHyL6rLHIJIu0KEpBkKQCk5xh-_cBt9cb6uD6enynDU0H1X2VpW5-bFxWCy4U4V78CbAQv4g\n```\n\nif you decode it, the payload will look something like\n\n```json\n{\n \"sub\": \"apikey_client_id\",\n \"aud\": \"my-service\",\n \"iss\": \"Otoroshi\",\n \"exp\": 1521449906,\n \"iat\": 1521449876,\n \"jti\": \"71025cc19-2ac7-4b97-9cc7-1c4813bc5924\"\n}\n```\n\nIf you want to validate the `Otoroshi-Claim` on the target app side to ensure that incoming requests only come from `Otoroshi`, you will have to write an HTTP filter to do the job. For instance, such a filter could look like the following (using playframework 2.6).\n\nScala\n: @@snip [filter.scala](../snippets/filter.scala)\n\nJava\n: @@snip [filter.java](../snippets/filter.java)\n\n### Canary mode\n\nOtoroshi provides a feature called `Canary mode`. It lets you define new targets for a service, and route a percentage of the traffic on those targets. It's a good way to test a new version of a service before public release. Since a given client must always be routed to the same version of the targets, Otoroshi will issue a special header and a cookie containing a `session id`. The header is named `Otoroshi-Canary-Id`.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service health check\n\nOtoroshi is also capable of checking the health of a service. You can define a URL that will be tested, and Otoroshi will ping that URL regularly. While doing so, Otoroshi will pass a numeric value in a header named `Otoroshi-Health-Check-Logic-Test`. You can respond with a header named `Otoroshi-Health-Check-Logic-Test-Result` that contains the value of `Otoroshi-Health-Check-Logic-Test` + 42 to indicate that the service is working properly.\n\n
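To check your target's implementation, you can send the header manually and verify the returned value (hypothetical target URL; for an input of `1000`, the expected result is `1042`)\n\n```sh\ncurl -s -D - -o /dev/null -H 'Otoroshi-Health-Check-Logic-Test: 1000' http://www-internal.service.local/health | grep -i 'otoroshi-health-check-logic-test-result'\n```\n\n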
@@@ div { .centered-img }\n\n@@@\n\n### Service circuit breaker\n\nIn Otoroshi, each service has its own client settings with a circuit breaker and some retry capabilities. In the `Client settings` section, you will be able to customize the client's behavior.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service settings\n\nYou can also provide some additional information about a given service, like an `Open API` descriptor, some metadata, a list of whitelisted/blacklisted ip addresses, etc.\n\n@@@ div { .centered-img #service-meta }\n\n@@@\n\n### HTTP Headers\n\nHere you can define headers that will be added to client requests or responses. \nYou will also be able to define headers to route the call only if the defined header is present on the request.\n\n@@@ div { .centered-img #service-meta }\n\n@@@\n\n### CORS\n\nIf you enable this section, CORS will be automatically supported on the current service. The pre-flight request will be handled by Otoroshi. You can customize every CORS header:\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service authentication\n\nSee @ref:[Authentication](./9-auth.md)\n\n### Custom error templates\n\nFinally, you can define custom error templates that will be displayed when Otoroshi fails to reach the target or when Otoroshi itself has an error. You can also define custom templates for maintenance and service pages.\n"},{"name":"3-apikeys.md","id":"/usage/3-apikeys.md","url":"/usage/3-apikeys.html","title":"Managing API keys","content":"# Managing API keys\n\nNow that you know how to create service groups and service descriptors, we will see how to create API keys.\n\n## Otoroshi entities\n\nThere are 3 major entities at the core of Otoroshi:\n\n* service groups\n* service descriptors\n* **api keys**\n\n@@@ div { .centered-img }\n\n@@@\n\nAn `API key` is linked to one or more `service group`s and `service descriptor`s to allow you to access any `service descriptor` linked to or contained in one of the linked `service group`s. You can, of course, create multiple `API key`s for given `service group`s/`service descriptor`s.\n\nIn the Otoroshi admin dashboard, we chose to access `API keys` from `service descriptors` only, but when you access `API keys` for a `service descriptor`, you actually access the `API keys` for the `service group` containing the `service descriptor`.\n\n`API keys` can be provided to Otoroshi through the following (an example follows the list):\n\n* `Otoroshi-Authorization: Basic $base64(client_id:client_secret)` header, in that case, the `Otoroshi-Authorization` header will **not** be sent to the target. `Basic ` is optional.\n* `Authorization: Basic $base64(client_id:client_secret)` header, in that case, the `Authorization` header **will** be sent to the target\n* `Otoroshi-Token: Bearer $jwt_token` where the JWT token has been signed with the `API key` client secret, in that case, the `Otoroshi-Token` header will **not** be sent to the target. `Bearer ` is optional.\n* `Authorization: Bearer $jwt_token` where the JWT token has been signed with the `API key` client secret, in that case, the `Authorization` header **will** be sent to the target\n* `Cookie: access_token=$jwt_token;` where the JWT token has been signed with the `API key` client secret, in that case, the cookie named `access_token` **will** be sent to the target\n* `Otoroshi-Client-Id` and `Otoroshi-Client-Secret` headers, in that case the `Otoroshi-Client-Id` and `Otoroshi-Client-Secret` headers will **not** be sent to the target.\n\n
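For example, with a hypothetical api key `clientId=abcdef` / `clientSecret=1234456789` (the values used in the JWT section below) and a service exposed on `http://my.api.foo`, the two header-based options look like this\n\n```sh\n# with the dedicated otoroshi headers (not forwarded to the target)\ncurl http://my.api.foo/api -H 'Otoroshi-Client-Id: abcdef' -H 'Otoroshi-Client-Secret: 1234456789'\n\n# or with a standard basic Authorization header (forwarded to the target)\ncurl -u abcdef:1234456789 http://my.api.foo/api\n```\n\n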
## List API keys for a service descriptor\n\nGo to a service descriptor using the `All services` quick link in the sidebar or the search box.\n\n@@@ div { .centered-img }\n\n@@@\n\nSelect a `service descriptor`.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on `API keys` in the sidebar\n\n@@@ div { .centered-img }\n\n@@@\n\nYou should see the list of API keys for that `service descriptor`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Create an API key for a service descriptor\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can add a name for your new API key, and you can also change the client id and client secret. You can also configure the throttling rate of the API key (calls per second), and the authorized number of calls per day and per month. You may also activate or de-activate the api key from that screen.\n\nInformation about current quota usage will be returned in response headers (see the example after this list).\n\n* `Otoroshi-Daily-Calls-Remaining` : authorized calls remaining for this day\n* `Otoroshi-Monthly-Calls-Remaining` : authorized calls remaining for this month\n* `Otoroshi-Proxy-Latency` : latency induced by Otoroshi\n* `Otoroshi-Upstream-Latency` : latency between Otoroshi and the target\n\n
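You can inspect those headers with a quick call (same hypothetical service and credentials as above)\n\n```sh\ncurl -s -D - -o /dev/null -u abcdef:1234456789 http://my.api.foo/api | grep -i 'otoroshi-'\n```\n\n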
@@@ div { .centered-img #quotas }\n\n@@@\n\n@@@ warning\nDaily and monthly quotas are based on the following rules:\n\n* daily quota is computed between 00h00:00.000 and 23h59:59.999\n* monthly quota is computed between the first day of the month at 00h00:00.000 and the last day of the month at 23h59:59.999\n@@@\n\n## Update an API key\n\nTo update an `API key`, just click on the edit button of your `API key`\n\n@@@ div { .centered-img }\n\n@@@\n\nUpdate the name, secret, state and quotas (if needed) of the `API key` and click on the `Update API key` button\n\n@@@ div { .centered-img }\n\n@@@\n\n## Delete an API key\n\nTo delete an `API key`, just click on the delete button of your `API key`\n\n@@@ div { .centered-img }\n\n@@@\n\nand confirm the command\n\n@@@ div { .centered-img }\n\n@@@\n\n### Read only\n\nWhen the read only flag is enabled on an `ApiKey`, this apikey can only use the allowed services with the `HEAD`, `OPTIONS` and `GET` http verbs.\n\n## Use a JWT token to pass an API key\n\nYou can use a JWT token to pass an API key to Otoroshi. \nYou can use the `Otoroshi-Authorization: Bearer $jwt_token` header, the `Authorization: Bearer $jwt_token` header or the `Cookie: access_token=$jwt_token;` cookie to pass the JWT token.\nYou have to create a JWT token with a signing algorithm that can be `HS256` or `HS512`. Then you have to provide an `iss` claim with the value of your API key `clientId` and sign the JWT token with your API key `clientSecret`.\n\nFor example, with an API key like `clientId=abcdef` and `clientSecret=1234456789`, your JWT token should look like\n\n```json\n{\n \"alg\": \"HS256\",\n \"typ\": \"JWT\"\n}\n{\n \"iss\":\"abcdef\",\n \"name\": \"John Doe\",\n \"admin\": true\n}\n```\n\nin that case, when you sign the token with the secret of the API key `1234456789`, the signature will be `_eancnYCD3makSSox2v2xErjNYkRtcX558QiJGCbino`, resulting in an encoded JWT token like\n\n```\neyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.\neyJpc3MiOiJhYmNkZWYiLCJuYW1lIjoiSm9obiBEb2UiLCJhZG1pbiI6dHJ1ZX0.\n_eancnYCD3makSSox2v2xErjNYkRtcX558QiJGCbino\n```\n
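As a sketch, you can rebuild such a token from the command line with standard `openssl` tooling (a plain base64url + HMAC recipe, no otoroshi-specific tooling involved)\n\n```sh\n# base64url encode stdin\nb64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }\nheader=$(printf '{\"alg\":\"HS256\",\"typ\":\"JWT\"}' | b64url)\npayload=$(printf '{\"iss\":\"abcdef\",\"name\":\"John Doe\",\"admin\":true}' | b64url)\n# sign header.payload with the apikey clientSecret\nsignature=$(printf '%s.%s' \"$header\" \"$payload\" | openssl dgst -sha256 -hmac '1234456789' -binary | b64url)\necho \"$header.$payload.$signature\"\n```\n\n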
"},{"name":"4-monitor.md","id":"/usage/4-monitor.md","url":"/usage/4-monitor.html","title":"Monitoring services","content":"# Monitoring services\n\nOnce you have declared services, you can monitor them with Otoroshi.\n\n@@@ warning\nYou have to use [Elastic](https://www.elastic.co) to enable analytics features in Otoroshi\n@@@\n\nOnce you have setup @ref:[Otoroshi events push to an elastic cluster](../integrations/analytics.md) (through webhooks, kafka, or the elastic integration) you can setup Otoroshi events read from an elastic cluster. Go to `settings (cog icon) / Danger Zone` and expand the `Analytics: Elastic cluster (write)` section.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Service healthcheck\n\nIf you have defined a health check URL in the service descriptor, you can access the health check page from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Service live stats\n\nYou can also monitor live stats like the total of served requests, average response time, average overhead, etc. The live stats page can be accessed from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Service analytics\n\nYou can also get some aggregated metrics. The analytics page can be accessed from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"5-sessions.md","id":"/usage/5-sessions.md","url":"/usage/5-sessions.html","title":"Managing sessions","content":"# Managing sessions\n\nWith Otoroshi you can manage the sessions of connected users and discard them whenever you want. Sessions last 24h by default and you can customize their duration with the `app.backoffice.session.exp` and `app.privateapps.session.exp` @ref:[config keys](../firstrun/configfile.md)\n\n## Admin. sessions\n\nTo see the current admin sessions on Otoroshi from the UI, go to `settings (cog icon) / Admins sessions`. Here you can discard individual sessions or all sessions at once using the `Discard session` and `Discard all sessions` buttons.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Private apps. sessions\n\nTo see the current private apps sessions on Otoroshi from the UI, go to `settings (cog icon) / Priv. apps sessions`. Here you can discard individual sessions or all sessions at once using the `Discard session` and `Discard all sessions` buttons.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"6-audit.md","id":"/usage/6-audit.md","url":"/usage/6-audit.html","title":"Auditing Otoroshi","content":"# Auditing Otoroshi\n\nWith Otoroshi, any admin action and any suspicious/alert action is recorded. These records are stored in Otoroshi's datastore (only the last n records, defined by the `app.events.maxSize` @ref:[config key](../firstrun/configfile.md)). All the records can be sent through the analytics mechanism (WebHook, Kafka, Elastic) for external and/or further usage. We recommend shipping those records away for security reasons.\n\n@@@ warning\nYou have to use [Elastic](https://www.elastic.co) to enable analytics features in Otoroshi. See the @ref:[Elastic setup section](../integrations/analytics.md)\n@@@\n\n## Audit trail\n\nTo see the last `app.events.maxSize` admin actions on Otoroshi from the UI, go to `settings (cog icon) / Audit log`.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Alerts\n\nTo see the last `app.events.maxSize` alerts on Otoroshi from the UI, go to `settings (cog icon) / Alerts log`.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also have a look at the payload sent to the Otoroshi server by clicking the `content` button\n\n@@@ div { .centered-img }\n\n@@@\n\n## List of possible alerts\n\n```\nMaxConcurrentRequestReachedAlert\nCircuitBreakerOpenedAlert\nCircuitBreakerClosedAlert\nSessionDiscardedAlert\nSessionsDiscardedAlert\nPanicModeAlert\nOtoroshiExportAlert\nU2FAdminDeletedAlert\nBlackListedBackOfficeUserAlert\nAdminLoggedInAlert\nAdminFirstLogin\nAdminLoggedOutAlert\nDbResetAlert\nDangerZoneAccessAlert\nGlobalConfigModification\nRevokedApiKeyUsageAlert\nServiceGroupCreatedAlert\nServiceGroupUpdatedAlert\nServiceGroupDeletedAlert\nServiceCreatedAlert\nServiceUpdatedAlert\nServiceDeletedAlert\nApiKeyCreatedAlert\nApiKeyUpdatedAlert\nApiKeyDeletedAlert\n```\n"},{"name":"7-metrics.md","id":"/usage/7-metrics.md","url":"/usage/7-metrics.html","title":"Otoroshi global metrics","content":"# Otoroshi global metrics\n\nOtoroshi provides some global metrics about service usage. Go to `settings (cog icon) / Global Analytics`\n\n@@@ warning\nYou have to use [Elastic](https://www.elastic.co) to enable analytics features in Otoroshi. See the @ref:[Elastic setup section](../integrations/analytics.md)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"8-importsexports.md","id":"/usage/8-importsexports.md","url":"/usage/8-importsexports.html","title":"Import and export","content":"# Import and export\n\nWith Otoroshi you can easily save the current state of the proxy and restore it later. Go to `settings (cog icon) / Danger Zone` and scroll to the bottom of the page.\n\n## Full export\n\nClick on the `Full export` button.\n\n@@@ div { .centered-img }\n\n@@@\n\nYour browser will start to download a JSON file containing the internal state of your Otoroshi cluster.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Full import\n\nIf you want to restore an export, go to `settings (cog icon) / Danger Zone` and scroll to the bottom of the page. Click on the `Recover from full export file` button\n\n@@@ div { .centered-img }\n\n@@@\n\nChoose the export file on your system.\n\n@@@ div { .centered-img }\n\n@@@\n\nClick on the `Flush datastore and import ...` button, confirm and you will be logged out.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"9-auth.md","id":"/usage/9-auth.md","url":"/usage/9-auth.html","title":"Authentication","content":"# Authentication\n\nYou can create auth. configurations in Otoroshi. 
Just go to `settings (cog icon) / Authentication configs`.\n\n## OAuth 2\n\nCreate a new `Generic oauth2 provider` config and customize the following information:\n\n```json\n{\n \"clientId\": \"xxxx\",\n \"clientSecret\": \"xxxx\",\n \"authorizeUrl\": \"http://yourOAuthServer/oauth/authorize\",\n \"tokenUrl\": \"http://yourOAuthServer/oauth/token\",\n \"userInfoUrl\": \"http://yourOAuthServer/userinfo\",\n \"loginUrl\": \"http://yourOAuthServer/login\",\n \"logoutUrl\": \"http://yourOAuthServer/logout?redirectQueryParamName=${redirect}\",\n \"accessTokenField\": \"access_token\",\n \"nameField\": \"name\",\n \"emailField\": \"email\",\n \"callbackUrl\": \"http://privateapps.oto.tools/privateapps/generic/callback\"\n}\n```\n\nIf used for BackOffice authentication, the callback url should be `http://otoroshi.oto.tools/backoffice/auth0/callback`.\n\nFor `logoutUrl`, `redirectQueryParamName` is a parameter whose name is specific to your OAuth2 provider (for example, in Auth0, this parameter is called `returnTo`, in Keycloak it is called `redirect_uri`).\n\nIf you are using a [Keycloak](https://www.keycloak.org/) server, you can configure it this way, assuming you are using the master realm and you created a new client with a client secret and callback urls set to `http://privateapps.oto.tools/*`.\n\n```json\n{\n \"clientId\": \"clientId\",\n \"clientSecret\": \"clientSecret\",\n \"authorizeUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/auth\",\n \"tokenUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/token\",\n \"userInfoUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/userinfo\",\n \"loginUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/auth\",\n \"logoutUrl\": \"http://keycloakHost/auth/realms/master/protocol/openid-connect/logout?redirect_uri=${redirect}\",\n \"accessTokenField\": \"access_token\",\n \"nameField\": \"name\",\n \"emailField\": \"email\",\n \"callbackUrl\": \"http://privateapps.oto.tools/privateapps/generic/callback\"\n}\n```\n\n## Ldap\n\nCreate a new `Ldap auth. provider` config and customize the following information:\n\n```json\n{\n \"serverUrl\": \"ldap://ldap.forumsys.com:389\",\n \"searchBase\": \"dc=example,dc=com\",\n \"groupFilter\": \"ou=chemists\",\n \"searchFilter\": \"(mail=${username})\",\n \"adminUsername\": \"cn=read-only-admin,dc=example,dc=com\",\n \"adminPassword\": \"password\",\n \"nameField\": \"cn\",\n \"emailField\": \"mail\"\n}\n```\n\n
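The LDAP config above points to a public test LDAP server, so you can verify the connection settings with `ldapsearch` (from the OpenLDAP client tools) before saving them; `einstein` is assumed here to be one of the test users hosted on that server\n\n```sh\nldapsearch -x -H ldap://ldap.forumsys.com:389 -D \"cn=read-only-admin,dc=example,dc=com\" -w password -b \"dc=example,dc=com\" \"(mail=einstein@ldap.forumsys.com)\"\n```\n\n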
## In Memory\n\nCreate a new `In memory auth. provider` config and then you will be able to create new users. To set a user's password, just click on the `Set password` button. It will generate a BCrypt hash of the password you typed.\n\n## Auth0\n\nCreate a new OAuth 2 config and add the following information:\n\n```json\n{\n \"clientId\": \"yourAuth0ClientId\",\n \"clientSecret\": \"yourAuth0ClientSecret\",\n \"authorizeUrl\": \"https://yourAuth0Domain/authorize\",\n \"tokenUrl\": \"https://yourAuth0Domain/oauth/token\",\n \"userInfoUrl\": \"https://yourAuth0Domain/userinfo\",\n \"loginUrl\": \"https://yourAuth0Domain/authorize\",\n \"logoutUrl\": \"https://yourAuth0Domain/v2/logout?returnTo=${redirect}\",\n \"accessTokenField\": \"access_token\",\n \"nameField\": \"name\",\n \"emailField\": \"email\",\n \"otoroshiDataField\": \"app_metadata | otoroshi_data\",\n \"callbackUrl\": \"http://privateapps.oto.tools/privateapps/generic/callback\"\n}\n```\n\nIf you enable the Otoroshi exchange protocol, the JWT will have the following fields (all optional)\n\n* `email`\n* `name`\n* `picture`\n* `user_id`\n* `given_name`\n* `family_name`\n* `gender`\n* `locale`\n* `nickname`\n\nIn Auth0, the metadata is a flat object placed in the `profile / http://yourdomain/app_metadata / otoroshi_data`. You might need to write an Auth0 rule to copy app metadata under `http://yourdomain/app_metadata` (the `http://yourdomain/app_metadata` value is set by the `app.appMeta` config property). The rule could be something like the following\n\n```js\nfunction (user, context, callback) {\n var namespace = 'http://yourdomain/';\n context.idToken[namespace + 'user_id'] = user.user_id;\n context.idToken[namespace + 'user_metadata'] = user.user_metadata;\n context.idToken[namespace + 'app_metadata'] = user.app_metadata;\n callback(null, user, context);\n}\n```"},{"name":"index.md","id":"/usage/index.md","url":"/usage/index.html","title":"Using Otoroshi","content":"# Using Otoroshi\n\nNow we will see how to use Otoroshi for the basic tasks that will be useful for your day-to-day work with Otoroshi.\n\n@@@ index\n\n* [create group](./1-groups.md)\n* [create service](./2-services.md)\n* [create API Keys](./3-apikeys.md)\n* [monitor service](./4-monitor.md)\n* [sessions management](./5-sessions.md)\n* [Audit trail and alerts](./6-audit.md)\n* [Global metrics](./7-metrics.md)\n* [Exports and imports](./8-importsexports.md)\n* [Authentication](./9-auth.md)\n\n@@@\n"}]
\ No newline at end of file