diff --git a/docs/devmanual/deploy/kubernetes.html b/docs/devmanual/deploy/kubernetes.html index b781f583a0..2a25e2965f 100644 --- a/docs/devmanual/deploy/kubernetes.html +++ b/docs/devmanual/deploy/kubernetes.html @@ -2172,7 +2172,7 @@

\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: maif/otoroshi:16.5.2\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create several secrets for each placeholder, mount them to the otoroshi container then use their file path as value\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nyou can use the same trick in the config. file itself\n\n### Note on bare metal kubernetes cluster installation\n\n@@@ note\nBare metal kubernetes clusters don't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples below.\n@@@\n\n@@@ warning\nWe don't recommand running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. 
Also, this additional layer of reverse proxy will increase call latencies.\n@@@\n\n### Common manifests\n\nThe following manifests are always needed. They create the otoroshi CRDs, tokens, roles, etc. The Redis deployment is not mandatory, it's just an example. You can use your own existing setup.\n\nrbac.yaml\n: @@snip [rbac.yaml](../snippets/kubernetes/kustomize/base/rbac.yaml) \n\ncrds.yaml\n: @@snip [crds.yaml](../snippets/kubernetes/kustomize/base/crds.yaml) \n\nredis.yaml\n: @@snip [redis.yaml](../snippets/kubernetes/kustomize/base/redis.yaml) \n\n\n### Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type `LoadBalancer` to expose otoroshi to the rest of the world. You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple/dns.example) \n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
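To make the loadbalancer requirement concrete, here is a minimal sketch of a TCP passthrough sitting in front of the node ports (node IPs and `nodePort` values are illustrative assumptions; the real configuration is provided in the `haproxy.example` snippet below):

```
# illustrative haproxy TCP passthrough (node IPs and nodePorts are assumptions)
frontend otoroshi-https
  bind *:443
  mode tcp
  default_backend otoroshi-nodes

backend otoroshi-nodes
  mode tcp
  server node1 10.0.0.1:31443 check
  server node2 10.0.0.2:31443 check
```

Running in TCP mode keeps the TLS stream opaque to the loadbalancer, so otoroshi still terminates TLS/mTLS itself.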
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/dns.example) \n\n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have one otoroshi instance on each kubernetes node (with the `otoroshi-kind: instance` label) with redis persistence. The otoroshi instances are exposed as `hostPort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/dns.example) \n\n### Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas of an otoroshi leader connected to a redis instance and 2 replicas of an otoroshi worker connected to the leader. We use a service of type `LoadBalancer` to expose otoroshi leader/worker to the rest of the world. 
You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster\n\nHere we have 2 replicas of the otoroshi leader connected to the same redis instance and 2 replicas of the otoroshi worker. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster using DaemonSet\n\nHere we have 1 otoroshi leader instance on each kubernetes node (with the `otoroshi-kind: leader` label) connected to the same redis instance and 1 otoroshi worker instance on each kubernetes node (with the `otoroshi-kind: worker` label). The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
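To recall how the leader and worker roles differ, workers point at the leader through their configuration; a sketch of the worker-side HOCON settings (key names follow otoroshi's clustering configuration, the URL value is an assumption matching the examples here):

```
# worker-side config sketch: values are illustrative, not taken from the snippets
otoroshi.cluster.mode = "worker"
otoroshi.cluster.leader.urls = ["https://otoroshi-leader-service:8443"]
```

The leader side uses `otoroshi.cluster.mode = "leader"` and is the only tier that talks to redis; workers fetch their state from the leader.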
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": true, // sync ingresses\n \"crds\": false, // sync crds\n 
\"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities. can be \"default\" to use otoroshi default templates\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n \"data-exporter\": {},\n \"routes\": {},\n \"route-compositions\": {},\n \"backends\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nNow you can deploy your first service ;)\n\n### Deploy an ingress route\n\nnow let's say you want to deploy an http service and route to the outside world through otoroshi\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: kennethreitz/httpbin\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 80\n name: \"http\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8080\n targetPort: http\n name: http\n selector:\n run: 
http-app-deployment\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nonce deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n### Support for Ingress Classes\n\nSince Kubernetes 1.18, you can use the `IngressClass` kind of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest `kind`. To use it, configure the Ingress job to match your controller\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"otoroshi.io/ingress-controller\"],\n ...\n }\n}\n```\n\nthen you have to deploy an `IngressClass` to declare Otoroshi as an ingress controller\n\n```yaml\napiVersion: \"networking.k8s.io/v1beta1\"\nkind: \"IngressClass\"\nmetadata:\n name: \"otoroshi-ingress-controller\"\nspec:\n controller: \"otoroshi.io/ingress-controller\"\n parameters:\n apiGroup: \"proxy.otoroshi.io/v1alpha\"\n kind: \"IngressParameters\"\n name: \"otoroshi-ingress-controller\"\n```\n\nand use it in your `Ingress`\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\nspec:\n ingressClassName: otoroshi-ingress-controller\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\n### Use multiple ingress controllers\n\nIt is of course 
possible to use multiple ingress controllers at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation `kubernetes.io/ingress.class`. By default, otoroshi reacts to the class `otoroshi`, but you can make it the default ingress controller with the following config\n\n```json\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"*\"],\n ...\n }\n}\n```\n\n### Supported annotations\n\nif you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customisation, just go to the CRDs part. The following annotations are supported:\n\n- `ingress.otoroshi.io/groups`\n- `ingress.otoroshi.io/group`\n- `ingress.otoroshi.io/groupId`\n- `ingress.otoroshi.io/name`\n- `ingress.otoroshi.io/targetsLoadBalancing`\n- `ingress.otoroshi.io/stripPath`\n- `ingress.otoroshi.io/enabled`\n- `ingress.otoroshi.io/userFacing`\n- `ingress.otoroshi.io/privateApp`\n- `ingress.otoroshi.io/forceHttps`\n- `ingress.otoroshi.io/maintenanceMode`\n- `ingress.otoroshi.io/buildMode`\n- `ingress.otoroshi.io/strictlyPrivate`\n- `ingress.otoroshi.io/sendOtoroshiHeadersBack`\n- `ingress.otoroshi.io/readOnly`\n- `ingress.otoroshi.io/xForwardedHeaders`\n- `ingress.otoroshi.io/overrideHost`\n- `ingress.otoroshi.io/allowHttp10`\n- `ingress.otoroshi.io/logAnalyticsOnServer`\n- `ingress.otoroshi.io/useAkkaHttpClient`\n- `ingress.otoroshi.io/useNewWSClient`\n- `ingress.otoroshi.io/tcpUdpTunneling`\n- `ingress.otoroshi.io/detectApiKeySooner`\n- `ingress.otoroshi.io/letsEncrypt`\n- `ingress.otoroshi.io/publicPatterns`\n- `ingress.otoroshi.io/privatePatterns`\n- `ingress.otoroshi.io/additionalHeaders`\n- `ingress.otoroshi.io/additionalHeadersOut`\n- `ingress.otoroshi.io/missingOnlyHeadersIn`\n- `ingress.otoroshi.io/missingOnlyHeadersOut`\n- `ingress.otoroshi.io/removeHeadersIn`\n- `ingress.otoroshi.io/removeHeadersOut`\n- `ingress.otoroshi.io/headersVerification`\n- 
`ingress.otoroshi.io/matchingHeaders`\n- `ingress.otoroshi.io/ipFiltering.whitelist`\n- `ingress.otoroshi.io/ipFiltering.blacklist`\n- `ingress.otoroshi.io/api.exposeApi`\n- `ingress.otoroshi.io/api.openApiDescriptorUrl`\n- `ingress.otoroshi.io/healthCheck.enabled`\n- `ingress.otoroshi.io/healthCheck.url`\n- `ingress.otoroshi.io/jwtVerifier.ids`\n- `ingress.otoroshi.io/jwtVerifier.enabled`\n- `ingress.otoroshi.io/jwtVerifier.excludedPatterns`\n- `ingress.otoroshi.io/authConfigRef`\n- `ingress.otoroshi.io/redirection.enabled`\n- `ingress.otoroshi.io/redirection.code`\n- `ingress.otoroshi.io/redirection.to`\n- `ingress.otoroshi.io/clientValidatorRef`\n- `ingress.otoroshi.io/transformerRefs`\n- `ingress.otoroshi.io/transformerConfig`\n- `ingress.otoroshi.io/accessValidator.enabled`\n- `ingress.otoroshi.io/accessValidator.excludedPatterns`\n- `ingress.otoroshi.io/accessValidator.refs`\n- `ingress.otoroshi.io/accessValidator.config`\n- `ingress.otoroshi.io/preRouting.enabled`\n- `ingress.otoroshi.io/preRouting.excludedPatterns`\n- `ingress.otoroshi.io/preRouting.refs`\n- `ingress.otoroshi.io/preRouting.config`\n- `ingress.otoroshi.io/issueCert`\n- `ingress.otoroshi.io/issueCertCA`\n- `ingress.otoroshi.io/gzip.enabled`\n- `ingress.otoroshi.io/gzip.excludedPatterns`\n- `ingress.otoroshi.io/gzip.whiteList`\n- `ingress.otoroshi.io/gzip.blackList`\n- `ingress.otoroshi.io/gzip.bufferSize`\n- `ingress.otoroshi.io/gzip.chunkedThreshold`\n- `ingress.otoroshi.io/gzip.compressionLevel`\n- `ingress.otoroshi.io/cors.enabled`\n- `ingress.otoroshi.io/cors.allowOrigin`\n- `ingress.otoroshi.io/cors.exposeHeaders`\n- `ingress.otoroshi.io/cors.allowHeaders`\n- `ingress.otoroshi.io/cors.allowMethods`\n- `ingress.otoroshi.io/cors.excludedPatterns`\n- `ingress.otoroshi.io/cors.maxAge`\n- `ingress.otoroshi.io/cors.allowCredentials`\n- `ingress.otoroshi.io/clientConfig.useCircuitBreaker`\n- `ingress.otoroshi.io/clientConfig.retries`\n- `ingress.otoroshi.io/clientConfig.maxErrors`\n- 
`ingress.otoroshi.io/clientConfig.retryInitialDelay`\n- `ingress.otoroshi.io/clientConfig.backoffFactor`\n- `ingress.otoroshi.io/clientConfig.connectionTimeout`\n- `ingress.otoroshi.io/clientConfig.idleTimeout`\n- `ingress.otoroshi.io/clientConfig.callAndStreamTimeout`\n- `ingress.otoroshi.io/clientConfig.callTimeout`\n- `ingress.otoroshi.io/clientConfig.globalTimeout`\n- `ingress.otoroshi.io/clientConfig.sampleInterval`\n- `ingress.otoroshi.io/enforceSecureCommunication`\n- `ingress.otoroshi.io/sendInfoToken`\n- `ingress.otoroshi.io/sendStateChallenge`\n- `ingress.otoroshi.io/secComHeaders.claimRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateResponseName`\n- `ingress.otoroshi.io/secComTtl`\n- `ingress.otoroshi.io/secComVersion`\n- `ingress.otoroshi.io/secComInfoTokenVersion`\n- `ingress.otoroshi.io/secComExcludedPatterns`\n- `ingress.otoroshi.io/secComSettings.size`\n- `ingress.otoroshi.io/secComSettings.secret`\n- `ingress.otoroshi.io/secComSettings.base64`\n- `ingress.otoroshi.io/secComUseSameAlgo`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.size`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64`\n- `ingress.otoroshi.io/secComAlgoInfoToken.size`\n- `ingress.otoroshi.io/secComAlgoInfoToken.secret`\n- `ingress.otoroshi.io/secComAlgoInfoToken.base64`\n- `ingress.otoroshi.io/securityExcludedPatterns`\n\nfor more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n\nwith the previous example, the ingress does not define any apikey, so the route is public. 
If you want to enable apikeys on it, you can deploy the following descriptor\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\n ingress.otoroshi.io/group: http-app-group\n ingress.otoroshi.io/forceHttps: 'true'\n ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'\n ingress.otoroshi.io/overrideHost: 'true'\n ingress.otoroshi.io/allowHttp10: 'false'\n ingress.otoroshi.io/publicPatterns: ''\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nnow you can use an existing apikey in the `http-app-group` to access your app\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1\n```\n\n## Use Otoroshi CRDs for a better/full integration\n\nOtoroshi provides some Custom Resource Definitions for kubernetes in order to manage Otoroshi related entities directly from kubernetes\n\n- `service-groups`\n- `service-descriptors`\n- `apikeys`\n- `certificates`\n- `global-configs`\n- `jwt-verifiers`\n- `auth-modules`\n- `scripts`\n- `tcp-services`\n- `data-exporters`\n- `admins`\n- `teams`\n- `organizations`\n\nusing CRDs, you will be able to deploy and manage those entities from kubectl or the kubernetes api like\n\n```sh\nsudo kubectl get apikeys --all-namespaces\nsudo kubectl get service-descriptors --all-namespaces\ncurl -X GET \\\n -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \\\n -H 'Accept: application/json' -k \\\n https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1/apikeys | jq\n```\n\nYou can see these as better `Ingress` resources. Just as any `Ingress` resource can define which controller it uses (using the `kubernetes.io/ingress.class` annotation), you can choose another kind of resource instead of `Ingress`. 
With Otoroshi CRDs you can even define resources like `Certificate`, `Apikey`, `AuthModules`, `JwtVerifier`, etc. It will help you to use all the power of Otoroshi while using the deployment model of kubernetes.\n \n@@@ warning\nwhen using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the deployed descriptors will override the one in the Otoroshi datastore each time it is synced. So be careful if you use the Otoroshi UI or the API, some configuration changes may be overridden by the CRDs sync job.\n@@@\n\n### Resources examples\n\ngroup.yaml\n: @@snip [group.yaml](../snippets/crds/group.yaml) \n\napikey.yaml\n: @@snip [apikey.yaml](../snippets/crds/apikey.yaml) \n\nservice-descriptor.yaml\n: @@snip [service.yaml](../snippets/crds/service-descriptor.yaml) \n\ncertificate.yaml\n: @@snip [cert.yaml](../snippets/crds/certificate.yaml) \n\njwt.yaml\n: @@snip [jwt.yaml](../snippets/crds/jwt.yaml) \n\nauth.yaml\n: @@snip [auth.yaml](../snippets/crds/auth.yaml) \n\norganization.yaml\n: @@snip [orga.yaml](../snippets/crds/organization.yaml) \n\nteam.yaml\n: @@snip [team.yaml](../snippets/crds/team.yaml) \n\n\n### Configuration\n\nTo configure it, just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Otoroshi CRDs Controller`. 
Then add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"crds\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": false, // sync ingresses\n \"crds\": true, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities. 
can be \"default\" to use otoroshi default templates\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n \"data-exporter\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nyou can find a more complete example of the configuration object [here](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/plugins/jobs/kubernetes/config.scala#L134-L163)\n\n### Note about `apikeys` and `certificates` resources\n\nApikeys and Certificates are a little bit different than the other resources. They have ability to be defined without their secret part, but with an export setting so otoroshi will generate the secret parts and export the apikey or the certificate to kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)\n\nIn those resources you can define \n\n```yaml\nexportSecret: true \nsecretName: the-secret-name\n```\n\nand omit `clientSecret` for apikey or `publicKey`, `privateKey` for certificates. 
For certificates, you will have to provide a `csr` in order to generate them\n\n```yaml\ncsr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n - httpapps.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n```\n\nwhen apikeys are exported as kubernetes secrets, they will have the type `otoroshi.io/apikey-secret` with values `clientId` and `clientSecret`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: apikey-1\ntype: otoroshi.io/apikey-secret\ndata:\n clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n```\n\nwhen certificates are exported as kubernetes secrets, they will have the type `kubernetes.io/tls` with the standard values `tls.crt` (the full cert chain) and `tls.key` (the private key). 
For more convenience, they will also have a `cert.crt` value containing the actual certificate without the ca chain and `ca-chain.crt` containing the ca chain without the certificate.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: certificate-1\ntype: kubernetes.io/tls\ndata:\n tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA== \n```\n\n## Full CRD example\n\nThen you can deploy the previous example with a better configuration level, using mtls, apikeys, etc.\n\nLet's say the app looks like:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\n// here we read the apikey to access http-app-2 from files mounted from secrets\nconst clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')\nconst clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')\n\nconst backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')\nconst backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')\nconst backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')\n\nconst clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')\nconst clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')\nconst clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')\n\nfunction callApi2() {\n return new Promise((success, failure) => {\n const options = { \n // using the implicit internal name (*.global.otoroshi.mesh) 
of the other service descriptor passing through otoroshi\n hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh', \n port: 443, \n path: '/', \n method: 'GET',\n headers: {\n 'Accept': 'application/json',\n 'Otoroshi-Client-Id': clientId,\n 'Otoroshi-Client-Secret': clientSecret,\n },\n cert: clientCert,\n key: clientKey,\n ca: clientCa\n }; \n let data = '';\n const req = https.request(options, (res) => { \n res.on('data', (d) => { \n data = data + d.toString('utf8');\n }); \n res.on('end', () => { \n success({ body: JSON.parse(data), res });\n }); \n res.on('error', (e) => { \n failure(e);\n }); \n }); \n req.end();\n})\n}\n\nconst options = { \n key: backendKey, \n cert: backendCert, \n ca: backendCa, \n // we want mtls behavior\n requestCert: true, \n rejectUnauthorized: true\n}; \nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {'Content-Type': 'application/json'});\n callApi2().then(resp => {\n res.write(JSON.stringify({ message: `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body })); \n res.end();\n });\n}).listen(443);\n```\n\nthen, the descriptors will be:\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: foo/http-app\n imagePullPolicy: IfNotPresent\n name: http-app\n ports:\n - containerPort: 443\n name: \"https\"\n volumeMounts:\n - name: apikey-volume\n # here you will be able to read apikey from files \n # - /var/run/secrets/kubernetes.io/apikeys/clientId\n # - /var/run/secrets/kubernetes.io/apikeys/clientSecret\n mountPath: \"/var/run/secrets/kubernetes.io/apikeys\"\n readOnly: true\n - name: backend-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/backend/tls.key\n 
mountPath: \"/var/run/secrets/kubernetes.io/certs/backend\"\n readOnly: true\n - name: client-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/client/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/client/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/client\"\n readOnly: true\n volumes:\n - name: apikey-volume\n secret:\n # here we reference the secret name from apikey http-app-2-apikey-1\n secretName: secret-2\n - name: backend-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-backend\n secretName: http-app-certificate-backend-secret\n - name: client-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-client\n secretName: http-app-certificate-client-secret\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8443\n targetPort: https\n name: https\n selector:\n run: http-app-deployment\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ServiceGroup\nmetadata:\n name: http-app-group\n annotations:\n otoroshi.io/id: http-app-group\nspec:\n description: a group to hold services about the http-app\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-apikey-1\n# this apikey can be used to access the app\nspec:\n # a secret name secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-1\n authorizedEntities: \n - group_http-app-group\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-1\n# this apikey can be used to access another app in a different group\nspec:\n # a secret name secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-2\n authorizedEntities: \n - group_http-app-2-group\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-frontend\nspec:\n 
description: certificate for the http-app on otoroshi frontend\n autoRenew: true\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-backend\nspec:\n description: certificate for the http-app deployed on pods\n autoRenew: true\n # a secret named http-app-certificate-backend-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-backend-secret\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - http-app-service \n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-back, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-client\nspec:\n description: certificate for the http-app\n autoRenew: true\n secretName: http-app-certificate-client-secret\n csr:\n issuer: CN=Otoroshi Root\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-client, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ServiceDescriptor\nmetadata:\n name: http-app-service-descriptor\nspec:\n description: the service descriptor for the http app\n groups: \n - http-app-group\n forceHttps: true\n hosts:\n - httpapp.foo.bar # hostname exposed outside of the kubernetes cluster\n # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster \n matchingRoot: /\n targets:\n - url: https://http-app-service:8443\n # alternatively, you can use serviceName and servicePort to use pods ip addresses\n # serviceName: 
http-app-service\n # servicePort: https\n mtlsConfig:\n # use mtls to contact the backend\n mtls: true\n certs: \n # reference the DN for the client cert\n - UID=httpapp-client, O=OtoroshiApps\n trustedCerts: \n # reference the DN for the CA cert \n - CN=Otoroshi Root\n sendOtoroshiHeadersBack: true\n xForwardedHeaders: true\n overrideHost: true\n allowHttp10: false\n publicPatterns:\n - /health\n additionalHeaders:\n x-foo: bar\n# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc ... for more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n```\n\nnow with this descriptor deployed, you can access your app with a command like \n\n```sh\nCLIENT_ID=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientId}\" | base64 --decode`\nCLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientSecret}\" | base64 --decode`\ncurl -X GET https://httpapp.foo.bar/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n## Expose Otoroshi to outside world\n\nIf you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: `LoadBalancer`). You'll need to declare in your DNS settings any name that can be routed by otoroshi going to the loadbalancer endpoint (CNAME or ip addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they will provide external loadbalancers out of the box. However, if you use a bare metal kubernetes cluster, it doesn't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. 
You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples in the installation section.\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@ \n\n## Access a service from inside the k8s cluster\n\n### Using host header overriding\n\nYou can access any service referenced in otoroshi through otoroshi from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain like:\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using dedicated services\n\nit's also possible to define services that target the otoroshi deployment (or the otoroshi workers deployment) and use them as valid hosts in otoroshi services \n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-awesome-service\nspec:\n selector:\n # run: otoroshi-deployment\n # or in cluster mode\n run: otoroshi-worker-deployment\n ports:\n - port: 8080\n name: \"http\"\n targetPort: \"http\"\n - port: 8443\n name: \"https\"\n targetPort: \"https\"\n```\n\nand access it like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-namespace.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using coredns integration\n\nYou can also enable the coredns integration to simplify the flow. 
You can use the following keys in the plugin config:\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"coreDnsIntegration\": true, // enable coredns integration for intra cluster calls\n \"kubeSystemNamespace\": \"kube-system\", // the namespace where coredns is deployed\n \"corednsConfigMap\": \"coredns\", // the name of the coredns configmap\n \"otoroshiServiceName\": \"otoroshi-service\", // the name of the otoroshi service, could be otoroshi-workers-service\n \"otoroshiNamespace\": \"otoroshi\", // the namespace where otoroshi is deployed\n \"clusterDomain\": \"cluster.local\", // the domain for cluster services\n ...\n }\n}\n```\n\notoroshi will patch the coredns config at startup, then you can call your services like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\nBy default, all services created from CRD service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh` or `${service-name}.${service-namespace}.svc.otoroshi.local`\n\n### Using coredns with manual patching\n\nyou can also patch the coredns config manually\n\n```sh\nkubectl edit configmaps coredns -n kube-system # or your own custom config map\n```\n\nand change the `Corefile` data to add the following snippet at the end of the file\n\n```yaml\notoroshi.mesh:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n rewrite name regex (.*)\\.otoroshi\\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n```\n\nyou can also define a simpler rewrite if it suits your use case better\n\n```\nrewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n```\n\ndo not hesitate to change `otoroshi-worker-service.otoroshi` according to your own setup. If otoroshi is not in cluster mode, change it to `otoroshi-service.otoroshi`. If otoroshi is not deployed in the `otoroshi` namespace, change it to `otoroshi-service.the-namespace`, etc.\n\nBy default, all services created from CRD service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh`\n\nthen you can call your service like \n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\n\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using old kube-dns system\n\nif you're stuck with an old version of kubernetes that uses kube-dns, which is not supported by otoroshi, you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system. 
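\n\nfor reference, the stubDomain declaration itself is done through the `kube-dns` ConfigMap. Here is a minimal sketch, assuming your dedicated coredns deployment is exposed by a service whose ClusterIP is `10.3.0.200` (a placeholder, replace it with the ClusterIP of your own coredns service)\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: kube-dns\n namespace: kube-system\ndata:\n # forward every *.otoroshi.mesh query to the dedicated coredns instance\n stubDomains: |\n {\"otoroshi.mesh\": [\"10.3.0.200\"]}\n```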
\n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the kube-dns integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"kubeDnsOperatorIntegration\": true, // enable kube-dns integration for intra cluster calls\n \"kubeDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"kubeDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"kubeDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\n### Using Openshift DNS operator\n\nThe Openshift DNS operator does not allow much customization of the DNS configuration, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the Openshift DNS operator integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"openshiftDnsOperatorIntegration\": true, // enable openshift dns operator integration for intra cluster calls\n \"openshiftDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"openshiftDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"openshiftDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\ndon't forget to update the otoroshi `ClusterRole`\n\n```yaml\n- apiGroups:\n - operator.openshift.io\n resources:\n - dnses\n verbs:\n - get\n - list\n - watch\n - update\n```\n\n## CRD validation in kubectl\n\nIn order to get CRD validation before manifest deployments right inside kubectl, you can deploy a validation webhook that will do the trick. 
Also check that you have the `otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator` request sink enabled.\n\nvalidation-webhook.yaml\n: @@snip [validation-webhook.yaml](../snippets/kubernetes/kustomize/base/validation-webhook.yaml)\n\n## Easier integration with otoroshi-sidecar\n\nOtoroshi can help you to easily use existing services without modifications while getting all the perks of otoroshi like apikeys, mTLS, exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pod of your deployment that will handle calls coming from and going to otoroshi. To enable otoroshi-sidecar, you need to deploy the following admission webhook. Also check that you have the `otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector` request sink enabled.\n\nsidecar-webhook.yaml\n: @@snip [sidecar-webhook.yaml](../snippets/kubernetes/kustomize/base/sidecar-webhook.yaml)\n\nthen it's quite easy to add the sidecar, just add the following label to your pod `otoroshi.io/sidecar: inject` and some annotations to tell otoroshi what certificates and apikeys to use.\n\n```yaml\nannotations:\n otoroshi.io/sidecar-apikey: backend-apikey\n otoroshi.io/sidecar-backend-cert: backend-cert\n otoroshi.io/sidecar-client-cert: oto-client-cert\n otoroshi.io/token-secret: secret\n otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps\n```\n\nnow you can just call your otoroshi handled apis from inside your pod like `curl http://my-service.namespace.otoroshi.mesh/api` without passing any apikey or client certificate and the sidecar will handle everything for you. 
Same thing for calls from otoroshi to your pod: everything will be done in an mTLS fashion with apikeys and the otoroshi exchange protocol\n\nhere is a full example\n\nsidecar.yaml\n: @@snip [sidecar.yaml](../snippets/kubernetes/kustomize/base/sidecar.yaml)\n\n@@@ warning\nPlease avoid using port `80` for your pod as it's the default port to access otoroshi from your pod and the call will be redirected to the sidecar via an iptables rule\n@@@\n\n## Daikoku integration\n\nIt is possible to easily integrate daikoku generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and set the integration mode to `Automatic`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen when a user subscribes for an apikey, they will only see an integration token\n\n@@@ div { .centered-img }\n\n@@@\n\nthen just create an ApiKey manifest with this token and you're good to go \n\n```yaml\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-3\nspec:\n exportSecret: true \n secretName: secret-3\n daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy\n```\n\n" + "content": "# Kubernetes\n\nStarting at version 1.5.0, Otoroshi provides native Kubernetes support. Multiple otoroshi jobs (that are actually kubernetes controllers) are provided in order to\n\n- sync kubernetes secrets of type `kubernetes.io/tls` to otoroshi certificates\n- act as a standard ingress controller (supporting `Ingress` objects)\n- provide Custom Resource Definitions (CRDs) to manage Otoroshi entities from Kubernetes and act as an ingress controller with its own resources\n\n## Installing otoroshi on your kubernetes cluster\n\n@@@ warning\nYou need to have cluster admin privileges to install otoroshi and its service account, role mapping and CRDs on a kubernetes cluster. 
We also advise you to create a dedicated namespace (you can name it `otoroshi` for example) to install otoroshi\n@@@\n\nIf you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.\n\nYou can also create a `kustomization.yaml` file with a remote base\n\n```yaml\nbases:\n- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v16.5.0-dev\n```\n\nThen deploy it with `kubectl apply -k ./overlays/myoverlay`. \n\nYou can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster\n\n```sh\nhelm repo add otoroshi https://maif.github.io/otoroshi/helm\nhelm install my-otoroshi otoroshi/otoroshi\n```\n\nBelow, you will find examples of deployments. Do not hesitate to adapt them to your needs. Those descriptors have value placeholders that you will need to replace with actual values like \n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: ${domain}\n```\n\nyou will have to edit it to make it look like\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'apis.my.domain'\n```\n\nif you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of otoroshi\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: 
maif/otoroshi:16.5.0-dev\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create several secrets for each placeholder, mount them to the otoroshi container then use their file path as value\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nyou can use the same trick in the config file itself\n\n### Note on bare metal kubernetes cluster installation\n\n@@@ note\nBare metal kubernetes clusters don't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples below.\n@@@\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@\n\n### Common manifests\n\nthe following manifests are always needed. They create otoroshi CRDs, tokens, roles, etc. Redis deployment is not mandatory, it's just an example. 
You can use your own existing setup.\n\nrbac.yaml\n: @@snip [rbac.yaml](../snippets/kubernetes/kustomize/base/rbac.yaml) \n\ncrds.yaml\n: @@snip [crds.yaml](../snippets/kubernetes/kustomize/base/crds.yaml) \n\nredis.yaml\n: @@snip [redis.yaml](../snippets/kubernetes/kustomize/base/redis.yaml) \n\n\n### Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type `LoadBalancer` to expose otoroshi to the rest of the world. You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple/dns.example) \n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/dns.example) \n\n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have one otoroshi instance on each kubernetes node (with the `otoroshi-kind: instance` label) with redis persistence. The otoroshi instances are exposed as `hostPort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/dns.example) \n\n### Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas of an otoroshi leader connected to a redis instance and 2 replicas of an otoroshi worker connected to the leader. We use a service of type `LoadBalancer` to expose otoroshi leader/worker to the rest of the world. 
You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster\n\nHere we have 2 replicas of the otoroshi leader connected to the same redis instance and 2 replicas of the otoroshi worker. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster using DaemonSet\n\nHere we have 1 otoroshi leader instance on each kubernetes node (with the `otoroshi-kind: leader` label) connected to the same redis instance and 1 otoroshi worker instance on each kubernetes node (with the `otoroshi-kind: worker` label). The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": true, // sync ingresses\n \"crds\": false, // sync crds\n 
\"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities. can be \"default\" to use otoroshi default templates\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n \"routes\": {},\n \"route-compositions\": {},\n \"backends\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nNow you can deploy your first service ;)\n\n### Deploy an ingress route\n\nnow let's say you want to deploy an http service and route it to the outside world through otoroshi\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: kennethreitz/httpbin\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 80\n name: \"http\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8080\n targetPort: http\n name: http\n selector:\n run: 
http-app-deployment\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nonce deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n### Support for Ingress Classes\n\nSince Kubernetes 1.18, you can use the `IngressClass` type of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest `kind`. To use it, configure the Ingress job to match your controller\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"otoroshi.io/ingress-controller\"],\n ...\n }\n}\n```\n\nthen you have to deploy an `IngressClass` to declare Otoroshi as an ingress controller\n\n```yaml\napiVersion: \"networking.k8s.io/v1beta1\"\nkind: \"IngressClass\"\nmetadata:\n name: \"otoroshi-ingress-controller\"\nspec:\n controller: \"otoroshi.io/ingress-controller\"\n parameters:\n apiGroup: \"proxy.otoroshi.io\"\n kind: \"IngressParameters\"\n name: \"otoroshi-ingress-controller\"\n```\n\nand use it in your `Ingress`\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\nspec:\n ingressClassName: otoroshi-ingress-controller\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\n### Use multiple ingress controllers\n\nIt is of course 
possible to use multiple ingress controllers at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation `kubernetes.io/ingress.class`. By default, otoroshi reacts to the class `otoroshi`, but you can make it the default ingress controller with the following config\n\n```json\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"*\"],\n ...\n }\n}\n```\n\n### Supported annotations\n\nif you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customisation, just go to the CRDs part. The following annotations are supported:\n\n- `ingress.otoroshi.io/groups`\n- `ingress.otoroshi.io/group`\n- `ingress.otoroshi.io/groupId`\n- `ingress.otoroshi.io/name`\n- `ingress.otoroshi.io/targetsLoadBalancing`\n- `ingress.otoroshi.io/stripPath`\n- `ingress.otoroshi.io/enabled`\n- `ingress.otoroshi.io/userFacing`\n- `ingress.otoroshi.io/privateApp`\n- `ingress.otoroshi.io/forceHttps`\n- `ingress.otoroshi.io/maintenanceMode`\n- `ingress.otoroshi.io/buildMode`\n- `ingress.otoroshi.io/strictlyPrivate`\n- `ingress.otoroshi.io/sendOtoroshiHeadersBack`\n- `ingress.otoroshi.io/readOnly`\n- `ingress.otoroshi.io/xForwardedHeaders`\n- `ingress.otoroshi.io/overrideHost`\n- `ingress.otoroshi.io/allowHttp10`\n- `ingress.otoroshi.io/logAnalyticsOnServer`\n- `ingress.otoroshi.io/useAkkaHttpClient`\n- `ingress.otoroshi.io/useNewWSClient`\n- `ingress.otoroshi.io/tcpUdpTunneling`\n- `ingress.otoroshi.io/detectApiKeySooner`\n- `ingress.otoroshi.io/letsEncrypt`\n- `ingress.otoroshi.io/publicPatterns`\n- `ingress.otoroshi.io/privatePatterns`\n- `ingress.otoroshi.io/additionalHeaders`\n- `ingress.otoroshi.io/additionalHeadersOut`\n- `ingress.otoroshi.io/missingOnlyHeadersIn`\n- `ingress.otoroshi.io/missingOnlyHeadersOut`\n- `ingress.otoroshi.io/removeHeadersIn`\n- `ingress.otoroshi.io/removeHeadersOut`\n- `ingress.otoroshi.io/headersVerification`\n- 
`ingress.otoroshi.io/matchingHeaders`\n- `ingress.otoroshi.io/ipFiltering.whitelist`\n- `ingress.otoroshi.io/ipFiltering.blacklist`\n- `ingress.otoroshi.io/api.exposeApi`\n- `ingress.otoroshi.io/api.openApiDescriptorUrl`\n- `ingress.otoroshi.io/healthCheck.enabled`\n- `ingress.otoroshi.io/healthCheck.url`\n- `ingress.otoroshi.io/jwtVerifier.ids`\n- `ingress.otoroshi.io/jwtVerifier.enabled`\n- `ingress.otoroshi.io/jwtVerifier.excludedPatterns`\n- `ingress.otoroshi.io/authConfigRef`\n- `ingress.otoroshi.io/redirection.enabled`\n- `ingress.otoroshi.io/redirection.code`\n- `ingress.otoroshi.io/redirection.to`\n- `ingress.otoroshi.io/clientValidatorRef`\n- `ingress.otoroshi.io/transformerRefs`\n- `ingress.otoroshi.io/transformerConfig`\n- `ingress.otoroshi.io/accessValidator.enabled`\n- `ingress.otoroshi.io/accessValidator.excludedPatterns`\n- `ingress.otoroshi.io/accessValidator.refs`\n- `ingress.otoroshi.io/accessValidator.config`\n- `ingress.otoroshi.io/preRouting.enabled`\n- `ingress.otoroshi.io/preRouting.excludedPatterns`\n- `ingress.otoroshi.io/preRouting.refs`\n- `ingress.otoroshi.io/preRouting.config`\n- `ingress.otoroshi.io/issueCert`\n- `ingress.otoroshi.io/issueCertCA`\n- `ingress.otoroshi.io/gzip.enabled`\n- `ingress.otoroshi.io/gzip.excludedPatterns`\n- `ingress.otoroshi.io/gzip.whiteList`\n- `ingress.otoroshi.io/gzip.blackList`\n- `ingress.otoroshi.io/gzip.bufferSize`\n- `ingress.otoroshi.io/gzip.chunkedThreshold`\n- `ingress.otoroshi.io/gzip.compressionLevel`\n- `ingress.otoroshi.io/cors.enabled`\n- `ingress.otoroshi.io/cors.allowOrigin`\n- `ingress.otoroshi.io/cors.exposeHeaders`\n- `ingress.otoroshi.io/cors.allowHeaders`\n- `ingress.otoroshi.io/cors.allowMethods`\n- `ingress.otoroshi.io/cors.excludedPatterns`\n- `ingress.otoroshi.io/cors.maxAge`\n- `ingress.otoroshi.io/cors.allowCredentials`\n- `ingress.otoroshi.io/clientConfig.useCircuitBreaker`\n- `ingress.otoroshi.io/clientConfig.retries`\n- `ingress.otoroshi.io/clientConfig.maxErrors`\n- 
`ingress.otoroshi.io/clientConfig.retryInitialDelay`\n- `ingress.otoroshi.io/clientConfig.backoffFactor`\n- `ingress.otoroshi.io/clientConfig.connectionTimeout`\n- `ingress.otoroshi.io/clientConfig.idleTimeout`\n- `ingress.otoroshi.io/clientConfig.callAndStreamTimeout`\n- `ingress.otoroshi.io/clientConfig.callTimeout`\n- `ingress.otoroshi.io/clientConfig.globalTimeout`\n- `ingress.otoroshi.io/clientConfig.sampleInterval`\n- `ingress.otoroshi.io/enforceSecureCommunication`\n- `ingress.otoroshi.io/sendInfoToken`\n- `ingress.otoroshi.io/sendStateChallenge`\n- `ingress.otoroshi.io/secComHeaders.claimRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateResponseName`\n- `ingress.otoroshi.io/secComTtl`\n- `ingress.otoroshi.io/secComVersion`\n- `ingress.otoroshi.io/secComInfoTokenVersion`\n- `ingress.otoroshi.io/secComExcludedPatterns`\n- `ingress.otoroshi.io/secComSettings.size`\n- `ingress.otoroshi.io/secComSettings.secret`\n- `ingress.otoroshi.io/secComSettings.base64`\n- `ingress.otoroshi.io/secComUseSameAlgo`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.size`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64`\n- `ingress.otoroshi.io/secComAlgoInfoToken.size`\n- `ingress.otoroshi.io/secComAlgoInfoToken.secret`\n- `ingress.otoroshi.io/secComAlgoInfoToken.base64`\n- `ingress.otoroshi.io/securityExcludedPatterns`\n\nfor more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n\nwith the previous example, the ingress does not define any apikey, so the route is public. 
If you want to enable apikeys on it, you can deploy the following descriptor\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\n ingress.otoroshi.io/group: http-app-group\n ingress.otoroshi.io/forceHttps: 'true'\n ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'\n ingress.otoroshi.io/overrideHost: 'true'\n ingress.otoroshi.io/allowHttp10: 'false'\n ingress.otoroshi.io/publicPatterns: ''\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nnow you can use an existing apikey in the `http-app-group` to access your app\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1\n```\n\n## Use Otoroshi CRDs for a better/full integration\n\nOtoroshi provides some Custom Resource Definitions for kubernetes in order to manage Otoroshi related entities in kubernetes\n\n- `service-groups`\n- `service-descriptors`\n- `apikeys`\n- `certificates`\n- `global-configs`\n- `jwt-verifiers`\n- `auth-modules`\n- `scripts`\n- `tcp-services`\n- `data-exporters`\n- `admins`\n- `teams`\n- `organizations`\n\nusing CRDs, you will be able to deploy and manage those entities from kubectl or the kubernetes api like\n\n```sh\nsudo kubectl get apikeys --all-namespaces\nsudo kubectl get service-descriptors --all-namespaces\ncurl -X GET \\\n -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \\\n -H 'Accept: application/json' -k \\\n https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1/apikeys | jq\n```\n\nYou can see this as better `Ingress` resources. Just as any `Ingress` resource can define which controller it uses (using the `kubernetes.io/ingress.class` annotation), you can choose another kind of resource instead of `Ingress`. 
With Otoroshi CRDs you can even define resources like `Certificate`, `Apikey`, `AuthModules`, `JwtVerifier`, etc. It will help you to use all the power of Otoroshi while using the deployment model of kubernetes.\n \n@@@ warning\nwhen using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the descriptors deployed will override the ones in the Otoroshi datastore each time it's synced. So be careful if you use the Otoroshi UI or the API, as some changes in configuration may be overridden by the CRDs sync job.\n@@@\n\n### Resources examples\n\ngroup.yaml\n: @@snip [group.yaml](../snippets/crds/group.yaml) \n\napikey.yaml\n: @@snip [apikey.yaml](../snippets/crds/apikey.yaml) \n\nservice-descriptor.yaml\n: @@snip [service.yaml](../snippets/crds/service-descriptor.yaml) \n\ncertificate.yaml\n: @@snip [cert.yaml](../snippets/crds/certificate.yaml) \n\njwt.yaml\n: @@snip [jwt.yaml](../snippets/crds/jwt.yaml) \n\nauth.yaml\n: @@snip [auth.yaml](../snippets/crds/auth.yaml) \n\norganization.yaml\n: @@snip [orga.yaml](../snippets/crds/organization.yaml) \n\nteam.yaml\n: @@snip [team.yaml](../snippets/crds/team.yaml) \n\n\n### Configuration\n\nTo configure it, just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Otoroshi CRDs Controller`. 
Then add the following configuration for the job (with your own tweak of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"crds\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": false, // sync ingresses\n \"crds\": true, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities. 
can be \"default\" to use otoroshi default templates\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nyou can find a more complete example of the configuration object [here](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/plugins/jobs/kubernetes/config.scala#L134-L163)\n\n### Note about `apikeys` and `certificates` resources\n\nApikeys and Certificates are a little bit different from the other resources. They have the ability to be defined without their secret part, but with an export setting so otoroshi will generate the secret parts and export the apikey or the certificate to a kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)\n\nIn those resources you can define \n\n```yaml\nexportSecret: true \nsecretName: the-secret-name\n```\n\nand omit `clientSecret` for apikey or `publicKey`, `privateKey` for certificates. 
For certificates, you will have to provide a `csr` in order to generate them\n\n```yaml\ncsr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n - httpapps.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n```\n\nwhen apikeys are exported as kubernetes secrets, they will have the type `otoroshi.io/apikey-secret` with values `clientId` and `clientSecret`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: apikey-1\ntype: otoroshi.io/apikey-secret\ndata:\n clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n```\n\nwhen certificates are exported as kubernetes secrets, they will have the type `kubernetes.io/tls` with the standard values `tls.crt` (the full cert chain) and `tls.key` (the private key). 
For more convenience, they will also have a `cert.crt` value containing the actual certificate without the ca chain and `ca-chain.crt` containing the ca chain without the certificate.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: certificate-1\ntype: kubernetes.io/tls\ndata:\n tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA== \n```\n\n## Full CRD example\n\nthen you can deploy the previous example with a better configuration level, using mtls, apikeys, etc.\n\nLet's say the app looks like:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\n// here we read the apikey to access http-app-2 from files mounted from secrets\nconst clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')\nconst clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')\n\nconst backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')\nconst backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')\nconst backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')\n\nconst clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')\nconst clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')\nconst clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')\n\nfunction callApi2() {\n return new Promise((success, failure) => {\n const options = { \n // using the implicit internal name (*.global.otoroshi.mesh) 
of the other service descriptor passing through otoroshi\n hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh', \n port: 443, \n path: '/', \n method: 'GET',\n headers: {\n 'Accept': 'application/json',\n 'Otoroshi-Client-Id': clientId,\n 'Otoroshi-Client-Secret': clientSecret,\n },\n cert: clientCert,\n key: clientKey,\n ca: clientCa\n }; \n let data = '';\n const req = https.request(options, (res) => { \n res.on('data', (d) => { \n data = data + d.toString('utf8');\n }); \n res.on('end', () => { \n success({ body: JSON.parse(data), res });\n }); \n res.on('error', (e) => { \n failure(e);\n }); \n }); \n req.end();\n })\n}\n\nconst options = { \n key: backendKey, \n cert: backendCert, \n ca: backendCa, \n // we want mtls behavior\n requestCert: true, \n rejectUnauthorized: true\n}; \nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {'Content-Type': 'application/json'});\n callApi2().then(resp => {\n res.end(JSON.stringify({ message: `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body })); \n });\n}).listen(443);\n```\n\nthen, the descriptors will be:\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: foo/http-app\n imagePullPolicy: IfNotPresent\n name: http-app\n ports:\n - containerPort: 443\n name: \"https\"\n volumeMounts:\n - name: apikey-volume\n # here you will be able to read apikey from files \n # - /var/run/secrets/kubernetes.io/apikeys/clientId\n # - /var/run/secrets/kubernetes.io/apikeys/clientSecret\n mountPath: \"/var/run/secrets/kubernetes.io/apikeys\"\n readOnly: true\n - name: backend-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/backend/tls.key\n 
mountPath: \"/var/run/secrets/kubernetes.io/certs/backend\"\n readOnly: true\n - name: client-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/client/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/client/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/client\"\n readOnly: true\n volumes:\n - name: apikey-volume\n secret:\n # here we reference the secret name from apikey http-app-2-apikey-1\n secretName: secret-2\n - name: backend-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-backend\n secretName: http-app-certificate-backend-secret\n - name: client-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-client\n secretName: http-app-certificate-client-secret\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8443\n targetPort: https\n name: https\n selector:\n run: http-app-deployment\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ServiceGroup\nmetadata:\n name: http-app-group\n annotations:\n otoroshi.io/id: http-app-group\nspec:\n description: a group to hold services about the http-app\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-apikey-1\n# this apikey can be used to access the app\nspec:\n # a secret named secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-1\n authorizedEntities: \n - group_http-app-group\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-1\n# this apikey can be used to access another app in a different group\nspec:\n # a secret named secret-2 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-2\n authorizedEntities: \n - group_http-app-2-group\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-frontend\nspec:\n 
description: certificate for the http-app on otoroshi frontend\n autoRenew: true\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-backend\nspec:\n description: certificate for the http-app deployed on pods\n autoRenew: true\n # a secret name http-app-certificate-backend-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-backend-secret\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - http-app-service \n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-back, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-client\nspec:\n description: certificate for the http-app\n autoRenew: true\n secretName: http-app-certificate-client-secret\n csr:\n issuer: CN=Otoroshi Root\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-client, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ServiceDescriptor\nmetadata:\n name: http-app-service-descriptor\nspec:\n description: the service descriptor for the http app\n groups: \n - http-app-group\n forceHttps: true\n hosts:\n - httpapp.foo.bar # hostname exposed outside of the kubernetes cluster\n # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster \n matchingRoot: /\n targets:\n - url: https://http-app-service:8443\n # alternatively, you can use serviceName and servicePort to use pods ip addresses\n # serviceName: 
http-app-service\n # servicePort: https\n mtlsConfig:\n # use mtls to contact the backend\n mtls: true\n certs: \n # reference the DN for the client cert\n - UID=httpapp-client, O=OtoroshiApps\n trustedCerts: \n # reference the DN for the CA cert \n - CN=Otoroshi Root\n sendOtoroshiHeadersBack: true\n xForwardedHeaders: true\n overrideHost: true\n allowHttp10: false\n publicPatterns:\n - /health\n additionalHeaders:\n x-foo: bar\n# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc ... for more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n```\n\nnow with this descriptor deployed, you can access your app with a command like \n\n```sh\nCLIENT_ID=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientId}\" | base64 --decode`\nCLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientSecret}\" | base64 --decode`\ncurl -X GET https://httpapp.foo.bar/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n## Expose Otoroshi to outside world\n\nIf you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: `LoadBalancer`). You'll need to declare in your DNS settings any name that can be routed by otoroshi going to the loadbalancer endpoint (CNAME or ip addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they will provide external loadbalancers out of the box. However, if you use a bare metal kubernetes cluster, it doesn't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. 
You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples in the installation section.\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@ \n\n## Access a service from inside the k8s cluster\n\n### Using host header overriding\n\nYou can access any service referenced in otoroshi, through otoroshi from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain like:\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using dedicated services\n\nit's also possible to define services that target the otoroshi deployment (or the otoroshi workers deployment) and use them as valid hosts in otoroshi services \n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-awesome-service\nspec:\n selector:\n # run: otoroshi-deployment\n # or in cluster mode\n run: otoroshi-worker-deployment\n ports:\n - port: 8080\n name: \"http\"\n targetPort: \"http\"\n - port: 8443\n name: \"https\"\n targetPort: \"https\"\n```\n\nand access it like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-namespace.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using coredns integration\n\nYou can also enable the coredns integration to simplify the flow. 
You can use the following keys in the plugin config:\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"coreDnsIntegration\": true, // enable coredns integration for intra cluster calls\n \"kubeSystemNamespace\": \"kube-system\", // the namespace where coredns is deployed\n \"corednsConfigMap\": \"coredns\", // the name of the coredns configmap\n \"otoroshiServiceName\": \"otoroshi-service\", // the name of the otoroshi service, could be otoroshi-workers-service\n \"otoroshiNamespace\": \"otoroshi\", // the namespace where otoroshi is deployed\n \"clusterDomain\": \"cluster.local\", // the domain for cluster services\n ...\n }\n}\n```\n\notoroshi will patch the coredns config at startup, then you can call your services like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh` or `${service-name}.${service-namespace}.svc.otoroshi.local`\n\n### Using coredns with manual patching\n\nyou can also patch the coredns config manually\n\n```sh\nkubectl edit configmaps coredns -n kube-system # or your own custom config map\n```\n\nand change the `Corefile` data to add the following snippet at the end of the file\n\n```yaml\notoroshi.mesh:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n rewrite name regex (.*)\\.otoroshi\\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n```\n\nyou can also define a simpler rewrite if it suits your use case better\n\n```\nrewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n```\n\ndo not hesitate to change `otoroshi-worker-service.otoroshi` according to your own setup. If otoroshi is not in cluster mode, change it to `otoroshi-service.otoroshi`. If otoroshi is not deployed in the `otoroshi` namespace, change it to `otoroshi-service.the-namespace`, etc.\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh`\n\nthen you can call your service like \n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\n\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using old kube-dns system\n\nif you're stuck with an old version of kubernetes that uses kube-dns, which is not supported by otoroshi, you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system. 
\n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the kube-dns integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"kubeDnsOperatorIntegration\": true, // enable kube-dns integration for intra cluster calls\n \"kubeDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"kubeDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"kubeDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\n### Using Openshift DNS operator\n\nThe Openshift DNS operator does not allow much customization of the DNS configuration, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the Openshift DNS operator integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"openshiftDnsOperatorIntegration\": true, // enable openshift dns operator integration for intra cluster calls\n \"openshiftDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"openshiftDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"openshiftDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\ndon't forget to update the otoroshi `ClusterRole`\n\n```yaml\n- apiGroups:\n - operator.openshift.io\n resources:\n - dnses\n verbs:\n - get\n - list\n - watch\n - update\n```\n\n## CRD validation in kubectl\n\nIn order to get CRD validation before manifest deployments right inside kubectl, you can deploy a validation webhook that will do the trick. 
Also check that you have the `otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator` request sink enabled.\n\nvalidation-webhook.yaml\n: @@snip [validation-webhook.yaml](../snippets/kubernetes/kustomize/base/validation-webhook.yaml)\n\n## Easier integration with otoroshi-sidecar\n\nOtoroshi can help you to easily use existing services without modifications while getting all the perks of otoroshi like apikeys, mTLS, exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pod of your deployment that will handle calls coming from otoroshi and going to otoroshi. To enable otoroshi-sidecar, you need to deploy the following admission webhook. Also check that you have the `otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector` request sink enabled.\n\nsidecar-webhook.yaml\n: @@snip [sidecar-webhook.yaml](../snippets/kubernetes/kustomize/base/sidecar-webhook.yaml)\n\nthen it's quite easy to add the sidecar, just add the following label to your pod `otoroshi.io/sidecar: inject` and some annotations to tell otoroshi what certificates and apikeys to use.\n\n```yaml\nannotations:\n otoroshi.io/sidecar-apikey: backend-apikey\n otoroshi.io/sidecar-backend-cert: backend-cert\n otoroshi.io/sidecar-client-cert: oto-client-cert\n otoroshi.io/token-secret: secret\n otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps\n```\n\nnow you can just call your otoroshi handled apis from inside your pod like `curl http://my-service.namespace.otoroshi.mesh/api` without passing any apikey or client certificate and the sidecar will handle everything for you. 
Same thing for calls from otoroshi to your pod: everything will be done in mTLS fashion with apikeys and the otoroshi exchange protocol\n\nhere is a full example\n\nsidecar.yaml\n: @@snip [sidecar.yaml](../snippets/kubernetes/kustomize/base/sidecar.yaml)\n\n@@@ warning\nPlease avoid using port `80` for your pod as it's the default port to access otoroshi from your pod and calls will be redirected to the sidecar via an iptables rule\n@@@\n\n## Daikoku integration\n\nIt is possible to easily integrate daikoku generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and setup the integration mode to `Automatic`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen when a user subscribes to an apikey, they will only see an integration token\n\n@@@ div { .centered-img }\n\n@@@\n\nthen just create an ApiKey manifest with this token and you're good to go \n\n```yaml\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-3\nspec:\n exportSecret: true \n secretName: secret-3\n daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy\n```\n\n" }, { "name": "scaling.md", @@ -186,7 +186,7 @@ "id": "/getting-started.md", "url": "/getting-started.html", "title": "Getting Started", - "content": "# Getting Started\n\n- [Protect your service with Otoroshi ApiKey](#protect-your-service-with-otoroshi-apikey)\n- [Secure your web app in 2 calls with an authentication](#secure-your-web-app-in-2-calls-with-an-authentication)\n\nDownload the latest jar of Otoroshi\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nOnce downloaded, run Otoroshi.\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nYes, that command is all it took to start it up.\n\n## Protect your service with Otoroshi ApiKey\n\nCreate a new route, exposed on `http://myapi.oto.tools:8080`, which will forward all requests to the mirror 
`https://mirror.otoroshi.io`.\n\n```sh\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"myapi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"enabled\": true,\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"update_quotas\": true\n }\n }\n ]\n}\nEOF\n```\n\nNow that we have created our route, let’s see if our request reaches our upstream service. \nYou should receive an error from Otoroshi about a missing api key in our request.\n\n```sh\ncurl 'http://myapi.oto.tools:8080'\n```\n\nIt looks like we don’t have access to it. Create your first api key with a quota of 10 calls by day and month.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/apikeys' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"clientId\": \"my-first-apikey-id\",\n \"clientSecret\": \"my-first-apikey-secret\",\n \"clientName\": \"my-first-apikey\",\n \"description\": \"my-first-apikey-description\",\n \"authorizedGroup\": \"default\",\n \"enabled\": true,\n \"throttlingQuota\": 10,\n \"dailyQuota\": 10,\n \"monthlyQuota\": 10\n}\nEOF\n```\n\nCall your api with the generated apikey.\n\n```sh\ncurl 'http://myapi.oto.tools:8080' -u my-first-apikey-id:my-first-apikey-secret\n```\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.otoroshi.io\",\n \"accept\": \"*/*\",\n \"user-agent\": \"curl/7.64.1\",\n \"authorization\": \"Basic bXktZmlyc3QtYXBpLWtleS1pZDpteS1maXJzdC1hcGkta2V5LXNlY3JldA==\",\n \"otoroshi-request-id\": \"1465298507974836306\",\n \"otoroshi-proxied-host\": \"myapi.oto.tools:8080\",\n 
\"otoroshi-request-timestamp\": \"2021-11-29T13:36:02.888+01:00\"\n },\n \"body\": \"\"\n}\n```\n\nCheck your remaining quotas\n\n```sh\ncurl 'http://myapi.oto.tools:8080' -u my-first-apikey-id:my-first-apikey-secret --include\n```\n\nThis should output the following Otoroshi headers\n\n```json\nOtoroshi-Daily-Calls-Remaining: 6\nOtoroshi-Monthly-Calls-Remaining: 6\n```\n\nKeep calling the api and confirm that Otoroshi is sending you an apikey exceeding quota error\n\n\n```json\n{ \n \"Otoroshi-Error\": \"You performed too much requests\"\n}\n```\n\nWell done, you have secured your first api with the apikeys system with limited call quotas.\n\n## Secure your web app in 2 calls with an authentication\n\nCreate an in-memory authentication module, with one registered user, to protect your service.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/auths' \\\n-H \"Otoroshi-Client-Id: admin-api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"type\":\"basic\",\n \"id\":\"auth_mod_in_memory_auth\",\n \"name\":\"in-memory-auth\",\n \"desc\":\"in-memory-auth\",\n \"users\":[\n {\n \"name\":\"User Otoroshi\",\n \"password\":\"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\n \"email\":\"user@foo.bar\",\n \"metadata\":{\n \"username\":\"roger\"\n },\n \"tags\":[\"foo\"],\n \"webauthn\":null,\n \"rights\":[{\n \"tenant\":\"*:r\",\n \"teams\":[\"*:r\"]\n }]\n }\n ],\n \"sessionCookieValues\":{\n \"httpOnly\":true,\n \"secure\":false\n }\n}\nEOF\n```\n\nThen create a service secured by the previous authentication module, which proxies `google.fr` on `webapp.oto.tools`.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"webapp\",\n \"frontend\": {\n \"domains\": [\"webapp.oto.tools\"]\n },\n \"backend\": 
{\n \"targets\": [\n {\n \"hostname\": \"google.fr\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"enabled\": true,\n \"config\": {\n \"pass_with_apikey\": false,\n \"auth_module\": null,\n \"module\": \"auth_mod_in_memory_auth\"\n }\n }\n ]\n}\nEOF\n```\n\nNavigate to http://webapp.oto.tools:8080, login with `user@foo.bar/password` and check that you're redirected to the `google` page.\n\nWell done! You completed the discovery tutorial." + "content": "# Getting Started\n\n- [Protect your service with Otoroshi ApiKey](#protect-your-service-with-otoroshi-apikey)\n- [Secure your web app in 2 calls with an authentication](#secure-your-web-app-in-2-calls-with-an-authentication)\n\nDownload the latest jar of Otoroshi\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nOnce downloaded, run Otoroshi.\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nYes, that command is all it took to start it up.\n\n## Protect your service with Otoroshi ApiKey\n\nCreate a new route, exposed on `http://myapi.oto.tools:8080`, which will forward all requests to the mirror `https://mirror.otoroshi.io`.\n\n```sh\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"myapi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"enabled\": true,\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"update_quotas\": true\n }\n }\n ]\n}\nEOF\n```\n\nNow that we have created our route, let’s see if our request reaches our upstream service. 
\nYou should receive an error from Otoroshi about a missing api key in our request.\n\n```sh\ncurl 'http://myapi.oto.tools:8080'\n```\n\nIt looks like we don’t have access to it. Create your first api key with a quota of 10 calls per day and per month.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/apikeys' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"clientId\": \"my-first-apikey-id\",\n \"clientSecret\": \"my-first-apikey-secret\",\n \"clientName\": \"my-first-apikey\",\n \"description\": \"my-first-apikey-description\",\n \"authorizedGroup\": \"default\",\n \"enabled\": true,\n \"throttlingQuota\": 10,\n \"dailyQuota\": 10,\n \"monthlyQuota\": 10\n}\nEOF\n```\n\nCall your api with the generated apikey.\n\n```sh\ncurl 'http://myapi.oto.tools:8080' -u my-first-apikey-id:my-first-apikey-secret\n```\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.otoroshi.io\",\n \"accept\": \"*/*\",\n \"user-agent\": \"curl/7.64.1\",\n \"authorization\": \"Basic bXktZmlyc3QtYXBpLWtleS1pZDpteS1maXJzdC1hcGkta2V5LXNlY3JldA==\",\n \"otoroshi-request-id\": \"1465298507974836306\",\n \"otoroshi-proxied-host\": \"myapi.oto.tools:8080\",\n \"otoroshi-request-timestamp\": \"2021-11-29T13:36:02.888+01:00\"\n },\n \"body\": \"\"\n}\n```\n\nCheck your remaining quotas\n\n```sh\ncurl 'http://myapi.oto.tools:8080' -u my-first-apikey-id:my-first-apikey-secret --include\n```\n\nThis should output the following Otoroshi headers\n\n```\nOtoroshi-Daily-Calls-Remaining: 6\nOtoroshi-Monthly-Calls-Remaining: 6\n```\n\nKeep calling the api and confirm that Otoroshi returns an apikey quota-exceeded error\n\n```json\n{\n \"Otoroshi-Error\": \"You performed too much requests\"\n}\n```\n\nWell done, you have secured your first api with the apikey system and limited call quotas.\n\n## Secure your web app in 2 calls with an authentication\n\nCreate an in-memory 
authentication module, with one registered user, to protect your service.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/auths' \\\n-H \"Otoroshi-Client-Id: admin-api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"type\":\"basic\",\n \"id\":\"auth_mod_in_memory_auth\",\n \"name\":\"in-memory-auth\",\n \"desc\":\"in-memory-auth\",\n \"users\":[\n {\n \"name\":\"User Otoroshi\",\n \"password\":\"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\n \"email\":\"user@foo.bar\",\n \"metadata\":{\n \"username\":\"roger\"\n },\n \"tags\":[\"foo\"],\n \"webauthn\":null,\n \"rights\":[{\n \"tenant\":\"*:r\",\n \"teams\":[\"*:r\"]\n }]\n }\n ],\n \"sessionCookieValues\":{\n \"httpOnly\":true,\n \"secure\":false\n }\n}\nEOF\n```\n\nThen create a service, secured by the previous authentication module, which proxies `google.fr` on `webapp.oto.tools`.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"webapp\",\n \"frontend\": {\n \"domains\": [\"webapp.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"google.fr\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"enabled\": true,\n \"config\": {\n \"pass_with_apikey\": false,\n \"auth_module\": null,\n \"module\": \"auth_mod_in_memory_auth\"\n }\n }\n ]\n}\nEOF\n```\n\nNavigate to http://webapp.oto.tools:8080, login with `user@foo.bar/password` and check that you're redirected to the `google` page.\n\nWell done! You completed the discovery tutorial." 
}, { "name": "communicate-with-kafka.md", @@ -242,7 +242,7 @@ "id": "/how-to-s/import-export-otoroshi-datastore.md", "url": "/how-to-s/import-export-otoroshi-datastore.html", "title": "Import and export Otoroshi datastore", - "content": "# Import and export Otoroshi datastore\n\n### Start Otoroshi with an initial datastore\n\nLet's start by downloading the latest Otoroshi\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nBy default, Otoroshi starts with the domain `oto.tools`, which targets `127.0.0.1`. Now you are almost ready to run Otoroshi for the first time; we want to run it with initial data.\n\nTo do that, you need to add the **otoroshi.importFrom** setting to the Otoroshi configuration (or the `$APP_IMPORT_FROM` env variable). It can be a file path or a URL. The content of the initial datastore can look something like the following.\n\n```json\n{\n \"label\": \"Otoroshi initial datastore\",\n \"admins\": [],\n \"simpleAdmins\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"username\": \"admin@otoroshi.io\",\n \"password\": \"$2a$10$iQRkqjKTW.5XH8ugQrnMDeUstx4KqmIeQ58dHHdW2Dv1FkyyAs4C.\",\n \"label\": \"Otoroshi Admin\",\n \"createdAt\": 1634651307724,\n \"type\": \"SIMPLE\",\n \"metadata\": {},\n \"tags\": [],\n \"rights\": [\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\"\n ]\n }\n ]\n }\n ],\n \"serviceGroups\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"admin-api-group\",\n \"name\": \"Otoroshi Admin Api group\",\n \"description\": \"No description\",\n \"tags\": [],\n \"metadata\": {}\n },\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"default\",\n \"name\": \"default-group\",\n \"description\": \"The default service group\",\n \"tags\": [],\n \"metadata\": {}\n }\n ],\n \"apiKeys\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n 
]\n },\n \"clientId\": \"admin-api-apikey-id\",\n \"clientSecret\": \"admin-api-apikey-secret\",\n \"clientName\": \"Otoroshi Backoffice ApiKey\",\n \"description\": \"The apikey use by the Otoroshi UI\",\n \"authorizedGroup\": \"admin-api-group\",\n \"authorizedEntities\": [\n \"group_admin-api-group\"\n ],\n \"enabled\": true,\n \"readOnly\": false,\n \"allowClientIdOnly\": false,\n \"throttlingQuota\": 10000,\n \"dailyQuota\": 10000000,\n \"monthlyQuota\": 10000000,\n \"constrainedServicesOnly\": false,\n \"restrictions\": {\n \"enabled\": false,\n \"allowLast\": true,\n \"allowed\": [],\n \"forbidden\": [],\n \"notFound\": []\n },\n \"rotation\": {\n \"enabled\": false,\n \"rotationEvery\": 744,\n \"gracePeriod\": 168,\n \"nextSecret\": null\n },\n \"validUntil\": null,\n \"tags\": [],\n \"metadata\": {}\n }\n ],\n \"serviceDescriptors\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"admin-api-service\",\n \"groupId\": \"admin-api-group\",\n \"groups\": [\n \"admin-api-group\"\n ],\n \"name\": \"otoroshi-admin-api\",\n \"description\": \"\",\n \"env\": \"prod\",\n \"domain\": \"oto.tools\",\n \"subdomain\": \"otoroshi-api\",\n \"targetsLoadBalancing\": {\n \"type\": \"RoundRobin\"\n },\n \"targets\": [\n {\n \"host\": \"127.0.0.1:8080\",\n \"scheme\": \"http\",\n \"weight\": 1,\n \"mtlsConfig\": {\n \"certs\": [],\n \"trustedCerts\": [],\n \"mtls\": false,\n \"loose\": false,\n \"trustAll\": false\n },\n \"tags\": [],\n \"metadata\": {},\n \"protocol\": \"HTTP/1.1\",\n \"predicate\": {\n \"type\": \"AlwaysMatch\"\n },\n \"ipAddress\": null\n }\n ],\n \"root\": \"/\",\n \"matchingRoot\": null,\n \"stripPath\": true,\n \"localHost\": \"127.0.0.1:8080\",\n \"localScheme\": \"http\",\n \"redirectToLocal\": false,\n \"enabled\": true,\n \"userFacing\": false,\n \"privateApp\": false,\n \"forceHttps\": false,\n \"logAnalyticsOnServer\": false,\n \"useAkkaHttpClient\": true,\n \"useNewWSClient\": false,\n 
\"tcpUdpTunneling\": false,\n \"detectApiKeySooner\": false,\n \"maintenanceMode\": false,\n \"buildMode\": false,\n \"strictlyPrivate\": false,\n \"enforceSecureCommunication\": true,\n \"sendInfoToken\": true,\n \"sendStateChallenge\": true,\n \"sendOtoroshiHeadersBack\": true,\n \"readOnly\": false,\n \"xForwardedHeaders\": false,\n \"overrideHost\": true,\n \"allowHttp10\": true,\n \"letsEncrypt\": false,\n \"secComHeaders\": {\n \"claimRequestName\": null,\n \"stateRequestName\": null,\n \"stateResponseName\": null\n },\n \"secComTtl\": 30000,\n \"secComVersion\": 1,\n \"secComInfoTokenVersion\": \"Legacy\",\n \"secComExcludedPatterns\": [],\n \"securityExcludedPatterns\": [],\n \"publicPatterns\": [\n \"/health\",\n \"/metrics\"\n ],\n \"privatePatterns\": [],\n \"additionalHeaders\": {\n \"Host\": \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"additionalHeadersOut\": {},\n \"missingOnlyHeadersIn\": {},\n \"missingOnlyHeadersOut\": {},\n \"removeHeadersIn\": [],\n \"removeHeadersOut\": [],\n \"headersVerification\": {},\n \"matchingHeaders\": {},\n \"ipFiltering\": {\n \"whitelist\": [],\n \"blacklist\": []\n },\n \"api\": {\n \"exposeApi\": false\n },\n \"healthCheck\": {\n \"enabled\": false,\n \"url\": \"/\"\n },\n \"clientConfig\": {\n \"useCircuitBreaker\": true,\n \"retries\": 1,\n \"maxErrors\": 20,\n \"retryInitialDelay\": 50,\n \"backoffFactor\": 2,\n \"callTimeout\": 30000,\n \"callAndStreamTimeout\": 120000,\n \"connectionTimeout\": 10000,\n \"idleTimeout\": 60000,\n \"globalTimeout\": 30000,\n \"sampleInterval\": 2000,\n \"proxy\": {},\n \"customTimeouts\": [],\n \"cacheConnectionSettings\": {\n \"enabled\": false,\n \"queueSize\": 2048\n }\n },\n \"canary\": {\n \"enabled\": false,\n \"traffic\": 0.2,\n \"targets\": [],\n \"root\": \"/\"\n },\n \"gzip\": {\n \"enabled\": false,\n \"excludedPatterns\": [],\n \"whiteList\": [\n \"text/*\",\n \"application/javascript\",\n \"application/json\"\n ],\n \"blackList\": [],\n \"bufferSize\": 8192,\n 
\"chunkedThreshold\": 102400,\n \"compressionLevel\": 5\n },\n \"metadata\": {},\n \"tags\": [],\n \"chaosConfig\": {\n \"enabled\": false,\n \"largeRequestFaultConfig\": null,\n \"largeResponseFaultConfig\": null,\n \"latencyInjectionFaultConfig\": null,\n \"badResponsesFaultConfig\": null\n },\n \"jwtVerifier\": {\n \"type\": \"ref\",\n \"ids\": [],\n \"id\": null,\n \"enabled\": false,\n \"excludedPatterns\": []\n },\n \"secComSettings\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComUseSameAlgo\": true,\n \"secComAlgoChallengeOtoToBack\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComAlgoChallengeBackToOto\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComAlgoInfoToken\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"cors\": {\n \"enabled\": false,\n \"allowOrigin\": \"*\",\n \"exposeHeaders\": [],\n \"allowHeaders\": [],\n \"allowMethods\": [],\n \"excludedPatterns\": [],\n \"maxAge\": null,\n \"allowCredentials\": true\n },\n \"redirection\": {\n \"enabled\": false,\n \"code\": 303,\n \"to\": \"https://www.otoroshi.io\"\n },\n \"authConfigRef\": null,\n \"clientValidatorRef\": null,\n \"transformerRef\": null,\n \"transformerRefs\": [],\n \"transformerConfig\": {},\n \"apiKeyConstraints\": {\n \"basicAuth\": {\n \"enabled\": true,\n \"headerName\": null,\n \"queryName\": null\n },\n \"customHeadersAuth\": {\n \"enabled\": true,\n \"clientIdHeaderName\": null,\n \"clientSecretHeaderName\": null\n },\n \"clientIdAuth\": {\n \"enabled\": true,\n \"headerName\": null,\n \"queryName\": null\n },\n \"jwtAuth\": {\n \"enabled\": true,\n \"secretSigned\": true,\n \"keyPairSigned\": true,\n \"includeRequestAttributes\": false,\n \"maxJwtLifespanSecs\": null,\n \"headerName\": null,\n \"queryName\": null,\n 
\"cookieName\": null\n },\n \"routing\": {\n \"noneTagIn\": [],\n \"oneTagIn\": [],\n \"allTagsIn\": [],\n \"noneMetaIn\": {},\n \"oneMetaIn\": {},\n \"allMetaIn\": {},\n \"noneMetaKeysIn\": [],\n \"oneMetaKeyIn\": [],\n \"allMetaKeysIn\": []\n }\n },\n \"restrictions\": {\n \"enabled\": false,\n \"allowLast\": true,\n \"allowed\": [],\n \"forbidden\": [],\n \"notFound\": []\n },\n \"accessValidator\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excludedPatterns\": []\n },\n \"preRouting\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excludedPatterns\": []\n },\n \"plugins\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excluded\": []\n },\n \"hosts\": [\n \"otoroshi-api.oto.tools\"\n ],\n \"paths\": [],\n \"handleLegacyDomain\": true,\n \"issueCert\": false,\n \"issueCertCA\": null\n }\n ],\n \"errorTemplates\": [],\n \"jwtVerifiers\": [],\n \"authConfigs\": [],\n \"certificates\": [],\n \"clientValidators\": [],\n \"scripts\": [],\n \"tcpServices\": [],\n \"dataExporters\": [],\n \"tenants\": [\n {\n \"id\": \"default\",\n \"name\": \"Default organization\",\n \"description\": \"The default organization\",\n \"metadata\": {},\n \"tags\": []\n }\n ],\n \"teams\": [\n {\n \"id\": \"default\",\n \"tenant\": \"default\",\n \"name\": \"Default Team\",\n \"description\": \"The default Team of the default organization\",\n \"metadata\": {},\n \"tags\": []\n }\n ]\n}\n```\n\nRun Otoroshi with the previous file as parameter.\n\n```sh\njava \\\n -Dotoroshi.adminPassword=password \\\n -Dotoroshi.importFrom=./initial-state.json \\\n -jar otoroshi.jar \n```\n\nThis should show\n\n```sh\n...\n[info] otoroshi-env - Importing from: ./initial-state.json\n[info] otoroshi-env - Successful import !\n...\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n...\n```\n\n> Warning: when using Otoroshi with a datastore other than file or in-memory, Otoroshi will not reload the initialization script. 
If you want it to be reloaded, you have to manually clean your store.\n\n### Export the current datastore via the danger zone\n\nWhen Otoroshi is running, you can back up the global configuration store from the UI. Navigate to your instance (in our case @link:[http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone) { open=new }) and scroll to the bottom of the page. \n\nClick on the `Full export` button to download the full global configuration.\n\n### Import a datastore from file via the danger zone\n\nWhen Otoroshi is running, you can recover a global configuration from the UI. Navigate to your instance (in our case @link:[http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone) { open=new }) and scroll to the bottom of the page. \n\nClick on the `Recover from a full export file` button to apply all configurations from a file.\n\n### Export the current datastore with the Admin API\n\nOtoroshi exposes its own Admin API to manage Otoroshi resources. To call this api, you need to have an api key with the rights on `Otoroshi Admin Api group`. This group includes the `Otoroshi-admin-api` service that you can find on the services page. \n\nBy default, and with our initial configuration, Otoroshi has already created an api key named `Otoroshi Backoffice ApiKey`. 
You can verify the rights of an api key on its page by checking the `Authorized On` field (you should find the `Otoroshi Admin Api group` inside).\n\nThe default api key id and secret are `admin-api-apikey-id` and `admin-api-apikey-secret`.\n\nRun the next command with these values.\n\n```sh\ncurl \\\n -H 'Content-Type: application/json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json'\n```\n\nWhen calling `/api/otoroshi.json`, the response should be the current datastore, including the service descriptors, the api keys, all other resources like certificates and authentication modules, and the global config (representing the content of the danger zone).\n\n### Import the current datastore with the Admin API\n\nIn the same way as in the previous section, you can overwrite the current datastore with a POST request. The route is the same: `/api/otoroshi.json`.\n\n```sh\ncurl \\\n -X POST \\\n -H 'Content-Type: application/json' \\\n -d '{\n \"label\" : \"Otoroshi export\",\n \"dateRaw\" : 1634714811217,\n \"date\" : \"2021-10-20 09:26:51\",\n \"stats\" : {\n \"calls\" : 4,\n \"dataIn\" : 0,\n \"dataOut\" : 97991\n },\n \"config\" : {\n \"tags\" : [ ],\n \"letsEncryptSettings\" : {\n \"enabled\" : false,\n \"server\" : \"acme://letsencrypt.org/staging\",\n \"emails\" : [ ],\n \"contacts\" : [ ],\n \"publicKey\" : \"\",\n \"privateKey\" : \"\"\n },\n \"lines\" : [ \"prod\" ],\n \"maintenanceMode\" : false,\n \"enableEmbeddedMetrics\" : true,\n \"streamEntityOnly\" : true,\n \"autoLinkToDefaultGroup\" : true,\n \"limitConcurrentRequests\" : false,\n \"maxConcurrentRequests\" : 1000,\n \"maxHttp10ResponseSize\" : 4194304,\n \"useCircuitBreakers\" : true,\n \"apiReadOnly\" : false,\n \"u2fLoginOnly\" : false,\n \"trustXForwarded\" : true,\n \"ipFiltering\" : {\n \"whitelist\" : [ ],\n \"blacklist\" : [ ]\n },\n \"throttlingQuota\" : 10000000,\n \"perIpThrottlingQuota\" : 10000000,\n \"analyticsWebhooks\" : [ ],\n 
\"alertsWebhooks\" : [ ],\n \"elasticWritesConfigs\" : [ ],\n \"elasticReadsConfig\" : null,\n \"alertsEmails\" : [ ],\n \"logAnalyticsOnServer\" : false,\n \"useAkkaHttpClient\" : false,\n \"endlessIpAddresses\" : [ ],\n \"statsdConfig\" : null,\n \"kafkaConfig\" : {\n \"servers\" : [ ],\n \"keyPass\" : null,\n \"keystore\" : null,\n \"truststore\" : null,\n \"topic\" : \"otoroshi-events\",\n \"mtlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n },\n \"backOfficeAuthRef\" : null,\n \"mailerSettings\" : {\n \"type\" : \"none\"\n },\n \"cleverSettings\" : null,\n \"maxWebhookSize\" : 100,\n \"middleFingers\" : false,\n \"maxLogsSize\" : 10000,\n \"otoroshiId\" : \"83539cbca-76ee-4abc-ad31-a4794e873848\",\n \"snowMonkeyConfig\" : {\n \"enabled\" : false,\n \"outageStrategy\" : \"OneServicePerGroup\",\n \"includeUserFacingDescriptors\" : false,\n \"dryRun\" : false,\n \"timesPerDay\" : 1,\n \"startTime\" : \"09:00:00.000\",\n \"stopTime\" : \"23:59:59.000\",\n \"outageDurationFrom\" : 600000,\n \"outageDurationTo\" : 3600000,\n \"targetGroups\" : [ ],\n \"chaosConfig\" : {\n \"enabled\" : true,\n \"largeRequestFaultConfig\" : null,\n \"largeResponseFaultConfig\" : null,\n \"latencyInjectionFaultConfig\" : {\n \"ratio\" : 0.2,\n \"from\" : 500,\n \"to\" : 5000\n },\n \"badResponsesFaultConfig\" : {\n \"ratio\" : 0.2,\n \"responses\" : [ {\n \"status\" : 502,\n \"body\" : \"{\\\"error\\\":\\\"Nihonzaru everywhere ...\\\"}\",\n \"headers\" : {\n \"Content-Type\" : \"application/json\"\n }\n } ]\n }\n }\n },\n \"scripts\" : {\n \"enabled\" : false,\n \"transformersRefs\" : [ ],\n \"transformersConfig\" : { },\n \"validatorRefs\" : [ ],\n \"validatorConfig\" : { },\n \"preRouteRefs\" : [ ],\n \"preRouteConfig\" : { },\n \"sinkRefs\" : [ ],\n \"sinkConfig\" : { },\n \"jobRefs\" : [ ],\n \"jobConfig\" : { }\n },\n \"geolocationSettings\" : {\n \"type\" : \"none\"\n },\n \"userAgentSettings\" : 
{\n \"enabled\" : false\n },\n \"autoCert\" : {\n \"enabled\" : false,\n \"replyNicely\" : false,\n \"caRef\" : null,\n \"allowed\" : [ ],\n \"notAllowed\" : [ ]\n },\n \"tlsSettings\" : {\n \"defaultDomain\" : null,\n \"randomIfNotFound\" : false,\n \"includeJdkCaServer\" : true,\n \"includeJdkCaClient\" : true,\n \"trustedCAsServer\" : [ ]\n },\n \"plugins\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excluded\" : [ ]\n },\n \"metadata\" : { }\n },\n \"admins\" : [ ],\n \"simpleAdmins\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"username\" : \"admin@otoroshi.io\",\n \"password\" : \"$2a$10$iQRkqjKTW.5XH8ugQrnMDeUstx4KqmIeQ58dHHdW2Dv1FkyyAs4C.\",\n \"label\" : \"Otoroshi Admin\",\n \"createdAt\" : 1634651307724,\n \"type\" : \"SIMPLE\",\n \"metadata\" : { },\n \"tags\" : [ ],\n \"rights\" : [ {\n \"tenant\" : \"*:rw\",\n \"teams\" : [ \"*:rw\" ]\n } ]\n } ],\n \"serviceGroups\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"admin-api-group\",\n \"name\" : \"Otoroshi Admin Api group\",\n \"description\" : \"No description\",\n \"tags\" : [ ],\n \"metadata\" : { }\n }, {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"default\",\n \"name\" : \"default-group\",\n \"description\" : \"The default service group\",\n \"tags\" : [ ],\n \"metadata\" : { }\n } ],\n \"apiKeys\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"clientId\" : \"admin-api-apikey-id\",\n \"clientSecret\" : \"admin-api-apikey-secret\",\n \"clientName\" : \"Otoroshi Backoffice ApiKey\",\n \"description\" : \"The apikey use by the Otoroshi UI\",\n \"authorizedGroup\" : \"admin-api-group\",\n \"authorizedEntities\" : [ \"group_admin-api-group\" ],\n \"enabled\" : true,\n \"readOnly\" : false,\n \"allowClientIdOnly\" : false,\n \"throttlingQuota\" : 10000,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" 
: 10000000,\n \"constrainedServicesOnly\" : false,\n \"restrictions\" : {\n \"enabled\" : false,\n \"allowLast\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"notFound\" : [ ]\n },\n \"rotation\" : {\n \"enabled\" : false,\n \"rotationEvery\" : 744,\n \"gracePeriod\" : 168,\n \"nextSecret\" : null\n },\n \"validUntil\" : null,\n \"tags\" : [ ],\n \"metadata\" : { }\n } ],\n \"serviceDescriptors\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"admin-api-service\",\n \"groupId\" : \"admin-api-group\",\n \"groups\" : [ \"admin-api-group\" ],\n \"name\" : \"otoroshi-admin-api\",\n \"description\" : \"\",\n \"env\" : \"prod\",\n \"domain\" : \"oto.tools\",\n \"subdomain\" : \"otoroshi-api\",\n \"targetsLoadBalancing\" : {\n \"type\" : \"RoundRobin\"\n },\n \"targets\" : [ {\n \"host\" : \"127.0.0.1:8080\",\n \"scheme\" : \"http\",\n \"weight\" : 1,\n \"mtlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n },\n \"tags\" : [ ],\n \"metadata\" : { },\n \"protocol\" : \"HTTP/1.1\",\n \"predicate\" : {\n \"type\" : \"AlwaysMatch\"\n },\n \"ipAddress\" : null\n } ],\n \"root\" : \"/\",\n \"matchingRoot\" : null,\n \"stripPath\" : true,\n \"localHost\" : \"127.0.0.1:8080\",\n \"localScheme\" : \"http\",\n \"redirectToLocal\" : false,\n \"enabled\" : true,\n \"userFacing\" : false,\n \"privateApp\" : false,\n \"forceHttps\" : false,\n \"logAnalyticsOnServer\" : false,\n \"useAkkaHttpClient\" : true,\n \"useNewWSClient\" : false,\n \"tcpUdpTunneling\" : false,\n \"detectApiKeySooner\" : false,\n \"maintenanceMode\" : false,\n \"buildMode\" : false,\n \"strictlyPrivate\" : false,\n \"enforceSecureCommunication\" : true,\n \"sendInfoToken\" : true,\n \"sendStateChallenge\" : true,\n \"sendOtoroshiHeadersBack\" : true,\n \"readOnly\" : false,\n \"xForwardedHeaders\" : false,\n \"overrideHost\" : true,\n \"allowHttp10\" : true,\n \"letsEncrypt\" : 
false,\n \"secComHeaders\" : {\n \"claimRequestName\" : null,\n \"stateRequestName\" : null,\n \"stateResponseName\" : null\n },\n \"secComTtl\" : 30000,\n \"secComVersion\" : 1,\n \"secComInfoTokenVersion\" : \"Legacy\",\n \"secComExcludedPatterns\" : [ ],\n \"securityExcludedPatterns\" : [ ],\n \"publicPatterns\" : [ \"/health\", \"/metrics\" ],\n \"privatePatterns\" : [ ],\n \"additionalHeaders\" : {\n \"Host\" : \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"additionalHeadersOut\" : { },\n \"missingOnlyHeadersIn\" : { },\n \"missingOnlyHeadersOut\" : { },\n \"removeHeadersIn\" : [ ],\n \"removeHeadersOut\" : [ ],\n \"headersVerification\" : { },\n \"matchingHeaders\" : { },\n \"ipFiltering\" : {\n \"whitelist\" : [ ],\n \"blacklist\" : [ ]\n },\n \"api\" : {\n \"exposeApi\" : false\n },\n \"healthCheck\" : {\n \"enabled\" : false,\n \"url\" : \"/\"\n },\n \"clientConfig\" : {\n \"useCircuitBreaker\" : true,\n \"retries\" : 1,\n \"maxErrors\" : 20,\n \"retryInitialDelay\" : 50,\n \"backoffFactor\" : 2,\n \"callTimeout\" : 30000,\n \"callAndStreamTimeout\" : 120000,\n \"connectionTimeout\" : 10000,\n \"idleTimeout\" : 60000,\n \"globalTimeout\" : 30000,\n \"sampleInterval\" : 2000,\n \"proxy\" : { },\n \"customTimeouts\" : [ ],\n \"cacheConnectionSettings\" : {\n \"enabled\" : false,\n \"queueSize\" : 2048\n }\n },\n \"canary\" : {\n \"enabled\" : false,\n \"traffic\" : 0.2,\n \"targets\" : [ ],\n \"root\" : \"/\"\n },\n \"gzip\" : {\n \"enabled\" : false,\n \"excludedPatterns\" : [ ],\n \"whiteList\" : [ \"text/*\", \"application/javascript\", \"application/json\" ],\n \"blackList\" : [ ],\n \"bufferSize\" : 8192,\n \"chunkedThreshold\" : 102400,\n \"compressionLevel\" : 5\n },\n \"metadata\" : { },\n \"tags\" : [ ],\n \"chaosConfig\" : {\n \"enabled\" : false,\n \"largeRequestFaultConfig\" : null,\n \"largeResponseFaultConfig\" : null,\n \"latencyInjectionFaultConfig\" : null,\n \"badResponsesFaultConfig\" : null\n },\n \"jwtVerifier\" : {\n \"type\" : 
\"ref\",\n \"ids\" : [ ],\n \"id\" : null,\n \"enabled\" : false,\n \"excludedPatterns\" : [ ]\n },\n \"secComSettings\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComUseSameAlgo\" : true,\n \"secComAlgoChallengeOtoToBack\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComAlgoChallengeBackToOto\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComAlgoInfoToken\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"cors\" : {\n \"enabled\" : false,\n \"allowOrigin\" : \"*\",\n \"exposeHeaders\" : [ ],\n \"allowHeaders\" : [ ],\n \"allowMethods\" : [ ],\n \"excludedPatterns\" : [ ],\n \"maxAge\" : null,\n \"allowCredentials\" : true\n },\n \"redirection\" : {\n \"enabled\" : false,\n \"code\" : 303,\n \"to\" : \"https://www.otoroshi.io\"\n },\n \"authConfigRef\" : null,\n \"clientValidatorRef\" : null,\n \"transformerRef\" : null,\n \"transformerRefs\" : [ ],\n \"transformerConfig\" : { },\n \"apiKeyConstraints\" : {\n \"basicAuth\" : {\n \"enabled\" : true,\n \"headerName\" : null,\n \"queryName\" : null\n },\n \"customHeadersAuth\" : {\n \"enabled\" : true,\n \"clientIdHeaderName\" : null,\n \"clientSecretHeaderName\" : null\n },\n \"clientIdAuth\" : {\n \"enabled\" : true,\n \"headerName\" : null,\n \"queryName\" : null\n },\n \"jwtAuth\" : {\n \"enabled\" : true,\n \"secretSigned\" : true,\n \"keyPairSigned\" : true,\n \"includeRequestAttributes\" : false,\n \"maxJwtLifespanSecs\" : null,\n \"headerName\" : null,\n \"queryName\" : null,\n \"cookieName\" : null\n },\n \"routing\" : {\n \"noneTagIn\" : [ ],\n \"oneTagIn\" : [ ],\n \"allTagsIn\" : [ ],\n \"noneMetaIn\" : { },\n \"oneMetaIn\" : { },\n \"allMetaIn\" : { },\n \"noneMetaKeysIn\" : [ ],\n \"oneMetaKeyIn\" : [ ],\n \"allMetaKeysIn\" : [ ]\n 
}\n },\n \"restrictions\" : {\n \"enabled\" : false,\n \"allowLast\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"notFound\" : [ ]\n },\n \"accessValidator\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excludedPatterns\" : [ ]\n },\n \"preRouting\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excludedPatterns\" : [ ]\n },\n \"plugins\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excluded\" : [ ]\n },\n \"hosts\" : [ \"otoroshi-api.oto.tools\" ],\n \"paths\" : [ ],\n \"handleLegacyDomain\" : true,\n \"issueCert\" : false,\n \"issueCertCA\" : null\n } ],\n \"errorTemplates\" : [ ],\n \"jwtVerifiers\" : [ ],\n \"authConfigs\" : [ ],\n \"certificates\" : [],\n \"clientValidators\" : [ ],\n \"scripts\" : [ ],\n \"tcpServices\" : [ ],\n \"dataExporters\" : [ ],\n \"tenants\" : [ {\n \"id\" : \"default\",\n \"name\" : \"Default organization\",\n \"description\" : \"The default organization\",\n \"metadata\" : { },\n \"tags\" : [ ]\n } ],\n \"teams\" : [ {\n \"id\" : \"default\",\n \"tenant\" : \"default\",\n \"name\" : \"Default Team\",\n \"description\" : \"The default Team of the default organization\",\n \"metadata\" : { },\n \"tags\" : [ ]\n } ]\n }' \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \n```\n\nThis should output:\n\n```json\n{ \"done\":true }\n```\n\n> Note: be very careful with this POST command. If you send wrong JSON, you risk breaking your instance.\n\nThe second way is to send the same configuration from a file. You can pass two kinds of files: a `json` file or an `ndjson` file. 
Both files are available as export methods on the danger zone.\n\n```sh\n# the curl is run from a folder containing the initial-state.json file \ncurl -X POST \\\n -H \"Content-Type: application/json\" \\\n -d @./initial-state.json \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret\n```\n\nThis should output:\n\n```json\n{ \"done\":true }\n```\n\n> Note: To send an ndjson file, you have to set the Content-Type header to `application/x-ndjson`" + "content": "# Import and export Otoroshi datastore\n\n### Start Otoroshi with an initial datastore\n\nLet's start by downloading the latest Otoroshi\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nBy default, Otoroshi starts with the domain `oto.tools`, which targets `127.0.0.1`. Now you are almost ready to run Otoroshi for the first time; we want to run it with initial data.\n\nTo do that, you need to add the **otoroshi.importFrom** setting to the Otoroshi configuration (or the `$APP_IMPORT_FROM` env variable). It can be a file path or a URL. 
The content of the initial datastore can look something like the following.\n\n```json\n{\n \"label\": \"Otoroshi initial datastore\",\n \"admins\": [],\n \"simpleAdmins\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"username\": \"admin@otoroshi.io\",\n \"password\": \"$2a$10$iQRkqjKTW.5XH8ugQrnMDeUstx4KqmIeQ58dHHdW2Dv1FkyyAs4C.\",\n \"label\": \"Otoroshi Admin\",\n \"createdAt\": 1634651307724,\n \"type\": \"SIMPLE\",\n \"metadata\": {},\n \"tags\": [],\n \"rights\": [\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\"\n ]\n }\n ]\n }\n ],\n \"serviceGroups\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"admin-api-group\",\n \"name\": \"Otoroshi Admin Api group\",\n \"description\": \"No description\",\n \"tags\": [],\n \"metadata\": {}\n },\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"default\",\n \"name\": \"default-group\",\n \"description\": \"The default service group\",\n \"tags\": [],\n \"metadata\": {}\n }\n ],\n \"apiKeys\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"clientId\": \"admin-api-apikey-id\",\n \"clientSecret\": \"admin-api-apikey-secret\",\n \"clientName\": \"Otoroshi Backoffice ApiKey\",\n \"description\": \"The apikey use by the Otoroshi UI\",\n \"authorizedGroup\": \"admin-api-group\",\n \"authorizedEntities\": [\n \"group_admin-api-group\"\n ],\n \"enabled\": true,\n \"readOnly\": false,\n \"allowClientIdOnly\": false,\n \"throttlingQuota\": 10000,\n \"dailyQuota\": 10000000,\n \"monthlyQuota\": 10000000,\n \"constrainedServicesOnly\": false,\n \"restrictions\": {\n \"enabled\": false,\n \"allowLast\": true,\n \"allowed\": [],\n \"forbidden\": [],\n \"notFound\": []\n },\n \"rotation\": {\n \"enabled\": false,\n \"rotationEvery\": 744,\n \"gracePeriod\": 168,\n \"nextSecret\": null\n },\n \"validUntil\": null,\n \"tags\": [],\n \"metadata\": {}\n 
}\n ],\n \"serviceDescriptors\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"admin-api-service\",\n \"groupId\": \"admin-api-group\",\n \"groups\": [\n \"admin-api-group\"\n ],\n \"name\": \"otoroshi-admin-api\",\n \"description\": \"\",\n \"env\": \"prod\",\n \"domain\": \"oto.tools\",\n \"subdomain\": \"otoroshi-api\",\n \"targetsLoadBalancing\": {\n \"type\": \"RoundRobin\"\n },\n \"targets\": [\n {\n \"host\": \"127.0.0.1:8080\",\n \"scheme\": \"http\",\n \"weight\": 1,\n \"mtlsConfig\": {\n \"certs\": [],\n \"trustedCerts\": [],\n \"mtls\": false,\n \"loose\": false,\n \"trustAll\": false\n },\n \"tags\": [],\n \"metadata\": {},\n \"protocol\": \"HTTP/1.1\",\n \"predicate\": {\n \"type\": \"AlwaysMatch\"\n },\n \"ipAddress\": null\n }\n ],\n \"root\": \"/\",\n \"matchingRoot\": null,\n \"stripPath\": true,\n \"localHost\": \"127.0.0.1:8080\",\n \"localScheme\": \"http\",\n \"redirectToLocal\": false,\n \"enabled\": true,\n \"userFacing\": false,\n \"privateApp\": false,\n \"forceHttps\": false,\n \"logAnalyticsOnServer\": false,\n \"useAkkaHttpClient\": true,\n \"useNewWSClient\": false,\n \"tcpUdpTunneling\": false,\n \"detectApiKeySooner\": false,\n \"maintenanceMode\": false,\n \"buildMode\": false,\n \"strictlyPrivate\": false,\n \"enforceSecureCommunication\": true,\n \"sendInfoToken\": true,\n \"sendStateChallenge\": true,\n \"sendOtoroshiHeadersBack\": true,\n \"readOnly\": false,\n \"xForwardedHeaders\": false,\n \"overrideHost\": true,\n \"allowHttp10\": true,\n \"letsEncrypt\": false,\n \"secComHeaders\": {\n \"claimRequestName\": null,\n \"stateRequestName\": null,\n \"stateResponseName\": null\n },\n \"secComTtl\": 30000,\n \"secComVersion\": 1,\n \"secComInfoTokenVersion\": \"Legacy\",\n \"secComExcludedPatterns\": [],\n \"securityExcludedPatterns\": [],\n \"publicPatterns\": [\n \"/health\",\n \"/metrics\"\n ],\n \"privatePatterns\": [],\n \"additionalHeaders\": {\n \"Host\": 
\"otoroshi-admin-internal-api.oto.tools\"\n },\n \"additionalHeadersOut\": {},\n \"missingOnlyHeadersIn\": {},\n \"missingOnlyHeadersOut\": {},\n \"removeHeadersIn\": [],\n \"removeHeadersOut\": [],\n \"headersVerification\": {},\n \"matchingHeaders\": {},\n \"ipFiltering\": {\n \"whitelist\": [],\n \"blacklist\": []\n },\n \"api\": {\n \"exposeApi\": false\n },\n \"healthCheck\": {\n \"enabled\": false,\n \"url\": \"/\"\n },\n \"clientConfig\": {\n \"useCircuitBreaker\": true,\n \"retries\": 1,\n \"maxErrors\": 20,\n \"retryInitialDelay\": 50,\n \"backoffFactor\": 2,\n \"callTimeout\": 30000,\n \"callAndStreamTimeout\": 120000,\n \"connectionTimeout\": 10000,\n \"idleTimeout\": 60000,\n \"globalTimeout\": 30000,\n \"sampleInterval\": 2000,\n \"proxy\": {},\n \"customTimeouts\": [],\n \"cacheConnectionSettings\": {\n \"enabled\": false,\n \"queueSize\": 2048\n }\n },\n \"canary\": {\n \"enabled\": false,\n \"traffic\": 0.2,\n \"targets\": [],\n \"root\": \"/\"\n },\n \"gzip\": {\n \"enabled\": false,\n \"excludedPatterns\": [],\n \"whiteList\": [\n \"text/*\",\n \"application/javascript\",\n \"application/json\"\n ],\n \"blackList\": [],\n \"bufferSize\": 8192,\n \"chunkedThreshold\": 102400,\n \"compressionLevel\": 5\n },\n \"metadata\": {},\n \"tags\": [],\n \"chaosConfig\": {\n \"enabled\": false,\n \"largeRequestFaultConfig\": null,\n \"largeResponseFaultConfig\": null,\n \"latencyInjectionFaultConfig\": null,\n \"badResponsesFaultConfig\": null\n },\n \"jwtVerifier\": {\n \"type\": \"ref\",\n \"ids\": [],\n \"id\": null,\n \"enabled\": false,\n \"excludedPatterns\": []\n },\n \"secComSettings\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComUseSameAlgo\": true,\n \"secComAlgoChallengeOtoToBack\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComAlgoChallengeBackToOto\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": 
\"secret\",\n \"base64\": false\n },\n \"secComAlgoInfoToken\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"cors\": {\n \"enabled\": false,\n \"allowOrigin\": \"*\",\n \"exposeHeaders\": [],\n \"allowHeaders\": [],\n \"allowMethods\": [],\n \"excludedPatterns\": [],\n \"maxAge\": null,\n \"allowCredentials\": true\n },\n \"redirection\": {\n \"enabled\": false,\n \"code\": 303,\n \"to\": \"https://www.otoroshi.io\"\n },\n \"authConfigRef\": null,\n \"clientValidatorRef\": null,\n \"transformerRef\": null,\n \"transformerRefs\": [],\n \"transformerConfig\": {},\n \"apiKeyConstraints\": {\n \"basicAuth\": {\n \"enabled\": true,\n \"headerName\": null,\n \"queryName\": null\n },\n \"customHeadersAuth\": {\n \"enabled\": true,\n \"clientIdHeaderName\": null,\n \"clientSecretHeaderName\": null\n },\n \"clientIdAuth\": {\n \"enabled\": true,\n \"headerName\": null,\n \"queryName\": null\n },\n \"jwtAuth\": {\n \"enabled\": true,\n \"secretSigned\": true,\n \"keyPairSigned\": true,\n \"includeRequestAttributes\": false,\n \"maxJwtLifespanSecs\": null,\n \"headerName\": null,\n \"queryName\": null,\n \"cookieName\": null\n },\n \"routing\": {\n \"noneTagIn\": [],\n \"oneTagIn\": [],\n \"allTagsIn\": [],\n \"noneMetaIn\": {},\n \"oneMetaIn\": {},\n \"allMetaIn\": {},\n \"noneMetaKeysIn\": [],\n \"oneMetaKeyIn\": [],\n \"allMetaKeysIn\": []\n }\n },\n \"restrictions\": {\n \"enabled\": false,\n \"allowLast\": true,\n \"allowed\": [],\n \"forbidden\": [],\n \"notFound\": []\n },\n \"accessValidator\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excludedPatterns\": []\n },\n \"preRouting\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excludedPatterns\": []\n },\n \"plugins\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excluded\": []\n },\n \"hosts\": [\n \"otoroshi-api.oto.tools\"\n ],\n \"paths\": [],\n \"handleLegacyDomain\": true,\n \"issueCert\": false,\n 
\"issueCertCA\": null\n }\n ],\n \"errorTemplates\": [],\n \"jwtVerifiers\": [],\n \"authConfigs\": [],\n \"certificates\": [],\n \"clientValidators\": [],\n \"scripts\": [],\n \"tcpServices\": [],\n \"dataExporters\": [],\n \"tenants\": [\n {\n \"id\": \"default\",\n \"name\": \"Default organization\",\n \"description\": \"The default organization\",\n \"metadata\": {},\n \"tags\": []\n }\n ],\n \"teams\": [\n {\n \"id\": \"default\",\n \"tenant\": \"default\",\n \"name\": \"Default Team\",\n \"description\": \"The default Team of the default organization\",\n \"metadata\": {},\n \"tags\": []\n }\n ]\n}\n```\n\nRun Otoroshi with the previous file as a parameter.\n\n```sh\njava \\\n -Dotoroshi.adminPassword=password \\\n -Dotoroshi.importFrom=./initial-state.json \\\n -jar otoroshi.jar \n```\n\nThis should show\n\n```sh\n...\n[info] otoroshi-env - Importing from: ./initial-state.json\n[info] otoroshi-env - Successful import !\n...\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n...\n```\n\n> Warning: when you use Otoroshi with a datastore other than file or in-memory, Otoroshi will not reload the initialization script. If you expect it to be reloaded, you have to manually clean your store first.\n\n### Export the current datastore via the danger zone\n\nWhen Otoroshi is running, you can back up the global configuration store from the UI. Navigate to your instance (in our case @link:[http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone) { open=new }) and scroll to the bottom of the page. \n\nClick on the `Full export` button to download the full global configuration.\n\n### Import a datastore from file via the danger zone\n\nWhen Otoroshi is running, you can restore a global configuration from the UI. Navigate to your instance (in our case @link:[http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone) { open=new }) and scroll to the bottom of the page. 
\n\nClick on the `Recover from a full export file` button to apply all configurations from a file.\n\n### Export the current datastore with the Admin API\n\nOtoroshi exposes its own Admin API to manage Otoroshi resources. To call this api, you need an api key with rights on the `Otoroshi Admin Api group`. This group includes the `Otoroshi-admin-api` service that you can find on the services page. \n\nBy default, and with our initial configuration, Otoroshi has already created an api key named `Otoroshi Backoffice ApiKey`. You can verify the rights of an api key on its page by checking the `Authorized On` field (you should find the `Otoroshi Admin Api group` inside).\n\nThe default api key id and secret are `admin-api-apikey-id` and `admin-api-apikey-secret`.\n\nRun the following command with these values.\n\n```sh\ncurl \\\n -H 'Content-Type: application/json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json'\n```\n\nWhen calling `/api/otoroshi.json`, the response should be the current datastore, including the service descriptors, the api keys, all other resources like certificates and authentication modules, and the global config (representing the danger zone form).\n\n### Import the current datastore with the Admin API\n\nIn the same way as the previous section, you can overwrite the current datastore with a POST request. 
The route is the same : `/api/otoroshi.json`.\n\n```sh\ncurl \\\n -X POST \\\n -H 'Content-Type: application/json' \\\n -d '{\n \"label\" : \"Otoroshi export\",\n \"dateRaw\" : 1634714811217,\n \"date\" : \"2021-10-20 09:26:51\",\n \"stats\" : {\n \"calls\" : 4,\n \"dataIn\" : 0,\n \"dataOut\" : 97991\n },\n \"config\" : {\n \"tags\" : [ ],\n \"letsEncryptSettings\" : {\n \"enabled\" : false,\n \"server\" : \"acme://letsencrypt.org/staging\",\n \"emails\" : [ ],\n \"contacts\" : [ ],\n \"publicKey\" : \"\",\n \"privateKey\" : \"\"\n },\n \"lines\" : [ \"prod\" ],\n \"maintenanceMode\" : false,\n \"enableEmbeddedMetrics\" : true,\n \"streamEntityOnly\" : true,\n \"autoLinkToDefaultGroup\" : true,\n \"limitConcurrentRequests\" : false,\n \"maxConcurrentRequests\" : 1000,\n \"maxHttp10ResponseSize\" : 4194304,\n \"useCircuitBreakers\" : true,\n \"apiReadOnly\" : false,\n \"u2fLoginOnly\" : false,\n \"trustXForwarded\" : true,\n \"ipFiltering\" : {\n \"whitelist\" : [ ],\n \"blacklist\" : [ ]\n },\n \"throttlingQuota\" : 10000000,\n \"perIpThrottlingQuota\" : 10000000,\n \"analyticsWebhooks\" : [ ],\n \"alertsWebhooks\" : [ ],\n \"elasticWritesConfigs\" : [ ],\n \"elasticReadsConfig\" : null,\n \"alertsEmails\" : [ ],\n \"logAnalyticsOnServer\" : false,\n \"useAkkaHttpClient\" : false,\n \"endlessIpAddresses\" : [ ],\n \"statsdConfig\" : null,\n \"kafkaConfig\" : {\n \"servers\" : [ ],\n \"keyPass\" : null,\n \"keystore\" : null,\n \"truststore\" : null,\n \"topic\" : \"otoroshi-events\",\n \"mtlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n },\n \"backOfficeAuthRef\" : null,\n \"mailerSettings\" : {\n \"type\" : \"none\"\n },\n \"cleverSettings\" : null,\n \"maxWebhookSize\" : 100,\n \"middleFingers\" : false,\n \"maxLogsSize\" : 10000,\n \"otoroshiId\" : \"83539cbca-76ee-4abc-ad31-a4794e873848\",\n \"snowMonkeyConfig\" : {\n \"enabled\" : false,\n \"outageStrategy\" : 
\"OneServicePerGroup\",\n \"includeUserFacingDescriptors\" : false,\n \"dryRun\" : false,\n \"timesPerDay\" : 1,\n \"startTime\" : \"09:00:00.000\",\n \"stopTime\" : \"23:59:59.000\",\n \"outageDurationFrom\" : 600000,\n \"outageDurationTo\" : 3600000,\n \"targetGroups\" : [ ],\n \"chaosConfig\" : {\n \"enabled\" : true,\n \"largeRequestFaultConfig\" : null,\n \"largeResponseFaultConfig\" : null,\n \"latencyInjectionFaultConfig\" : {\n \"ratio\" : 0.2,\n \"from\" : 500,\n \"to\" : 5000\n },\n \"badResponsesFaultConfig\" : {\n \"ratio\" : 0.2,\n \"responses\" : [ {\n \"status\" : 502,\n \"body\" : \"{\\\"error\\\":\\\"Nihonzaru everywhere ...\\\"}\",\n \"headers\" : {\n \"Content-Type\" : \"application/json\"\n }\n } ]\n }\n }\n },\n \"scripts\" : {\n \"enabled\" : false,\n \"transformersRefs\" : [ ],\n \"transformersConfig\" : { },\n \"validatorRefs\" : [ ],\n \"validatorConfig\" : { },\n \"preRouteRefs\" : [ ],\n \"preRouteConfig\" : { },\n \"sinkRefs\" : [ ],\n \"sinkConfig\" : { },\n \"jobRefs\" : [ ],\n \"jobConfig\" : { }\n },\n \"geolocationSettings\" : {\n \"type\" : \"none\"\n },\n \"userAgentSettings\" : {\n \"enabled\" : false\n },\n \"autoCert\" : {\n \"enabled\" : false,\n \"replyNicely\" : false,\n \"caRef\" : null,\n \"allowed\" : [ ],\n \"notAllowed\" : [ ]\n },\n \"tlsSettings\" : {\n \"defaultDomain\" : null,\n \"randomIfNotFound\" : false,\n \"includeJdkCaServer\" : true,\n \"includeJdkCaClient\" : true,\n \"trustedCAsServer\" : [ ]\n },\n \"plugins\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excluded\" : [ ]\n },\n \"metadata\" : { }\n },\n \"admins\" : [ ],\n \"simpleAdmins\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"username\" : \"admin@otoroshi.io\",\n \"password\" : \"$2a$10$iQRkqjKTW.5XH8ugQrnMDeUstx4KqmIeQ58dHHdW2Dv1FkyyAs4C.\",\n \"label\" : \"Otoroshi Admin\",\n \"createdAt\" : 1634651307724,\n \"type\" : \"SIMPLE\",\n \"metadata\" : { },\n \"tags\" : [ ],\n 
\"rights\" : [ {\n \"tenant\" : \"*:rw\",\n \"teams\" : [ \"*:rw\" ]\n } ]\n } ],\n \"serviceGroups\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"admin-api-group\",\n \"name\" : \"Otoroshi Admin Api group\",\n \"description\" : \"No description\",\n \"tags\" : [ ],\n \"metadata\" : { }\n }, {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"default\",\n \"name\" : \"default-group\",\n \"description\" : \"The default service group\",\n \"tags\" : [ ],\n \"metadata\" : { }\n } ],\n \"apiKeys\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"clientId\" : \"admin-api-apikey-id\",\n \"clientSecret\" : \"admin-api-apikey-secret\",\n \"clientName\" : \"Otoroshi Backoffice ApiKey\",\n \"description\" : \"The apikey use by the Otoroshi UI\",\n \"authorizedGroup\" : \"admin-api-group\",\n \"authorizedEntities\" : [ \"group_admin-api-group\" ],\n \"enabled\" : true,\n \"readOnly\" : false,\n \"allowClientIdOnly\" : false,\n \"throttlingQuota\" : 10000,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"constrainedServicesOnly\" : false,\n \"restrictions\" : {\n \"enabled\" : false,\n \"allowLast\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"notFound\" : [ ]\n },\n \"rotation\" : {\n \"enabled\" : false,\n \"rotationEvery\" : 744,\n \"gracePeriod\" : 168,\n \"nextSecret\" : null\n },\n \"validUntil\" : null,\n \"tags\" : [ ],\n \"metadata\" : { }\n } ],\n \"serviceDescriptors\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"admin-api-service\",\n \"groupId\" : \"admin-api-group\",\n \"groups\" : [ \"admin-api-group\" ],\n \"name\" : \"otoroshi-admin-api\",\n \"description\" : \"\",\n \"env\" : \"prod\",\n \"domain\" : \"oto.tools\",\n \"subdomain\" : \"otoroshi-api\",\n \"targetsLoadBalancing\" : {\n \"type\" : \"RoundRobin\"\n },\n \"targets\" : [ {\n \"host\" : 
\"127.0.0.1:8080\",\n \"scheme\" : \"http\",\n \"weight\" : 1,\n \"mtlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n },\n \"tags\" : [ ],\n \"metadata\" : { },\n \"protocol\" : \"HTTP/1.1\",\n \"predicate\" : {\n \"type\" : \"AlwaysMatch\"\n },\n \"ipAddress\" : null\n } ],\n \"root\" : \"/\",\n \"matchingRoot\" : null,\n \"stripPath\" : true,\n \"localHost\" : \"127.0.0.1:8080\",\n \"localScheme\" : \"http\",\n \"redirectToLocal\" : false,\n \"enabled\" : true,\n \"userFacing\" : false,\n \"privateApp\" : false,\n \"forceHttps\" : false,\n \"logAnalyticsOnServer\" : false,\n \"useAkkaHttpClient\" : true,\n \"useNewWSClient\" : false,\n \"tcpUdpTunneling\" : false,\n \"detectApiKeySooner\" : false,\n \"maintenanceMode\" : false,\n \"buildMode\" : false,\n \"strictlyPrivate\" : false,\n \"enforceSecureCommunication\" : true,\n \"sendInfoToken\" : true,\n \"sendStateChallenge\" : true,\n \"sendOtoroshiHeadersBack\" : true,\n \"readOnly\" : false,\n \"xForwardedHeaders\" : false,\n \"overrideHost\" : true,\n \"allowHttp10\" : true,\n \"letsEncrypt\" : false,\n \"secComHeaders\" : {\n \"claimRequestName\" : null,\n \"stateRequestName\" : null,\n \"stateResponseName\" : null\n },\n \"secComTtl\" : 30000,\n \"secComVersion\" : 1,\n \"secComInfoTokenVersion\" : \"Legacy\",\n \"secComExcludedPatterns\" : [ ],\n \"securityExcludedPatterns\" : [ ],\n \"publicPatterns\" : [ \"/health\", \"/metrics\" ],\n \"privatePatterns\" : [ ],\n \"additionalHeaders\" : {\n \"Host\" : \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"additionalHeadersOut\" : { },\n \"missingOnlyHeadersIn\" : { },\n \"missingOnlyHeadersOut\" : { },\n \"removeHeadersIn\" : [ ],\n \"removeHeadersOut\" : [ ],\n \"headersVerification\" : { },\n \"matchingHeaders\" : { },\n \"ipFiltering\" : {\n \"whitelist\" : [ ],\n \"blacklist\" : [ ]\n },\n \"api\" : {\n \"exposeApi\" : false\n },\n \"healthCheck\" : {\n \"enabled\" : false,\n 
\"url\" : \"/\"\n },\n \"clientConfig\" : {\n \"useCircuitBreaker\" : true,\n \"retries\" : 1,\n \"maxErrors\" : 20,\n \"retryInitialDelay\" : 50,\n \"backoffFactor\" : 2,\n \"callTimeout\" : 30000,\n \"callAndStreamTimeout\" : 120000,\n \"connectionTimeout\" : 10000,\n \"idleTimeout\" : 60000,\n \"globalTimeout\" : 30000,\n \"sampleInterval\" : 2000,\n \"proxy\" : { },\n \"customTimeouts\" : [ ],\n \"cacheConnectionSettings\" : {\n \"enabled\" : false,\n \"queueSize\" : 2048\n }\n },\n \"canary\" : {\n \"enabled\" : false,\n \"traffic\" : 0.2,\n \"targets\" : [ ],\n \"root\" : \"/\"\n },\n \"gzip\" : {\n \"enabled\" : false,\n \"excludedPatterns\" : [ ],\n \"whiteList\" : [ \"text/*\", \"application/javascript\", \"application/json\" ],\n \"blackList\" : [ ],\n \"bufferSize\" : 8192,\n \"chunkedThreshold\" : 102400,\n \"compressionLevel\" : 5\n },\n \"metadata\" : { },\n \"tags\" : [ ],\n \"chaosConfig\" : {\n \"enabled\" : false,\n \"largeRequestFaultConfig\" : null,\n \"largeResponseFaultConfig\" : null,\n \"latencyInjectionFaultConfig\" : null,\n \"badResponsesFaultConfig\" : null\n },\n \"jwtVerifier\" : {\n \"type\" : \"ref\",\n \"ids\" : [ ],\n \"id\" : null,\n \"enabled\" : false,\n \"excludedPatterns\" : [ ]\n },\n \"secComSettings\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComUseSameAlgo\" : true,\n \"secComAlgoChallengeOtoToBack\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComAlgoChallengeBackToOto\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComAlgoInfoToken\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"cors\" : {\n \"enabled\" : false,\n \"allowOrigin\" : \"*\",\n \"exposeHeaders\" : [ ],\n \"allowHeaders\" : [ ],\n \"allowMethods\" : [ ],\n \"excludedPatterns\" : [ ],\n 
\"maxAge\" : null,\n \"allowCredentials\" : true\n },\n \"redirection\" : {\n \"enabled\" : false,\n \"code\" : 303,\n \"to\" : \"https://www.otoroshi.io\"\n },\n \"authConfigRef\" : null,\n \"clientValidatorRef\" : null,\n \"transformerRef\" : null,\n \"transformerRefs\" : [ ],\n \"transformerConfig\" : { },\n \"apiKeyConstraints\" : {\n \"basicAuth\" : {\n \"enabled\" : true,\n \"headerName\" : null,\n \"queryName\" : null\n },\n \"customHeadersAuth\" : {\n \"enabled\" : true,\n \"clientIdHeaderName\" : null,\n \"clientSecretHeaderName\" : null\n },\n \"clientIdAuth\" : {\n \"enabled\" : true,\n \"headerName\" : null,\n \"queryName\" : null\n },\n \"jwtAuth\" : {\n \"enabled\" : true,\n \"secretSigned\" : true,\n \"keyPairSigned\" : true,\n \"includeRequestAttributes\" : false,\n \"maxJwtLifespanSecs\" : null,\n \"headerName\" : null,\n \"queryName\" : null,\n \"cookieName\" : null\n },\n \"routing\" : {\n \"noneTagIn\" : [ ],\n \"oneTagIn\" : [ ],\n \"allTagsIn\" : [ ],\n \"noneMetaIn\" : { },\n \"oneMetaIn\" : { },\n \"allMetaIn\" : { },\n \"noneMetaKeysIn\" : [ ],\n \"oneMetaKeyIn\" : [ ],\n \"allMetaKeysIn\" : [ ]\n }\n },\n \"restrictions\" : {\n \"enabled\" : false,\n \"allowLast\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"notFound\" : [ ]\n },\n \"accessValidator\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excludedPatterns\" : [ ]\n },\n \"preRouting\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excludedPatterns\" : [ ]\n },\n \"plugins\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excluded\" : [ ]\n },\n \"hosts\" : [ \"otoroshi-api.oto.tools\" ],\n \"paths\" : [ ],\n \"handleLegacyDomain\" : true,\n \"issueCert\" : false,\n \"issueCertCA\" : null\n } ],\n \"errorTemplates\" : [ ],\n \"jwtVerifiers\" : [ ],\n \"authConfigs\" : [ ],\n \"certificates\" : [],\n \"clientValidators\" : [ ],\n \"scripts\" : [ ],\n \"tcpServices\" : [ ],\n \"dataExporters\" : [ ],\n 
\"tenants\" : [ {\n \"id\" : \"default\",\n \"name\" : \"Default organization\",\n \"description\" : \"The default organization\",\n \"metadata\" : { },\n \"tags\" : [ ]\n } ],\n \"teams\" : [ {\n \"id\" : \"default\",\n \"tenant\" : \"default\",\n \"name\" : \"Default Team\",\n \"description\" : \"The default Team of the default organization\",\n \"metadata\" : { },\n \"tags\" : [ ]\n } ]\n }' \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \n```\n\nThis should output:\n\n```json\n{ \"done\":true }\n```\n\n> Note: be very careful with this POST command. If you send invalid JSON, you risk breaking your instance.\n\nThe second way is to send the same configuration from a file. You can pass two kinds of files: a `json` file or an `ndjson` file. Both formats are available as export methods on the danger zone.\n\n```sh\n# the curl is run from a folder containing the initial-state.json file \ncurl -X POST \\\n -H \"Content-Type: application/json\" \\\n -d @./initial-state.json \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret\n```\n\nThis should output:\n\n```json\n{ \"done\":true }\n```\n\n> Note: To send an ndjson file, you have to set the Content-Type header to `application/x-ndjson`
Validate the installation by adding a header on the requests\n\nLet's start by downloading the latest jar of Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nThen create an instance of Otoroshi and indicate with the `otoroshi.cluster.mode` environment variable that it will be the leader.\n\n```sh\njava -Dhttp.port=8091 -Dhttps.port=9091 -Dotoroshi.cluster.mode=leader -jar otoroshi.jar\n```\n\nLet's create two Otoroshi workers, exposed on the `:8082/:8092` and `:8083/:8093` ports, and set the leader URL in the `otoroshi.cluster.leader.urls` environment variable.\n\nThe first worker will listen on the `:8082/:8092` ports\n```sh\njava \\\n -Dotoroshi.cluster.worker.name=worker-1 \\\n -Dhttp.port=8092 \\\n -Dhttps.port=9092 \\\n -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0='http://127.0.0.1:8091' -jar otoroshi.jar\n```\n\nThe second worker will listen on the `:8083/:8093` ports\n```sh\njava \\\n -Dotoroshi.cluster.worker.name=worker-2 \\\n -Dhttp.port=8093 \\\n -Dhttps.port=9093 \\\n -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0='http://127.0.0.1:8091' -jar otoroshi.jar\n```\n\nOnce launched, you can navigate to the @link:[cluster view](http://otoroshi.oto.tools:8091/bo/dashboard/cluster) { open=new }. The cluster is now configured; you can see the 3 instances and some health information about each instance.\n\nTo complete our installation, we want to spread the incoming requests across Otoroshi worker instances. \n\nIn this tutorial, we will use `haproxy` as a TCP load balancer. 
If you don't have haproxy installed, you can use docker to run an haproxy instance as explained below.\n\nBut first, we need an haproxy configuration file named `haproxy.cfg` with the following content:\n\n```sh\nfrontend front_nodes_http\n bind *:8080\n mode tcp\n default_backend back_http_nodes\n timeout client 1m\n\nbackend back_http_nodes\n mode tcp\n balance roundrobin\n server node1 host.docker.internal:8092 # (1)\n server node2 host.docker.internal:8093 # (1)\n timeout connect 10s\n timeout server 1m\n```\n\nand run haproxy with this config file\n\nno docker\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #no_docker }\n\ndocker (on linux)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_linux }\n\ndocker (on macos)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_mac }\n\ndocker (on windows)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_windows }\n\nThe last step is to create a route with a rule that adds a specific header identifying which worker handled the request.\n\nCreate this route, exposed on `http://api.oto.tools:xxxx`, which will forward all requests to the mirror `https://mirror.otoroshi.io`.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8091/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"api.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AdditionalHeadersIn\",\n \"config\": {\n \"headers\": {\n \"worker-name\": \"${config.otoroshi.cluster.worker.name}\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nOnce created, call the service two times. 
If all is working, the header received by the backend service will alternate between `worker-1` and `worker-2`.\n\n```sh\ncurl 'http://api.oto.tools:8080'\n## Response headers\n{\n ...\n \"worker-name\": \"worker-2\"\n ...\n}\n```\n\nThis should output `worker-1`, then `worker-2`, etc. Well done, your load balancing is working and your cluster is set up correctly.\n\n\n" + "content": "# Setup an Otoroshi cluster\n\nIn this tutorial, you will create an Otoroshi cluster.\n\n### Summary \n\n1. Deploy an Otoroshi cluster with one leader and 2 workers \n2. Add a load balancer in front of the workers \n3. Validate the installation by adding a header on the requests\n\nLet's start by downloading the latest jar of Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nThen create an instance of Otoroshi and indicate with the `otoroshi.cluster.mode` environment variable that it will be the leader.\n\n```sh\njava -Dhttp.port=8091 -Dhttps.port=9091 -Dotoroshi.cluster.mode=leader -jar otoroshi.jar\n```\n\nLet's create two Otoroshi workers, exposed on the `:8082/:8092` and `:8083/:8093` ports, and set the leader URL in the `otoroshi.cluster.leader.urls` environment variable.\n\nThe first worker will listen on the `:8082/:8092` ports\n```sh\njava \\\n -Dotoroshi.cluster.worker.name=worker-1 \\\n -Dhttp.port=8092 \\\n -Dhttps.port=9092 \\\n -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0='http://127.0.0.1:8091' -jar otoroshi.jar\n```\n\nThe second worker will listen on the `:8083/:8093` ports\n```sh\njava \\\n -Dotoroshi.cluster.worker.name=worker-2 \\\n -Dhttp.port=8093 \\\n -Dhttps.port=9093 \\\n -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0='http://127.0.0.1:8091' -jar otoroshi.jar\n```\n\nOnce launched, you can navigate to the @link:[cluster view](http://otoroshi.oto.tools:8091/bo/dashboard/cluster) { open=new }. 
The cluster is now configured; you can see the 3 instances and some health information about each instance.\n\nTo complete our installation, we want to spread the incoming requests across Otoroshi worker instances. \n\nIn this tutorial, we will use `haproxy` as a TCP load balancer. If you don't have haproxy installed, you can use docker to run an haproxy instance as explained below.\n\nBut first, we need an haproxy configuration file named `haproxy.cfg` with the following content:\n\n```sh\nfrontend front_nodes_http\n bind *:8080\n mode tcp\n default_backend back_http_nodes\n timeout client 1m\n\nbackend back_http_nodes\n mode tcp\n balance roundrobin\n server node1 host.docker.internal:8092 # (1)\n server node2 host.docker.internal:8093 # (1)\n timeout connect 10s\n timeout server 1m\n```\n\nand run haproxy with this config file\n\nno docker\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #no_docker }\n\ndocker (on linux)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_linux }\n\ndocker (on macos)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_mac }\n\ndocker (on windows)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_windows }\n\nThe last step is to create a route with a rule that adds a specific header identifying which worker handled the request.\n\nCreate this route, exposed on `http://api.oto.tools:xxxx`, which will forward all requests to the mirror `https://mirror.otoroshi.io`.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8091/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"api.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AdditionalHeadersIn\",\n \"config\": {\n \"headers\": {\n \"worker-name\": 
\"${config.otoroshi.cluster.worker.name}\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nOnce created, call the service two times. If all is working, the header received by the backend service will alternate between `worker-1` and `worker-2`.\n\n```sh\ncurl 'http://api.oto.tools:8080'\n## Response headers\n{\n ...\n \"worker-name\": \"worker-2\"\n ...\n}\n```\n\nThis should output `worker-1`, then `worker-2`, etc. Well done, your load balancing is working and your cluster is set up correctly.\n\n\n"
\nSet up an Otoroshi instance\n\n
\n\nLet's start by downloading the latest Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nthen you can start Otoroshi:\n\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nNow you can log into Otoroshi at http://otoroshi.oto.tools:8080 with `admin@otoroshi.io/password`\n\nCreate a new route, exposed on `http://myservice.oto.tools:8080`, which will forward all requests to the mirror `https://mirror.otoroshi.io`. Each call to this service will return the body and the headers received by the mirror.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"my-service\",\n \"frontend\": {\n \"domains\": [\"myservice.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n }\n}\nEOF\n```\n\n\n@@@\n" + "content": "\n\nIf you already have an up and running otoroshi instance, you can skip the following instructions\n\n\n@@@div { .instructions }\n\n
\nSet up an Otoroshi instance\n\n
\n\nLet's start by downloading the latest Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nthen you can start Otoroshi:\n\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nNow you can log into Otoroshi at http://otoroshi.oto.tools:8080 with `admin@otoroshi.io/password`\n\nCreate a new route, exposed on `http://myservice.oto.tools:8080`, which will forward all requests to the mirror `https://mirror.otoroshi.io`. Each call to this service will return the body and the headers received by the mirror.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"my-service\",\n \"frontend\": {\n \"domains\": [\"myservice.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n }\n}\nEOF\n```\n\n\n@@@\n" }, { "name": "index.md", "id": "/index.md", "url": "/index.html", "title": "Otoroshi", "content": "# Otoroshi\n\n**Otoroshi** is a layer of lightweight api management on top of a modern http reverse proxy written in 
Scala and developed by the MAIF OSS team that can handle all the calls to and between your microservices without a service locator and lets you change configuration dynamically at runtime.\n\n\n> *The Otoroshi is a large hairy monster that tends to lurk on the top of the torii gate in front of Shinto shrines. It's a hostile creature, but also said to be the guardian of the shrine and is said to leap down from the top of the gate to devour those who approach the shrine for only self-serving purposes.*\n\n@@@ div { .centered-img }\n[![Join the discord](https://img.shields.io/discord/1089571852940218538?color=f9b000&label=Community&logo=Discord&logoColor=f9b000)](https://discord.gg/dmbwZrfpcQ) [ ![Download](https://img.shields.io/github/release/MAIF/otoroshi.svg) ](https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n## Installation\n\nYou can download the latest build of Otoroshi as a @ref:[fat jar](./install/get-otoroshi.md#from-jar-file), as a @ref:[zip package](./install/get-otoroshi.md#from-zip) or as a @ref:[docker image](./install/get-otoroshi.md#from-docker).\n\nYou can install and run Otoroshi with this little bash snippet\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\njava -jar otoroshi.jar\n```\n\nor using docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi:16.5.2\n```\n\nnow open your browser to http://otoroshi.oto.tools:8080/, **log in with the credentials generated in the logs** and explore by yourself. If you want better instructions, just go to the @ref:[Quick Start](./getting-started.md) or directly to the @ref:[installation instructions](./install/get-otoroshi.md)\n\n## Documentation\n\n* @ref:[About Otoroshi](./about.md)\n* @ref:[Architecture](./architecture.md)\n* @ref:[Features](./features.md)\n* @ref:[Getting started](./getting-started.md)\n* @ref:[Install Otoroshi](./install/index.md)\n* @ref:[Main 
entities](./entities/index.md)\n* @ref:[Detailed topics](./topics/index.md)\n* @ref:[How to's](./how-to-s/index.md)\n* @ref:[Plugins](./plugins/index.md)\n* @ref:[Admin REST API](./api.md)\n* @ref:[Deploy to production](./deploy/index.md)\n* @ref:[Developing Otoroshi](./dev.md)\n\n## Discussion\n\nJoin the @link:[Otoroshi server](https://discord.gg/dmbwZrfpcQ) { open=new } Discord\n\n## Sources\n\nThe sources of Otoroshi are available on @link:[Github](https://github.com/MAIF/otoroshi) { open=new }.\n\n## Logo\n\nYou can find the official Otoroshi logo @link:[on GitHub](https://github.com/MAIF/otoroshi/blob/master/resources/otoroshi-logo.png) { open=new }. The Otoroshi logo has been created by François Galioto ([@fgalioto](https://twitter.com/fgalioto))\n\n## Changelog\n\nEvery release, along with the migration instructions, is documented on the @link:[Github Releases](https://github.com/MAIF/otoroshi/releases) { open=new } page. A condensed version of the changelog is available on @link:[github](https://github.com/MAIF/otoroshi/blob/master/CHANGELOG.md) { open=new }\n\n## Patrons\n\nThe work on Otoroshi was funded by MAIF with the help of the community.\n\n## Licence\n\nOtoroshi is Open Source and available under the @link:[Apache 2 License](https://opensource.org/licenses/Apache-2.0) { open=new }\n\n@@@ index\n\n* [About Otoroshi](./about.md)\n* [Architecture](./architecture.md)\n* [Features](./features.md)\n* [Getting started](./getting-started.md)\n* [Install Otoroshi](./install/index.md)\n* [Main entities](./entities/index.md)\n* [Detailed topics](./topics/index.md)\n* [How to's](./how-to-s/index.md)\n* [Plugins](./plugins/index.md)\n* [Admin REST API](./api.md)\n* [Deploy to production](./deploy/index.md)\n* [Developing Otoroshi](./dev.md)\n\n@@@\n\n" + "content": "# Otoroshi\n\n**Otoroshi** is a layer of lightweight api management on top of a modern http reverse proxy written in Scala and developped by the MAIF OSS team that can handle all the calls to and 
between your microservices without a service locator and lets you change configuration dynamically at runtime.\n\n\n> *The Otoroshi is a large hairy monster that tends to lurk on the top of the torii gate in front of Shinto shrines. It's a hostile creature, but also said to be the guardian of the shrine and is said to leap down from the top of the gate to devour those who approach the shrine for only self-serving purposes.*\n\n@@@ div { .centered-img }\n[![Join the discord](https://img.shields.io/discord/1089571852940218538?color=f9b000&label=Community&logo=Discord&logoColor=f9b000)](https://discord.gg/dmbwZrfpcQ) [ ![Download](https://img.shields.io/github/release/MAIF/otoroshi.svg) ](https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n## Installation\n\nYou can download the latest build of Otoroshi as a @ref:[fat jar](./install/get-otoroshi.md#from-jar-file), as a @ref:[zip package](./install/get-otoroshi.md#from-zip) or as a @ref:[docker image](./install/get-otoroshi.md#from-docker).\n\nYou can install and run Otoroshi with this little bash snippet\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\njava -jar otoroshi.jar\n```\n\nor using docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi:16.5.0-dev\n```\n\nnow open your browser to http://otoroshi.oto.tools:8080/, **log in with the credentials generated in the logs** and explore by yourself. If you want better instructions, just go to the @ref:[Quick Start](./getting-started.md) or directly to the @ref:[installation instructions](./install/get-otoroshi.md)\n\n## Documentation\n\n* @ref:[About Otoroshi](./about.md)\n* @ref:[Architecture](./architecture.md)\n* @ref:[Features](./features.md)\n* @ref:[Getting started](./getting-started.md)\n* @ref:[Install Otoroshi](./install/index.md)\n* @ref:[Main entities](./entities/index.md)\n* @ref:[Detailed topics](./topics/index.md)\n* 
@ref:[How to's](./how-to-s/index.md)\n* @ref:[Plugins](./plugins/index.md)\n* @ref:[Admin REST API](./api.md)\n* @ref:[Deploy to production](./deploy/index.md)\n* @ref:[Developing Otoroshi](./dev.md)\n\n## Discussion\n\nJoin the @link:[Otoroshi server](https://discord.gg/dmbwZrfpcQ) { open=new } Discord\n\n## Sources\n\nThe sources of Otoroshi are available on @link:[Github](https://github.com/MAIF/otoroshi) { open=new }.\n\n## Logo\n\nYou can find the official Otoroshi logo @link:[on GitHub](https://github.com/MAIF/otoroshi/blob/master/resources/otoroshi-logo.png) { open=new }. The Otoroshi logo has been created by François Galioto ([@fgalioto](https://twitter.com/fgalioto))\n\n## Changelog\n\nEvery release, along with the migration instructions, is documented on the @link:[Github Releases](https://github.com/MAIF/otoroshi/releases) { open=new } page. A condensed version of the changelog is available on @link:[github](https://github.com/MAIF/otoroshi/blob/master/CHANGELOG.md) { open=new }\n\n## Patrons\n\nThe work on Otoroshi was funded by MAIF with the help of the community.\n\n## Licence\n\nOtoroshi is Open Source and available under the @link:[Apache 2 License](https://opensource.org/licenses/Apache-2.0) { open=new }\n\n@@@ index\n\n* [About Otoroshi](./about.md)\n* [Architecture](./architecture.md)\n* [Features](./features.md)\n* [Getting started](./getting-started.md)\n* [Install Otoroshi](./install/index.md)\n* [Main entities](./entities/index.md)\n* [Detailed topics](./topics/index.md)\n* [How to's](./how-to-s/index.md)\n* [Plugins](./plugins/index.md)\n* [Admin REST API](./api.md)\n* [Deploy to production](./deploy/index.md)\n* [Developing Otoroshi](./dev.md)\n\n@@@\n\n" }, { "name": "get-otoroshi.md", "id": "/install/get-otoroshi.md", "url": "/install/get-otoroshi.html", "title": "Get Otoroshi", - "content": "# Get Otoroshi\n\nAll release can be bound on the releases page of the @link:[repository](https://github.com/MAIF/otoroshi/releases) { open=new 
}.\n\n## From zip\n\n```sh\n# Download the latest version\nwget https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi-16.5.2.zip\nunzip ./otoroshi-16.5.2.zip\ncd otoroshi-16.5.2\n```\n\n## From jar file\n\n```sh\n# Download the latest version\nwget https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar\n```\n\n## From Docker\n\n```sh\n# Download the latest version\ndocker pull maif/otoroshi:16.5.2-jdk11\n```\n\n## From Sources\n\nTo build Otoroshi from sources, just go to the @ref:[dev documentation](../dev.md)\n" + "content": "# Get Otoroshi\n\nAll releases can be found on the releases page of the @link:[repository](https://github.com/MAIF/otoroshi/releases) { open=new }.\n\n## From zip\n\n```sh\n# Download the latest version\nwget https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi-16.5.0-dev.zip\nunzip ./otoroshi-16.5.0-dev.zip\ncd otoroshi-16.5.0-dev\n```\n\n## From jar file\n\n```sh\n# Download the latest version\nwget https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar\n```\n\n## From Docker\n\n```sh\n# Download the latest version\ndocker pull maif/otoroshi:16.5.0-dev-jdk11\n```\n\n## From Sources\n\nTo build Otoroshi from sources, just go to the @ref:[dev documentation](../dev.md)\n" }, { "name": "index.md", @@ -501,7 +501,7 @@ "id": "/topics/expression-language.md", "url": "/topics/expression-language.html", "title": "Expression language", - "content": "# Expression language\n\n- [Documentation and examples](#documentation-and-examples)\n- [Test the expression language](#test-the-expression-language)\n\nThe expression language provides an important mechanism for accessing and manipulating Otoroshi data on different inputs. For example, with this mechanism, you can map a claim of an incoming token directly into a claim of a generated token (using @ref:[JWT verifiers](../entities/jwt-verifiers.md)). 
You can add information from the traversed service descriptor, such as the domain or the name of the service. This information can be useful to the backend service.\n\n## Documentation and examples\n\n@@@div { #expressions }\n \n@@@\n\nIf an input contains a string starting with `${`, Otoroshi will try to evaluate the content. If the content doesn't match a known expression,\nthe 'bad-expr' value will be set.\n\n## Test the expression language\n\nYou can check that you get the same values as shown on the right by creating the following services. \n\n```sh\n# Let's start by downloading the latest Otoroshi.\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n\n# Once downloaded, run Otoroshi.\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n\n# Create an authentication module to protect the following route.\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/auths \\\n-H \"Otoroshi-Client-Id: admin-api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\"type\":\"basic\",\"id\":\"auth_mod_in_memory_auth\",\"name\":\"in-memory-auth\",\"desc\":\"in-memory-auth\",\"users\":[{\"name\":\"User Otoroshi\",\"password\":\"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\"email\":\"user@foo.bar\",\"metadata\":{\"username\":\"roger\"},\"tags\":[\"foo\"],\"webauthn\":null,\"rights\":[{\"tenant\":\"*:r\",\"teams\":[\"*:r\"]}]}],\"sessionCookieValues\":{\"httpOnly\":true,\"secure\":false}}\nEOF\n\n\n# Create a proxy of the mirror.otoroshi.io on http://api.oto.tools:8080\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"id\": \"expression-language-api-service\",\n \"name\": \"expression-language\",\n \"enabled\": true,\n \"frontend\": {\n \"domains\": [\n 
\"api.oto.tools/\"\n ]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\"\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"pass_with_user\": true,\n \"wipe_backend_request\": true,\n \"update_quotas\": true\n },\n \"plugin_index\": {\n \"validate_access\": 1,\n \"transform_request\": 2,\n \"match_route\": 0\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"config\": {\n \"pass_with_apikey\": true,\n \"auth_module\": null,\n \"module\": \"auth_mod_in_memory_auth\"\n },\n \"plugin_index\": {\n \"validate_access\": 1\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AdditionalHeadersIn\",\n \"config\": {\n \"headers\": {\n \"my-expr-header.apikey.unknown-tag\": \"${apikey.tags['0':'no-found-tag']}\",\n \"my-expr-header.request.uri\": \"${req.uri}\",\n \"my-expr-header.ctx.replace-field-all-value\": \"${ctx.foo.replaceAll('o','a')}\",\n \"my-expr-header.env.unknown-field\": \"${env.java_h:not-found-java_h}\",\n \"my-expr-header.service-id\": \"${service.id}\",\n \"my-expr-header.ctx.unknown-fields\": \"${ctx.foob|ctx.foot:not-found}\",\n \"my-expr-header.apikey.metadata\": \"${apikey.metadata.foo}\",\n \"my-expr-header.request.protocol\": \"${req.protocol}\",\n \"my-expr-header.service-domain\": \"${service.domain}\",\n \"my-expr-header.token.unknown-foo-field\": \"${token.foob:not-found-foob}\",\n \"my-expr-header.service-unknown-group\": \"${service.groups['0':'unkown group']}\",\n \"my-expr-header.env.path\": \"${env.PATH}\",\n \"my-expr-header.request.unknown-header\": \"${req.headers.foob:default value}\",\n \"my-expr-header.service-name\": \"${service.name}\",\n \"my-expr-header.token.foo-field\": 
\"${token.foob|token.foo}\",\n \"my-expr-header.request.path\": \"${req.path}\",\n \"my-expr-header.ctx.geolocation\": \"${ctx.geolocation.foo}\",\n \"my-expr-header.token.unknown-fields\": \"${token.foob|token.foob2:not-found}\",\n \"my-expr-header.request.unknown-query\": \"${req.query.foob:default value}\",\n \"my-expr-header.service-subdomain\": \"${service.subdomain}\",\n \"my-expr-header.date\": \"${date}\",\n \"my-expr-header.ctx.replace-field-value\": \"${ctx.foo.replace('o','a')}\",\n \"my-expr-header.apikey.name\": \"${apikey.name}\",\n \"my-expr-header.request.full-url\": \"${req.fullUrl}\",\n \"my-expr-header.ctx.default-value\": \"${ctx.foob:other}\",\n \"my-expr-header.service-tld\": \"${service.tld}\",\n \"my-expr-header.service-metadata\": \"${service.metadata.foo}\",\n \"my-expr-header.ctx.useragent\": \"${ctx.useragent.foo}\",\n \"my-expr-header.service-env\": \"${service.env}\",\n \"my-expr-header.request.host\": \"${req.host}\",\n \"my-expr-header.config.unknown-port-field\": \"${config.http.ports:not-found}\",\n \"my-expr-header.request.domain\": \"${req.domain}\",\n \"my-expr-header.token.replace-header-value\": \"${token.foo.replace('o','a')}\",\n \"my-expr-header.service-group\": \"${service.groups['0']}\",\n \"my-expr-header.ctx.foo\": \"${ctx.foo}\",\n \"my-expr-header.apikey.tag\": \"${apikey.tags['0']}\",\n \"my-expr-header.service-unknown-metadata\": \"${service.metadata.test:default-value}\",\n \"my-expr-header.apikey.id\": \"${apikey.id}\",\n \"my-expr-header.request.header\": \"${req.headers.foo}\",\n \"my-expr-header.request.method\": \"${req.method}\",\n \"my-expr-header.ctx.foo-field\": \"${ctx.foob|ctx.foo}\",\n \"my-expr-header.config.port\": \"${config.http.port}\",\n \"my-expr-header.token.unknown-foo\": \"${token.foo}\",\n \"my-expr-header.date-with-format\": \"${date.format('yyy-MM-dd')}\",\n \"my-expr-header.apikey.unknown-metadata\": \"${apikey.metadata.myfield:default value}\",\n \"my-expr-header.request.query\": 
\"${req.query.foo}\",\n \"my-expr-header.token.replace-header-all-value\": \"${token.foo.replaceAll('o','a')}\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nCreate an apikey or use the default generated apikey.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/apikeys' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"clientId\": \"api-apikey-id\",\n \"clientSecret\": \"api-apikey-secret\",\n \"clientName\": \"api-apikey-name\",\n \"description\": \"api-apikey-id-description\",\n \"authorizedGroup\": \"default\",\n \"enabled\": true,\n \"throttlingQuota\": 10,\n \"dailyQuota\": 10,\n \"monthlyQuota\": 10,\n \"tags\": [\"foo\"],\n \"metadata\": {\n \"fii\": \"bar\"\n }\n}\nEOF\n```\n\nThen try to call the first service.\n\n```sh\ncurl http://api.oto.tools:8080/api/\\?foo\\=bar \\\n-H \"Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyLCJmb28iOiJiYXIifQ.lV130dFXR3bNtWBkwwf9dLmfsRVmnZhfYF9gvAaRzF8\" \\\n-H \"Otoroshi-Client-Id: api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: api-apikey-secret\" \\\n-H \"foo: bar\" | jq\n```\n\nThis will return the list of headers received by the mirror.\n\n```json\n{\n ...\n \"headers\": {\n ...\n \"my-expr-header.date\": \"2021-11-26T10:54:51.112+01:00\",\n \"my-expr-header.ctx.foo\": \"no-ctx-foo\",\n \"my-expr-header.env.path\": \"/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin\",\n \"my-expr-header.apikey.id\": \"admin-api-apikey-id\",\n \"my-expr-header.apikey.tag\": \"one-tag\",\n \"my-expr-header.service-id\": \"expression-language-api-service\",\n \"my-expr-header.apikey.name\": \"Otoroshi Backoffice ApiKey\",\n \"my-expr-header.config.port\": \"8080\",\n \"my-expr-header.request.uri\": \"/api/?foo=bar\",\n \"my-expr-header.service-env\": \"prod\",\n \"my-expr-header.service-tld\": \"oto.tools\",\n \"my-expr-header.request.host\": \"api.oto.tools:8080\",\n 
\"my-expr-header.request.path\": \"/api/\",\n \"my-expr-header.service-name\": \"expression-language\",\n \"my-expr-header.ctx.foo-field\": \"no-ctx-foob-foo\",\n \"my-expr-header.ctx.useragent\": \"no-ctx-useragent.foo\",\n \"my-expr-header.request.query\": \"bar\",\n \"my-expr-header.service-group\": \"default\",\n \"my-expr-header.request.domain\": \"api.oto.tools\",\n \"my-expr-header.request.header\": \"bar\",\n \"my-expr-header.request.method\": \"GET\",\n \"my-expr-header.service-domain\": \"api.oto.tools\",\n \"my-expr-header.apikey.metadata\": \"bar\",\n \"my-expr-header.ctx.geolocation\": \"no-ctx-geolocation.foo\",\n \"my-expr-header.token.foo-field\": \"no-token-foob-foo\",\n \"my-expr-header.date-with-format\": \"2021-11-26\",\n \"my-expr-header.request.full-url\": \"http://api.oto.tools:8080/api/?foo=bar\",\n \"my-expr-header.request.protocol\": \"http\",\n \"my-expr-header.service-metadata\": \"no-meta-foo\",\n \"my-expr-header.ctx.default-value\": \"other\",\n \"my-expr-header.env.unknown-field\": \"not-found-java_h\",\n \"my-expr-header.service-subdomain\": \"api\",\n \"my-expr-header.token.unknown-foo\": \"no-token-foo\",\n \"my-expr-header.apikey.unknown-tag\": \"one-tag\",\n \"my-expr-header.ctx.unknown-fields\": \"not-found\",\n \"my-expr-header.token.unknown-fields\": \"not-found\",\n \"my-expr-header.request.unknown-query\": \"default value\",\n \"my-expr-header.service-unknown-group\": \"default\",\n \"my-expr-header.request.unknown-header\": \"default value\",\n \"my-expr-header.apikey.unknown-metadata\": \"default value\",\n \"my-expr-header.ctx.replace-field-value\": \"no-ctx-foo\",\n \"my-expr-header.token.unknown-foo-field\": \"not-found-foob\",\n \"my-expr-header.service-unknown-metadata\": \"default-value\",\n \"my-expr-header.config.unknown-port-field\": \"not-found\",\n \"my-expr-header.token.replace-header-value\": \"no-token-foo\",\n \"my-expr-header.ctx.replace-field-all-value\": \"no-ctx-foo\",\n 
\"my-expr-header.token.replace-header-all-value\": \"no-token-foo\"\n }\n}\n```\n\nThen try the second call to the webapp. Navigate in your browser to `http://webapp.oto.tools:8080`. Continue with `user@foo.bar` as user and `password` as credential.\n\nThis should output:\n\n```json\n{\n ...\n \"headers\": {\n ...\n \"my-expr-header.user\": \"User Otoroshi\",\n \"my-expr-header.user.email\": \"user@foo.bar\",\n \"my-expr-header.user.metadata\": \"roger\",\n \"my-expr-header.user.profile-field\": \"User Otoroshi\",\n \"my-expr-header.user.unknown-metadata\": \"not-found\",\n \"my-expr-header.user.unknown-profile-field\": \"not-found\"\n }\n}\n```" + "content": "# Expression language\n\n- [Documentation and examples](#documentation-and-examples)\n- [Test the expression language](#test-the-expression-language)\n\nThe expression language provides an important mechanism for accessing and manipulating Otoroshi data on different inputs. For example, with this mechanism, you can map a claim of an incoming token directly into a claim of a generated token (using @ref:[JWT verifiers](../entities/jwt-verifiers.md)). You can add information from the traversed service descriptor, such as the domain or the name of the service. This information can be useful to the backend service.\n\n## Documentation and examples\n\n@@@div { #expressions }\n \n@@@\n\nIf an input contains a string starting with `${`, Otoroshi will try to evaluate the content. If the content doesn't match a known expression,\nthe 'bad-expr' value will be set.\n\n## Test the expression language\n\nYou can check that you get the same values as shown on the right by creating the following services. 
\n\n```sh\n# Let's start by downloading the latest Otoroshi.\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n\n# Once downloaded, run Otoroshi.\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n\n# Create an authentication module to protect the following route.\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/auths \\\n-H \"Otoroshi-Client-Id: admin-api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\"type\":\"basic\",\"id\":\"auth_mod_in_memory_auth\",\"name\":\"in-memory-auth\",\"desc\":\"in-memory-auth\",\"users\":[{\"name\":\"User Otoroshi\",\"password\":\"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\"email\":\"user@foo.bar\",\"metadata\":{\"username\":\"roger\"},\"tags\":[\"foo\"],\"webauthn\":null,\"rights\":[{\"tenant\":\"*:r\",\"teams\":[\"*:r\"]}]}],\"sessionCookieValues\":{\"httpOnly\":true,\"secure\":false}}\nEOF\n\n\n# Create a proxy of the mirror.otoroshi.io on http://api.oto.tools:8080\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"id\": \"expression-language-api-service\",\n \"name\": \"expression-language\",\n \"enabled\": true,\n \"frontend\": {\n \"domains\": [\n \"api.oto.tools/\"\n ]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\"\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"pass_with_user\": true,\n \"wipe_backend_request\": true,\n \"update_quotas\": true\n },\n \"plugin_index\": {\n \"validate_access\": 1,\n \"transform_request\": 2,\n 
\"match_route\": 0\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"config\": {\n \"pass_with_apikey\": true,\n \"auth_module\": null,\n \"module\": \"auth_mod_in_memory_auth\"\n },\n \"plugin_index\": {\n \"validate_access\": 1\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AdditionalHeadersIn\",\n \"config\": {\n \"headers\": {\n \"my-expr-header.apikey.unknown-tag\": \"${apikey.tags['0':'no-found-tag']}\",\n \"my-expr-header.request.uri\": \"${req.uri}\",\n \"my-expr-header.ctx.replace-field-all-value\": \"${ctx.foo.replaceAll('o','a')}\",\n \"my-expr-header.env.unknown-field\": \"${env.java_h:not-found-java_h}\",\n \"my-expr-header.service-id\": \"${service.id}\",\n \"my-expr-header.ctx.unknown-fields\": \"${ctx.foob|ctx.foot:not-found}\",\n \"my-expr-header.apikey.metadata\": \"${apikey.metadata.foo}\",\n \"my-expr-header.request.protocol\": \"${req.protocol}\",\n \"my-expr-header.service-domain\": \"${service.domain}\",\n \"my-expr-header.token.unknown-foo-field\": \"${token.foob:not-found-foob}\",\n \"my-expr-header.service-unknown-group\": \"${service.groups['0':'unkown group']}\",\n \"my-expr-header.env.path\": \"${env.PATH}\",\n \"my-expr-header.request.unknown-header\": \"${req.headers.foob:default value}\",\n \"my-expr-header.service-name\": \"${service.name}\",\n \"my-expr-header.token.foo-field\": \"${token.foob|token.foo}\",\n \"my-expr-header.request.path\": \"${req.path}\",\n \"my-expr-header.ctx.geolocation\": \"${ctx.geolocation.foo}\",\n \"my-expr-header.token.unknown-fields\": \"${token.foob|token.foob2:not-found}\",\n \"my-expr-header.request.unknown-query\": \"${req.query.foob:default value}\",\n \"my-expr-header.service-subdomain\": \"${service.subdomain}\",\n \"my-expr-header.date\": \"${date}\",\n \"my-expr-header.ctx.replace-field-value\": \"${ctx.foo.replace('o','a')}\",\n \"my-expr-header.apikey.name\": \"${apikey.name}\",\n \"my-expr-header.request.full-url\": 
\"${req.fullUrl}\",\n \"my-expr-header.ctx.default-value\": \"${ctx.foob:other}\",\n \"my-expr-header.service-tld\": \"${service.tld}\",\n \"my-expr-header.service-metadata\": \"${service.metadata.foo}\",\n \"my-expr-header.ctx.useragent\": \"${ctx.useragent.foo}\",\n \"my-expr-header.service-env\": \"${service.env}\",\n \"my-expr-header.request.host\": \"${req.host}\",\n \"my-expr-header.config.unknown-port-field\": \"${config.http.ports:not-found}\",\n \"my-expr-header.request.domain\": \"${req.domain}\",\n \"my-expr-header.token.replace-header-value\": \"${token.foo.replace('o','a')}\",\n \"my-expr-header.service-group\": \"${service.groups['0']}\",\n \"my-expr-header.ctx.foo\": \"${ctx.foo}\",\n \"my-expr-header.apikey.tag\": \"${apikey.tags['0']}\",\n \"my-expr-header.service-unknown-metadata\": \"${service.metadata.test:default-value}\",\n \"my-expr-header.apikey.id\": \"${apikey.id}\",\n \"my-expr-header.request.header\": \"${req.headers.foo}\",\n \"my-expr-header.request.method\": \"${req.method}\",\n \"my-expr-header.ctx.foo-field\": \"${ctx.foob|ctx.foo}\",\n \"my-expr-header.config.port\": \"${config.http.port}\",\n \"my-expr-header.token.unknown-foo\": \"${token.foo}\",\n \"my-expr-header.date-with-format\": \"${date.format('yyy-MM-dd')}\",\n \"my-expr-header.apikey.unknown-metadata\": \"${apikey.metadata.myfield:default value}\",\n \"my-expr-header.request.query\": \"${req.query.foo}\",\n \"my-expr-header.token.replace-header-all-value\": \"${token.foo.replaceAll('o','a')}\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nCreate an apikey or use the default generate apikey.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/apikeys' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"clientId\": \"api-apikey-id\",\n \"clientSecret\": \"api-apikey-secret\",\n \"clientName\": \"api-apikey-name\",\n \"description\": \"api-apikey-id-description\",\n \"authorizedGroup\": \"default\",\n 
\"enabled\": true,\n \"throttlingQuota\": 10,\n \"dailyQuota\": 10,\n \"monthlyQuota\": 10,\n \"tags\": [\"foo\"],\n \"metadata\": {\n \"fii\": \"bar\"\n }\n}\nEOF\n```\n\nThen try to call the first service.\n\n```sh\ncurl http://api.oto.tools:8080/api/\\?foo\\=bar \\\n-H \"Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyLCJmb28iOiJiYXIifQ.lV130dFXR3bNtWBkwwf9dLmfsRVmnZhfYF9gvAaRzF8\" \\\n-H \"Otoroshi-Client-Id: api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: api-apikey-secret\" \\\n-H \"foo: bar\" | jq\n```\n\nThis will return the list of headers received by the mirror.\n\n```json\n{\n ...\n \"headers\": {\n ...\n \"my-expr-header.date\": \"2021-11-26T10:54:51.112+01:00\",\n \"my-expr-header.ctx.foo\": \"no-ctx-foo\",\n \"my-expr-header.env.path\": \"/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin\",\n \"my-expr-header.apikey.id\": \"admin-api-apikey-id\",\n \"my-expr-header.apikey.tag\": \"one-tag\",\n \"my-expr-header.service-id\": \"expression-language-api-service\",\n \"my-expr-header.apikey.name\": \"Otoroshi Backoffice ApiKey\",\n \"my-expr-header.config.port\": \"8080\",\n \"my-expr-header.request.uri\": \"/api/?foo=bar\",\n \"my-expr-header.service-env\": \"prod\",\n \"my-expr-header.service-tld\": \"oto.tools\",\n \"my-expr-header.request.host\": \"api.oto.tools:8080\",\n \"my-expr-header.request.path\": \"/api/\",\n \"my-expr-header.service-name\": \"expression-language\",\n \"my-expr-header.ctx.foo-field\": \"no-ctx-foob-foo\",\n \"my-expr-header.ctx.useragent\": \"no-ctx-useragent.foo\",\n \"my-expr-header.request.query\": \"bar\",\n \"my-expr-header.service-group\": \"default\",\n \"my-expr-header.request.domain\": \"api.oto.tools\",\n \"my-expr-header.request.header\": \"bar\",\n \"my-expr-header.request.method\": \"GET\",\n \"my-expr-header.service-domain\": \"api.oto.tools\",\n \"my-expr-header.apikey.metadata\": \"bar\",\n \"my-expr-header.ctx.geolocation\": 
\"no-ctx-geolocation.foo\",\n \"my-expr-header.token.foo-field\": \"no-token-foob-foo\",\n \"my-expr-header.date-with-format\": \"2021-11-26\",\n \"my-expr-header.request.full-url\": \"http://api.oto.tools:8080/api/?foo=bar\",\n \"my-expr-header.request.protocol\": \"http\",\n \"my-expr-header.service-metadata\": \"no-meta-foo\",\n \"my-expr-header.ctx.default-value\": \"other\",\n \"my-expr-header.env.unknown-field\": \"not-found-java_h\",\n \"my-expr-header.service-subdomain\": \"api\",\n \"my-expr-header.token.unknown-foo\": \"no-token-foo\",\n \"my-expr-header.apikey.unknown-tag\": \"one-tag\",\n \"my-expr-header.ctx.unknown-fields\": \"not-found\",\n \"my-expr-header.token.unknown-fields\": \"not-found\",\n \"my-expr-header.request.unknown-query\": \"default value\",\n \"my-expr-header.service-unknown-group\": \"default\",\n \"my-expr-header.request.unknown-header\": \"default value\",\n \"my-expr-header.apikey.unknown-metadata\": \"default value\",\n \"my-expr-header.ctx.replace-field-value\": \"no-ctx-foo\",\n \"my-expr-header.token.unknown-foo-field\": \"not-found-foob\",\n \"my-expr-header.service-unknown-metadata\": \"default-value\",\n \"my-expr-header.config.unknown-port-field\": \"not-found\",\n \"my-expr-header.token.replace-header-value\": \"no-token-foo\",\n \"my-expr-header.ctx.replace-field-all-value\": \"no-ctx-foo\",\n \"my-expr-header.token.replace-header-all-value\": \"no-token-foo\"\n }\n}\n```\n\nThen try the second call to the webapp. Navigate in your browser to `http://webapp.oto.tools:8080`. 
Log in with `user@foo.bar` as the user and `password` as the password.\n\nThis should output:\n\n```json\n{\n ...\n \"headers\": {\n ...\n \"my-expr-header.user\": \"User Otoroshi\",\n \"my-expr-header.user.email\": \"user@foo.bar\",\n \"my-expr-header.user.metadata\": \"roger\",\n \"my-expr-header.user.profile-field\": \"User Otoroshi\",\n \"my-expr-header.user.unknown-metadata\": \"not-found\",\n \"my-expr-header.user.unknown-profile-field\": \"not-found\"\n }\n}\n```" }, { "name": "graphql-composer.md", diff --git a/manual/src/main/paradox/content.json index 9c26dfad92..d35c523f02 100644 --- a/manual/src/main/paradox/content.json +++ b/manual/src/main/paradox/content.json @@ -1 +1 @@ -[{"name":"about.md","id":"/about.md","url":"/about.html","title":"About Otoroshi","content":"# About Otoroshi\n\nAt the beginning of 2017, we needed to create a new environment to be able to create new \"digital\" products very quickly in an agile fashion at @link:[MAIF](https://www.maif.fr) { open=new }. Naturally we turned to PaaS solutions and chose the excellent @link:[Clever Cloud](https://www.clever-cloud.com) { open=new } product to run our apps. \n\nWe also chose that every feature team would have the freedom to choose its own technological stack to build its product. It was a nice move but it also introduced some challenges in terms of homogeneity for traceability, security, logging, ... because we did not want to force library usage in the products. We could have used something like the @link:[Service Mesh Pattern](http://philcalcado.com/2017/08/03/pattern_service_mesh.html) { open=new } but the deployment model of @link:[Clever Cloud](https://www.clever-cloud.com) { open=new } prevented us from doing it.\n\nThe right solution was to use a reverse proxy or some kind of API Gateway able to provide traceability, logging, security with apikeys, quotas, DNS as a service locator, etc. 
We needed something easy to use, with a human-friendly UI, a nice API to extend its features, true hot reconfiguration, able to generate internal events for third-party usage. A couple of solutions were available at that time, but none seemed to fit our needs: there was always something missing, something too complicated for our needs, or something not playing well with the @link:[Clever Cloud](https://www.clever-cloud.com) { open=new } deployment model.\n\nAt some point, we tried to write a small prototype to explore what our dream reverse proxy could be. The design was very simple, there were some rough edges, but every major feature needed was there, waiting to be enhanced.\n\n**Otoroshi** was born and we decided to move ahead with our hairy monster :)\n\n## Philosophy \n\nEvery OSS product built at @link:[MAIF](https://www.maif.fr) { open=new }, like the developer portal @link:[Daikoku](https://maif.github.io/daikoku) { open=new } or @link:[Izanami](https://maif.github.io/izanami) { open=new }, follows a common philosophy. \n\n* the services or API provided should be **technology agnostic**.\n* **http first**: http is the right answer to the previous point \n* **api first**: the UI is just another client of the api. \n* **secured**: the services exposed need authentication for both humans and machines \n* **event based**: the services should expose a way to get notified of what happened inside. \n"},{"name":"api.md","id":"/api.md","url":"/api.html","title":"Admin REST API","content":"# Admin REST API\n\nOtoroshi provides a fully featured REST admin API to perform almost every operation possible in the Otoroshi dashboard. 
The Otoroshi dashboard is just a regular consumer of the admin API.\n\nUsing the admin API, you can do whatever you want and enhance your Otoroshi instances with a lot of features that will fit your needs.\n\n## Swagger descriptor\n\nThe Otoroshi admin API is described using the OpenAPI format and is available at:\n\nhttps://maif.github.io/otoroshi/manual/code/openapi.json\n\nEvery Otoroshi instance provides its own embedded OpenAPI descriptor at:\n\nhttp://otoroshi.oto.tools:8080/api/openapi.json\n\n## Swagger documentation\n\nYou can read the OpenAPI descriptor in a more human-friendly fashion using `Swagger UI`. The swagger UI documentation of the Otoroshi admin API is available at:\n\nhttps://maif.github.io/otoroshi/swagger-ui/index.html\n\nEvery Otoroshi instance provides its own embedded Swagger UI at:\n\nhttp://otoroshi.oto.tools:8080/api/swagger/ui\n\nYou can also read the swagger UI documentation of the Otoroshi admin API below:\n\n@@@ div { .swagger-frame }\n\n\n@@@\n"},{"name":"architecture.md","id":"/architecture.md","url":"/architecture.html","title":"Architecture","content":"# Architecture\n\nWhen we started the development of Otoroshi, we had several classical patterns in mind like `Service gateway`, `Service locator`, `Circuit breakers`, etc ...\n\nAt first we thought about providing a bunch of libraries that would be included in each microservice or app to perform these tasks. But the more we thought about it, the more it felt weird and unagile, and it also prevented us from using any technical stack we wanted. So we decided to change our approach to something more universal.\n\nWe chose to make Otoroshi the central part of our microservices system, something between a reverse-proxy, a service gateway and a service locator where each call to a microservice (even from another microservice) must pass through Otoroshi. 
There are multiple benefits to doing this: each call can be logged, audited, monitored, integrated with a circuit breaker, etc., without imposing libraries or a technical stack. Any service is exposed through its own domain and we rely only on DNS to handle the service location part. Any access to a service is secured by default with an api key and is supervised by a circuit breaker to avoid cascading failures.\n\n@@@ div { .centered-img }\n\n@@@\n\nOtoroshi tries to embrace our @ref:[global philosophy](./about.md#philosophy) by providing a full-featured REST admin API, a gorgeous admin dashboard written in @link:[React](https://reactjs.org) { open=new } that uses the API, and by generating traffic events, alert events and audit events that can be consumed by several channels. Otoroshi also supports a bunch of datastores to better match different use cases.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"aws.md","id":"/deploy/aws.md","url":"/deploy/aws.html","title":"AWS - Elastic Beanstalk","content":"# AWS - Elastic Beanstalk\n\nNow you want to use Otoroshi on AWS. There are multiple options to deploy Otoroshi on AWS, \nfor instance:\n\n* You can deploy the @ref:[Docker image](../install/get-otoroshi.md#from-docker) on [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html)\n* You can create a basic [Amazon EC2](https://docs.aws.amazon.com/fr_fr/AWSEC2/latest/UserGuide/concepts.html) instance, access it via SSH, then \ndeploy the @ref:[otoroshi.jar](../install/get-otoroshi.md#from-jar-file) \n* Or you can use [AWS Elastic Beanstalk](https://aws.amazon.com/fr/elasticbeanstalk)\n\nIn this section we are going to cover how to deploy Otoroshi on [AWS Elastic Beanstalk](https://aws.amazon.com/fr/elasticbeanstalk). 
\n\n## AWS Elastic Beanstalk Overview\nUnlike Clever Cloud, to deploy an application on AWS Elastic Beanstalk, you don't link your app to your VCS repository, push your code and expect it to be built and run.\n\nAWS Elastic Beanstalk only does the run part. So you have to handle your own build pipeline and upload a Zip file containing your runnable; AWS Elastic Beanstalk will take it from there. \n \nE.g. for apps running on the JVM (Scala/Java/Kotlin) a Zip with the jar inside suffices, and for apps running in a Docker container, a Zip with the Dockerfile is enough. \n\n\n## Prepare your deployment target\nThere are two options to build your target. \n\nEither you create a Dockerfile from this @ref:[Docker image](../install/get-otoroshi.md#from-docker), build a zip, and do all the Otoroshi custom configuration using ENVs.\n\nOr you download the @ref:[otoroshi.jar](../install/get-otoroshi.md#from-jar-file), do all the Otoroshi custom configuration using your own otoroshi.conf, and create a Dockerfile that runs the jar using your otoroshi.conf. \n\nFor the second option your Dockerfile would look like this:\n\n```dockerfile\nFROM openjdk:11\nVOLUME /tmp\nEXPOSE 8080\nADD otoroshi.jar otoroshi.jar\nADD otoroshi.conf otoroshi.conf\nRUN sh -c 'touch /otoroshi.jar'\nENV JAVA_OPTS=\"\"\nENTRYPOINT [ \"sh\", \"-c\", \"java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -Dconfig.file=/otoroshi.conf -jar /otoroshi.jar\" ]\n``` \n \nI'd recommend the second option.\n \nNow Zip your target (Jar + Conf + Dockerfile) and get ready for deployment. 
\n\n## Create an Otoroshi instance on AWS Elastic Beanstalk\nFirst, go to the [AWS Elastic Beanstalk Console](https://eu-west-3.console.aws.amazon.com/elasticbeanstalk/home?region=eu-west-3#/welcome), don't forget to sign in, and make sure that you are in the right region (e.g. eu-west-3 for Paris).\n\nHit **Get started** \n\n@@@ div { .centered-img }\n\n@@@\n\nSpecify the **Application name** of your application, Otoroshi for example.\n\n@@@ div { .centered-img }\n\n@@@\n \nChoose the **Platform** of the application you want to create; in this case, use Docker.\n\nFor **Application code** choose **Upload your code** then hit **Upload**.\n\n@@@ div { .centered-img }\n\n@@@\n\nBrowse the zip created in the [previous section](#prepare-your-deployment-target) from your machine. \n\nAs you can see in the image above, you can also choose an S3 location; you could imagine that at the end of your build pipeline you upload your Zip to S3, and then get it from there (I wouldn't recommend that though).\n \nWhen the upload is done, hit **Configure more options**.\n \n@@@ div { .centered-img }\n\n@@@ \n \nRight now an AWS Elastic Beanstalk application has been created, and by default an environment named Otoroshi-env is being created as well.\n\nAWS Elastic Beanstalk can manage multiple environments of the same application, for instance environments like prod, preprod, experiments... \n\nOtoroshi is a bit particular: it doesn't make much sense to have multiple environments, since Otoroshi will handle all the requests from/to backend services regardless of the environment. \n \nAs you see in the image above, we are now configuring the Otoroshi-env, the one and only environment of Otoroshi.\n \nFor **Configuration presets**, choose custom configuration; now you have a load balancer for your environment with a capacity of at least one instance and at most four.\nI'd recommend at least 2 instances; to change that, on the **Capacity** card hit **Modify**. 
\n\n@@@ div { .centered-img }\n\n@@@\n\nChange the **Instances** to min 2, max 4 then hit **Save**. For the **Scaling triggers**, I'd keep the default values, but know that you can edit the capacity config any time you want; it only costs a redeploy, which is done automatically by the way.\n \nThe default instance size is t2.micro, which is a bit small for running Otoroshi; I'd recommend a t2.medium. \nOn the **Instances** card hit **Modify**.\n\n@@@ div { .centered-img }\n\n@@@\n\nFor **Instance type** choose t2.medium, then hit **Save**. No need to change the volume size, unless you have a lot of http call faults, which means a lot more logs; in that case the default volume size may not be enough.\n\nThe default environment created for Otoroshi, here Otoroshi-env, is a web server environment, which fits our case, but on AWS Elastic Beanstalk a web server environment for a Docker-based application runs behind an Nginx proxy by default.\nWe have to remove that proxy. So on the **Software** card hit **Modify**.\n \n@@@ div { .centered-img }\n\n@@@ \n \nFor **Proxy server** choose None then hit **Save**.\n\nAlso note that you can set Envs for Otoroshi on the same page (see image below). \n\n@@@ div { .centered-img }\n\n@@@ \n\nTo finalize the creation process, hit **Create app** on the bottom right.\n\nThe Otoroshi app is now created and running, but we still have neither a **datastore** nor **https**.\n \n## Create an Otoroshi datastore on AWS ElastiCache\n\nBy default Otoroshi uses non-persistent memory to store its data, but Otoroshi supports many kinds of datastores. In this section we will cover the Redis datastore. 
\n\nBefore starting: using a datastore hosted by AWS is not at all mandatory, feel free to use your own if you like. But if you want to learn more about ElastiCache, this section may interest you; otherwise you can skip it.\n\nGo to [AWS ElastiCache](https://eu-west-3.console.aws.amazon.com/elasticache/home?region=eu-west-3#) and hit **Get Started Now**.\n\n@@@ div { .centered-img }\n\n@@@ \n\nFor **Cluster engine** keep Redis.\n\nChoose a **Name** for your datastore, for instance otoroshi-datastore.\n\nYou can keep all the other default values and hit **Create** on the bottom right of the page.\n\nOnce your Redis Cluster is created, it would look like the image below.\n\n@@@ div { .centered-img }\n\n@@@ \n\n\nFor applications in the same security group as your cluster, the redis cluster is accessible via the **Primary Endpoint**. Don't worry, the default security group is fine; you don't need any configuration to access the cluster from Otoroshi.\n\nTo make Otoroshi use the created cluster, you can either use the Envs `APP_STORAGE=redis`, `REDIS_HOST` and `REDIS_PORT`, or set `otoroshi.storage=redis`, `otoroshi.redis.host` and `otoroshi.redis.port` in your otoroshi.conf.\n\n## Create SSL certificate and configure your domain\n\nOtoroshi now has a datastore, but it is not yet ready for use. 
\n\nIn order to get it ready you need to:\n\n* Configure Otoroshi with your domain \n* Create a wildcard SSL certificate for your domain\n* Configure Otoroshi AWS Elastic Beanstalk instance with the SSL certificate \n* Configure your DNS to redirect all traffic on your domain to Otoroshi \n \n### Configure Otoroshi with your domain\n\nYou can use ENVs or you can use a custom otoroshi.conf in your Docker container.\n\nFor the second option your otoroshi.conf would look like this:\n\n``` \n include \"application.conf\"\n http.port = 8080\n app {\n env = \"prod\"\n domain = \"mysubdomain.oto.tools\"\n rootScheme = \"https\"\n snowflake {\n seed = 0\n }\n events {\n maxSize = 1000\n }\n backoffice {\n subdomain = \"otoroshi\"\n session {\n exp = 86400000\n }\n }\n \n storage = \"redis\"\n redis {\n host=\"myredishost\"\n port=myredisport\n }\n \n privateapps {\n subdomain = \"privateapps\"\n }\n \n adminapi {\n targetSubdomain = \"otoroshi-admin-internal-api\"\n exposedSubdomain = \"otoroshi-api\"\n defaultValues {\n backOfficeGroupId = \"admin-api-group\"\n backOfficeApiKeyClientId = \"admin-client-id\"\n backOfficeApiKeyClientSecret = \"admin-client-secret\"\n backOfficeServiceId = \"admin-api-service\"\n }\n proxy {\n https = true\n local = false\n }\n }\n claim {\n sharedKey = \"myclaimsharedkey\"\n }\n }\n \n play.http {\n session {\n secure = false\n httpOnly = true\n maxAge = 2147483646\n domain = \".mysubdomain.oto.tools\"\n cookieName = \"oto-sess\"\n }\n }\n``` \n\n### Create a wildcard SSL certificate for your domain\n\nGo to [AWS Certificate Manager](https://eu-west-3.console.aws.amazon.com/acm/home?region=eu-west-3#/firstrun).\n\nBelow **Provision certificates** hit **Get started**.\n\n@@@ div { .centered-img }\n\n@@@ \n \nKeep the default selected value **Request a public certificate** and hit **Request a certificate**.\n \n@@@ div { .centered-img }\n\n@@@ \n\nEnter your **Domain name**; use `*.` 
for wildcard, for instance *\*.mysubdomain.oto.tools*, then hit **Next**.\n\n@@@ div { .centered-img }\n\n@@@ \n\nYou can choose between **Email validation** and **DNS validation**; I'd recommend **DNS validation**. Then hit **Review**. \n \n@@@ div { .centered-img }\n\n@@@ \n \nVerify that you entered the right **Domain name**, then hit **Confirm and request**. \n\n@@@ div { .centered-img }\n\n@@@\n \nAs you see in the image above, to let Amazon do the validation you have to add the `CNAME` record to your DNS configuration. Normally this operation takes around one day.\n \n### Configure Otoroshi AWS Elastic Beanstalk instance with the SSL certificate \n\nOnce the certificate is validated, you need to modify the configuration of Otoroshi-env to add the SSL certificate for HTTPS. \nFor that you need to go to [AWS Elastic Beanstalk applications](https://eu-west-3.console.aws.amazon.com/elasticbeanstalk/home?region=eu-west-3#/applications),\nhit **Otoroshi-env**, then on the left side hit **Configuration**, then on the **Load balancer** card hit **Modify**.\n\n@@@ div { .centered-img }\n\n@@@\n\nIn the **Application Load Balancer** section hit **Add listener**.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill in the popup as shown in the image above, then hit **Add**. 
\n\nYou should now be seeing something like this: \n \n@@@ div { .centered-img }\n\n@@@ \n \n \nMake sure that your listener is enabled, and on the bottom right of the page hit **Apply**.\n\nNow you have **https**, so let's use Otoroshi.\n\n### Configure your DNS to redirect all traffic on your domain to Otoroshi\n \nIt's actually pretty simple: you just need to add a `CNAME` record to your DNS configuration that redirects *\*.mysubdomain.oto.tools* to the DNS name of Otoroshi's load balancer.\n\nTo find the DNS name of Otoroshi's load balancer go to [AWS Ec2](https://eu-west-3.console.aws.amazon.com/ec2/v2/home?region=eu-west-3#LoadBalancers:tag:elasticbeanstalk:environment-name=Otoroshi-env;sort=loadBalancerName)\n\nYou should find something like this: \n \n@@@ div { .centered-img }\n\n@@@ \n\nThere is your DNS name, so add your `CNAME` record. \n \nOnce all these steps are done, the AWS Elastic Beanstalk Otoroshi instance will handle all the requests on your domain. ;) \n"},{"name":"clever-cloud.md","id":"/deploy/clever-cloud.md","url":"/deploy/clever-cloud.html","title":"Clever-Cloud","content":"# Clever-Cloud\n\nNow you want to use Otoroshi on Clever Cloud. Otoroshi has been designed and created to run on Clever Cloud and a lot of choices were made because of how Clever Cloud works.\n\n## Create an Otoroshi instance on CleverCloud\n\nIf you want to customize the configuration @ref:[use env. variables](../install/setup-otoroshi.md#configuration-with-env-variables), you can use [the example provided below](#example-of-clevercloud-env-variables)\n\nCreate a new CleverCloud app based on a clevercloud git repo (not empty) or a github project of your own (not empty).\n\n@@@ div { .centered-img }\n\n@@@\n\nThen choose what kind of app you want to create; for Otoroshi, choose `Java + Jar`\n\n@@@ div { .centered-img }\n\n@@@\n\nNext, choose the instance size and auto-scaling. 
Otoroshi can run on small instances, especially if you just want to test it.\n\n@@@ div { .centered-img }\n\n@@@\n\nFinally, choose a name for your app\n\n@@@ div { .centered-img }\n\n@@@\n\nNow you just need to customize environment variables\n\nAt this point, you can also add other env. variables to configure Otoroshi like in [the example provided below](#example-of-clevercloud-env-variables)\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also use expert mode:\n\n@@@ div { .centered-img }\n\n@@@\n\nNow your app is ready; don't forget to add a custom domain name on the CleverCloud app matching the Otoroshi app domain. \n\n## Example of CleverCloud env. variables\n\nYou can add more env variables to customize your Otoroshi instance like the following. Use the expert mode to copy/paste all the values in one shot. If you want a real datastore, create a redis addon on clevercloud, link it to your otoroshi app and change the `APP_STORAGE` variable to `redis`.\n\n
\n\n
\n```\nADMIN_API_CLIENT_ID=xxxx\nADMIN_API_CLIENT_SECRET=xxxxx\nADMIN_API_GROUP=xxxxxx\nADMIN_API_SERVICE_ID=xxxxxxx\nCLAIM_SHAREDKEY=xxxxxxx\nOTOROSHI_INITIAL_ADMIN_LOGIN=youremailaddress\nOTOROSHI_INITIAL_ADMIN_PASSWORD=yourpassword\nPLAY_CRYPTO_SECRET=xxxxxx\nSESSION_NAME=oto-session\nAPP_DOMAIN=yourdomain.tech\nAPP_ENV=prod\nAPP_STORAGE=inmemory\nAPP_ROOT_SCHEME=https\nCC_PRE_BUILD_HOOK=curl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/${latest_otoroshi_version}/otoroshi.jar'\nCC_JAR_PATH=./otoroshi.jar\nCC_JAVA_VERSION=11\nPORT=8080\nSESSION_DOMAIN=.yourdomain.tech\nSESSION_MAX_AGE=604800000\nSESSION_SECURE_ONLY=true\nUSER_AGENT=otoroshi\nMAX_EVENTS_SIZE=1\nWEBHOOK_SIZE=100\nAPP_BACKOFFICE_SESSION_EXP=86400000\nAPP_PRIVATEAPPS_SESSION_EXP=86400000\nENABLE_METRICS=true\nOTOROSHI_ANALYTICS_PRESSURE_ENABLED=true\nUSE_CACHE=true\n```\n
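If you switch to a redis addon, the storage-related variables above would change along these lines (a sketch: `your-redis-host` and the port are placeholders to replace with the values exposed by your addon; `APP_STORAGE`, `REDIS_HOST` and `REDIS_PORT` are the Otoroshi envs described in the AWS ElastiCache section):

```
APP_STORAGE=redis
REDIS_HOST=your-redis-host
REDIS_PORT=6379
```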
"},{"name":"clustering.md","id":"/deploy/clustering.md","url":"/deploy/clustering.html","title":"Otoroshi clustering","content":"# Otoroshi clustering\n\nOtoroshi can work as a cluster by default, as you can spin up many Otoroshi servers using the same datastore or datastore cluster. In that case any instance is capable of serving services, the Otoroshi admin UI, the Otoroshi admin API, etc.\n\nBut sometimes, this is not enough. So Otoroshi provides an additional clustering model named `Leader / Workers` where there is a leader cluster ([control plane](https://en.wikipedia.org/wiki/Control_plane)), composed of Otoroshi instances backed by a datastore like Redis, PostgreSQL or Cassandra, that is in charge of all `writes` to the datastore through the Otoroshi admin UI and API, and a worker cluster ([data plane](https://en.wikipedia.org/wiki/Forwarding_plane)) composed of horizontally scalable Otoroshi instances, backed by a super fast in-memory datastore, with the sole purpose of routing traffic to your services based on data synced from the leader cluster. With this distributed Otoroshi version, you can reach your goals of high availability, scalability and security.\n\nOtoroshi clustering only uses http internally (right now) for communication between leader and worker instances, so it is fully compatible with PaaS providers like [Clever-Cloud](https://www.clever-cloud.com/en/) that only provide one external port for http traffic.\n\n@@@ div { .centered-img }\n\n\n*Fig. 1: Simplified view*\n@@@\n\n@@@ div { .centered-img }\n\n\n*Fig. 2: Deployment view*\n@@@\n\n## Cluster configuration\n\n```hocon\notoroshi {\n cluster {\n mode = \"leader\" # can be \"off\", \"leader\", \"worker\"\n compression = 4 # compression of the data sent between leader cluster and worker cluster. 
From -1 (disabled) to 9\n leader {\n name = ${?CLUSTER_LEADER_NAME} # name of the instance, if none, it will be generated\n urls = [\"http://127.0.0.1:8080\"] # urls to contact the leader cluster\n host = \"otoroshi-api.oto.tools\" # host of the otoroshi api in the leader cluster\n clientId = \"apikey-id\" # otoroshi api client id\n clientSecret = \"secret\" # otoroshi api client secret\n cacheStateFor = 4000 # state is cached during (ms)\n }\n worker {\n name = ${?CLUSTER_WORKER_NAME} # name of the instance, if none, it will be generated\n retries = 3 # number of retries when calling leader cluster\n timeout = 2000 # timeout when calling leader cluster\n state {\n retries = ${otoroshi.cluster.worker.retries} # number of retries when calling leader cluster on state sync\n pollEvery = 10000 # interval of time (ms) between 2 state sync\n timeout = ${otoroshi.cluster.worker.timeout} # timeout when calling leader cluster on state sync\n }\n quotas {\n retries = ${otoroshi.cluster.worker.retries} # number of retries when calling leader cluster on quotas sync\n pushEvery = 2000 # interval of time (ms) between 2 quotas sync\n timeout = ${otoroshi.cluster.worker.timeout} # timeout when calling leader cluster on quotas sync\n }\n }\n }\n}\n```\n\nyou can also use many env. 
variables to configure the Otoroshi cluster\n\n```hocon\notoroshi {\n cluster {\n mode = ${?CLUSTER_MODE}\n compression = ${?CLUSTER_COMPRESSION}\n leader {\n name = ${?CLUSTER_LEADER_NAME}\n host = ${?CLUSTER_LEADER_HOST}\n url = ${?CLUSTER_LEADER_URL}\n clientId = ${?CLUSTER_LEADER_CLIENT_ID}\n clientSecret = ${?CLUSTER_LEADER_CLIENT_SECRET}\n groupingBy = ${?CLUSTER_LEADER_GROUP_BY}\n cacheStateFor = ${?CLUSTER_LEADER_CACHE_STATE_FOR}\n stateDumpPath = ${?CLUSTER_LEADER_DUMP_PATH}\n }\n worker {\n name = ${?CLUSTER_WORKER_NAME}\n retries = ${?CLUSTER_WORKER_RETRIES}\n timeout = ${?CLUSTER_WORKER_TIMEOUT}\n state {\n retries = ${?CLUSTER_WORKER_STATE_RETRIES}\n pollEvery = ${?CLUSTER_WORKER_POLL_EVERY}\n timeout = ${?CLUSTER_WORKER_POLL_TIMEOUT}\n }\n quotas {\n retries = ${?CLUSTER_WORKER_QUOTAS_RETRIES}\n pushEvery = ${?CLUSTER_WORKER_PUSH_EVERY}\n timeout = ${?CLUSTER_WORKER_PUSH_TIMEOUT}\n }\n }\n }\n}\n```\n\n@@@ warning\nYou **should** expose the Otoroshi API used for data sync over HTTPS, as sensitive information is exchanged between the control plane and the data plane.\n@@@\n\n@@@ warning\nYou **must** have the same cluster configuration on every Otoroshi instance (worker/leader) with only names and mode changed for each instance. 
Some things in leader/worker are computed using the configuration of their counterpart worker/leader.\n@@@\n\n## Cluster UI\n\nOnce an Otoroshi instance is launched as a cluster leader, a new row of live metrics tiles will be available on the home page of the Otoroshi admin UI.\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also access a more detailed view of the cluster at `Settings (cog icon) / Cluster View`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Run examples\n\nfor the leader \n\n```sh\njava -Dhttp.port=8091 -Dhttps.port=9091 -Dotoroshi.cluster.mode=leader -jar otoroshi.jar\n```\n\nfor a worker\n\n```sh\njava -Dhttp.port=8092 -Dhttps.port=9092 -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0=http://127.0.0.1:8091 -jar otoroshi.jar\n```\n\n## Setup a cluster by example\n\nIf you want to see how to set up an otoroshi cluster, just check @ref:[the clustering tutorial](../how-to-s/setup-otoroshi-cluster.md)"},{"name":"index.md","id":"/deploy/index.md","url":"/deploy/index.html","title":"Deploy to production","content":"# Deploy to production\n\nNow it's time to deploy Otoroshi in production; in this chapter we will see what kind of things you can do.\n\nOtoroshi can run wherever you want, even on a raspberry pi (Cluster^^) ;)\n\n@@@div { .plugin .platform }\n\n## Clever Cloud\n\nOtoroshi provides an integration to easily create services based on applications deployed on your Clever Cloud account.\n\n\n@ref:[Documentation](./clever-cloud.md)\n@@@\n\n@@@div { .plugin .platform } \n## Kubernetes\nStarting at version 1.5.0, Otoroshi provides native Kubernetes support.\n\n\n\n@ref:[Documentation](./kubernetes.md)\n@@@\n\n@@@div { .plugin .platform } \n## AWS Elastic Beanstalk\n\nRun Otoroshi on AWS Elastic Beanstalk\n\n\n\n@ref:[Tutorial](./aws.md)\n@@@\n\n@@@div { .plugin .platform } \n## Amazon ECS\n\nDeploy the Otoroshi Docker image using Amazon Elastic Container 
Service\n\n\n\n@link:[Tutorial](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker)\n\n@@@\n\n@@@div { .plugin .platform }\n## GCE\n\nDeploy the Docker image using Google Compute Engine container integration\n\n\n\n@link:[Documentation](https://cloud.google.com/compute/docs/containers/deploying-containers)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker)\n\n@@@\n\n@@@div { .plugin .platform } \n## Azure\n\nDeploy the Docker image using Azure Container Service\n\n\n\n@link:[Documentation](https://azure.microsoft.com/en-us/services/container-service/)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker) \n@@@\n\n@@@div { .plugin .platform } \n## Heroku\n\nDeploy the Docker image using Docker integration\n\n\n\n@link:[Documentation](https://devcenter.heroku.com/articles/container-registry-and-runtime)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker)\n@@@\n\n@@@div { .plugin .platform } \n## CloudFoundry\n\nDeploy the Docker image using Docker integration\n\n\n\n@link:[Documentation](https://docs.cloudfoundry.org/adminguide/docker.html)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker)\n@@@\n\n@@@div { .plugin .platform .platform-actions-column } \n## Your own infrastructure\n\nAs Otoroshi is a Play Framework application, you can read the doc about putting a `Play` app in production.\n\nDownload the latest Otoroshi distribution, unzip it, customize it and run it.\n\n@link:[Play Framework](https://www.playframework.com)\n@link:[Production Configuration](https://www.playframework.com/documentation/2.6.x/ProductionConfiguration)\n@ref:[Otoroshi distribution](../install/get-otoroshi.md#from-zip)\n@@@\n\n@@@div { .break }\n## Scaling and clustering in production\n@@@\n\n\n@@@div { .plugin .platform .dark-platform } \n## Clustering\n\nDeploy Otoroshi as a cluster of leaders and workers.\n\n\n@ref:[Documentation](./clustering.md)\n@@@\n\n@@@div 
{ .plugin .platform .dark-platform } \n## Scaling Otoroshi\n\nOtoroshi is designed to be reasonably easy to scale and highly available.\n\n\n@ref:[Documentation](./scaling.md) \n@@@\n\n@@@ index\n\n* [Clustering](./clustering.md)\n* [Kubernetes](./kubernetes.md)\n* [Clever Cloud](./clever-cloud.md)\n* [AWS - Elastic Beanstalk](./aws.md)\n* [Scaling](./scaling.md) \n\n@@@\n"},{"name":"kubernetes.md","id":"/deploy/kubernetes.md","url":"/deploy/kubernetes.html","title":"Kubernetes","content":"# Kubernetes\n\nStarting at version 1.5.0, Otoroshi provides native Kubernetes support. Multiple otoroshi jobs (that are actually kubernetes controllers) are provided in order to\n\n- sync kubernetes secrets of type `kubernetes.io/tls` to otoroshi certificates\n- act as a standard ingress controller (supporting `Ingress` objects)\n- provide Custom Resource Definitions (CRDs) to manage Otoroshi entities from Kubernetes and act as an ingress controller with its own resources\n\n## Installing otoroshi on your kubernetes cluster\n\n@@@ warning\nYou need to have cluster admin privileges to install otoroshi and its service account, role mapping and CRDs on a kubernetes cluster. We also advise you to create a dedicated namespace (you can name it `otoroshi` for example) to install otoroshi.\n@@@\n\nIf you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.\n\nYou can also create a `kustomization.yaml` file with a remote base\n\n```yaml\nbases:\n- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v16.5.2\n```\n\nThen deploy it with `kubectl apply -k ./overlays/myoverlay`. 
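For reference, a custom overlay can itself be just a `kustomization.yaml` built on the remote base shown above. A minimal sketch (the `otoroshi` namespace and the `./overlays/myoverlay` layout are assumptions; adapt them to your cluster):

```yaml
# ./overlays/myoverlay/kustomization.yaml
# Minimal overlay: reuse the upstream "simple" overlay as a base
# and pin every resource to a dedicated namespace.
bases:
- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v16.5.2
namespace: otoroshi
```

You would then deploy it with `kubectl apply -k ./overlays/myoverlay`.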
\n\nYou can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster\n\n```sh\nhelm repo add otoroshi https://maif.github.io/otoroshi/helm\nhelm install my-otoroshi otoroshi/otoroshi\n```\n\nBelow, you will find examples of deployments. Do not hesitate to adapt them to your needs. Those descriptors have value placeholders that you will need to replace with actual values like \n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: ${domain}\n```\n\nYou will have to edit it to make it look like\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'apis.my.domain'\n```\n\nIf you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of otoroshi\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: maif/otoroshi:16.5.2\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create several secrets for each placeholder, mount them to the otoroshi container and then use their file paths as values\n\n```yaml\n env:\n 
- name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nyou can use the same trick in the config file itself\n\n### Note on bare metal kubernetes cluster installation\n\n@@@ note\nBare metal kubernetes clusters don't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize the examples below.\n@@@\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@\n\n### Common manifests\n\nthe following manifests are always needed. They create otoroshi CRDs, tokens, roles, etc. Redis deployment is not mandatory, it's just an example. You can use your own existing setup.\n\nrbac.yaml\n: @@snip [rbac.yaml](../snippets/kubernetes/kustomize/base/rbac.yaml) \n\ncrds.yaml\n: @@snip [crds.yaml](../snippets/kubernetes/kustomize/base/crds.yaml) \n\nredis.yaml\n: @@snip [redis.yaml](../snippets/kubernetes/kustomize/base/redis.yaml) \n\n\n### Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type `LoadBalancer` to expose otoroshi to the rest of the world. 
You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple/dns.example) \n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/dns.example) \n\n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have one otoroshi instance on each kubernetes node (with the `otoroshi-kind: instance` label) with redis persistence. The otoroshi instances are exposed as `hostPort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
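The `otoroshi-kind: instance` label used by this DaemonSet is not applied automatically. Assuming hypothetical node names, you can tag the nodes that should run an otoroshi instance with something like

```shell
# label the nodes that should host an otoroshi instance
# (node names are hypothetical, adapt them to your cluster)
kubectl label nodes k8s-node-1 k8s-node-2 otoroshi-kind=instance

# check which nodes carry the label
kubectl get nodes -l otoroshi-kind=instance
```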
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/dns.example) \n\n### Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas of an otoroshi leader connected to a redis instance and 2 replicas of an otoroshi worker connected to the leader. We use a service of type `LoadBalancer` to expose the otoroshi leader/worker to the rest of the world. You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster\n\nHere we have 2 replicas of the otoroshi leader connected to the same redis instance and 2 replicas of the otoroshi worker. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have 1 otoroshi leader instance on each kubernetes node (with the `otoroshi-kind: leader` label) connected to the same redis instance and 1 otoroshi worker instance on each kubernetes node (with the `otoroshi-kind: worker` label). The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
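As with the simple DaemonSet deployment, the `otoroshi-kind: leader` and `otoroshi-kind: worker` labels have to be applied by hand. Assuming hypothetical node names, something like

```shell
# dedicate one set of nodes to leaders and another to workers
# (node names are hypothetical, adapt them to your cluster)
kubectl label nodes k8s-node-1 otoroshi-kind=leader
kubectl label nodes k8s-node-2 k8s-node-3 otoroshi-kind=worker
```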
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n "KubernetesConfig": {\n "enabled": true,\n "endpoint": "https://127.0.0.1:6443",\n "token": "eyJhbGciOiJSUzI....F463SrpOehQRaQ",\n "namespaces": [\n "*"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n "KubernetesConfig": {\n "endpoint": "https://127.0.0.1:6443", // the endpoint to talk to the kubernetes api, optional\n "token": "xxxx", // the bearer token to talk to the kubernetes api, optional\n "userPassword": "user:password", // the user password tuple to talk to the kubernetes api, optional\n "caCert": "/etc/ca.cert", // the ca cert file path to talk to the kubernetes api, optional\n "trust": false, // trust any cert to talk to the kubernetes api, optional\n "namespaces": ["*"], // the watched namespaces\n "labels": ["label"], // the watched labels\n "ingressClasses": ["otoroshi"], // the watched kubernetes.io/ingress.class annotations, can be *\n "defaultGroup": "default", // the group to put services in otoroshi\n "ingresses": true, // sync ingresses\n "crds": false, // sync crds\n 
\"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities. can be \"default\" to use otoroshi default templates\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n \"data-exporter\": {},\n \"routes\": {},\n \"route-compositions\": {},\n \"backends\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nNow you can deploy your first service ;)\n\n### Deploy an ingress route\n\nnow let's say you want to deploy an http service and route to the outside world through otoroshi\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: kennethreitz/httpbin\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 80\n name: \"http\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8080\n targetPort: http\n name: http\n selector:\n run: 
http-app-deployment\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nonce deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n### Support for Ingress Classes\n\nSince Kubernetes 1.18, you can use `IngressClass` type of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest `kind`. To use it, configure the Ingress job to match your controller\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"otoroshi.io/ingress-controller\"],\n ...\n }\n}\n```\n\nthen you have to deploy an `IngressClass` to declare Otoroshi as an ingress controller\n\n```yaml\napiVersion: \"networking.k8s.io/v1beta1\"\nkind: \"IngressClass\"\nmetadata:\n name: \"otoroshi-ingress-controller\"\nspec:\n controller: \"otoroshi.io/ingress-controller\"\n parameters:\n apiGroup: \"proxy.otoroshi.io/v1alpha\"\n kind: \"IngressParameters\"\n name: \"otoroshi-ingress-controller\"\n```\n\nand use it in your `Ingress`\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\nspec:\n ingressClassName: otoroshi-ingress-controller\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\n### Use multiple ingress controllers\n\nIt is of course 
possible to use multiple ingress controllers at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation `kubernetes.io/ingress.class`. By default, otoroshi reacts to the class `otoroshi`, but you can make it the default ingress controller with the following config\n\n```json\n{\n "KubernetesConfig": {\n ...\n "ingressClasses": ["*"],\n ...\n }\n}\n```\n\n### Supported annotations\n\nif you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customisation, just go to the CRDs part. The following annotations are supported:\n\n- `ingress.otoroshi.io/groups`\n- `ingress.otoroshi.io/group`\n- `ingress.otoroshi.io/groupId`\n- `ingress.otoroshi.io/name`\n- `ingress.otoroshi.io/targetsLoadBalancing`\n- `ingress.otoroshi.io/stripPath`\n- `ingress.otoroshi.io/enabled`\n- `ingress.otoroshi.io/userFacing`\n- `ingress.otoroshi.io/privateApp`\n- `ingress.otoroshi.io/forceHttps`\n- `ingress.otoroshi.io/maintenanceMode`\n- `ingress.otoroshi.io/buildMode`\n- `ingress.otoroshi.io/strictlyPrivate`\n- `ingress.otoroshi.io/sendOtoroshiHeadersBack`\n- `ingress.otoroshi.io/readOnly`\n- `ingress.otoroshi.io/xForwardedHeaders`\n- `ingress.otoroshi.io/overrideHost`\n- `ingress.otoroshi.io/allowHttp10`\n- `ingress.otoroshi.io/logAnalyticsOnServer`\n- `ingress.otoroshi.io/useAkkaHttpClient`\n- `ingress.otoroshi.io/useNewWSClient`\n- `ingress.otoroshi.io/tcpUdpTunneling`\n- `ingress.otoroshi.io/detectApiKeySooner`\n- `ingress.otoroshi.io/letsEncrypt`\n- `ingress.otoroshi.io/publicPatterns`\n- `ingress.otoroshi.io/privatePatterns`\n- `ingress.otoroshi.io/additionalHeaders`\n- `ingress.otoroshi.io/additionalHeadersOut`\n- `ingress.otoroshi.io/missingOnlyHeadersIn`\n- `ingress.otoroshi.io/missingOnlyHeadersOut`\n- `ingress.otoroshi.io/removeHeadersIn`\n- `ingress.otoroshi.io/removeHeadersOut`\n- `ingress.otoroshi.io/headersVerification`\n- 
`ingress.otoroshi.io/matchingHeaders`\n- `ingress.otoroshi.io/ipFiltering.whitelist`\n- `ingress.otoroshi.io/ipFiltering.blacklist`\n- `ingress.otoroshi.io/api.exposeApi`\n- `ingress.otoroshi.io/api.openApiDescriptorUrl`\n- `ingress.otoroshi.io/healthCheck.enabled`\n- `ingress.otoroshi.io/healthCheck.url`\n- `ingress.otoroshi.io/jwtVerifier.ids`\n- `ingress.otoroshi.io/jwtVerifier.enabled`\n- `ingress.otoroshi.io/jwtVerifier.excludedPatterns`\n- `ingress.otoroshi.io/authConfigRef`\n- `ingress.otoroshi.io/redirection.enabled`\n- `ingress.otoroshi.io/redirection.code`\n- `ingress.otoroshi.io/redirection.to`\n- `ingress.otoroshi.io/clientValidatorRef`\n- `ingress.otoroshi.io/transformerRefs`\n- `ingress.otoroshi.io/transformerConfig`\n- `ingress.otoroshi.io/accessValidator.enabled`\n- `ingress.otoroshi.io/accessValidator.excludedPatterns`\n- `ingress.otoroshi.io/accessValidator.refs`\n- `ingress.otoroshi.io/accessValidator.config`\n- `ingress.otoroshi.io/preRouting.enabled`\n- `ingress.otoroshi.io/preRouting.excludedPatterns`\n- `ingress.otoroshi.io/preRouting.refs`\n- `ingress.otoroshi.io/preRouting.config`\n- `ingress.otoroshi.io/issueCert`\n- `ingress.otoroshi.io/issueCertCA`\n- `ingress.otoroshi.io/gzip.enabled`\n- `ingress.otoroshi.io/gzip.excludedPatterns`\n- `ingress.otoroshi.io/gzip.whiteList`\n- `ingress.otoroshi.io/gzip.blackList`\n- `ingress.otoroshi.io/gzip.bufferSize`\n- `ingress.otoroshi.io/gzip.chunkedThreshold`\n- `ingress.otoroshi.io/gzip.compressionLevel`\n- `ingress.otoroshi.io/cors.enabled`\n- `ingress.otoroshi.io/cors.allowOrigin`\n- `ingress.otoroshi.io/cors.exposeHeaders`\n- `ingress.otoroshi.io/cors.allowHeaders`\n- `ingress.otoroshi.io/cors.allowMethods`\n- `ingress.otoroshi.io/cors.excludedPatterns`\n- `ingress.otoroshi.io/cors.maxAge`\n- `ingress.otoroshi.io/cors.allowCredentials`\n- `ingress.otoroshi.io/clientConfig.useCircuitBreaker`\n- `ingress.otoroshi.io/clientConfig.retries`\n- `ingress.otoroshi.io/clientConfig.maxErrors`\n- 
`ingress.otoroshi.io/clientConfig.retryInitialDelay`\n- `ingress.otoroshi.io/clientConfig.backoffFactor`\n- `ingress.otoroshi.io/clientConfig.connectionTimeout`\n- `ingress.otoroshi.io/clientConfig.idleTimeout`\n- `ingress.otoroshi.io/clientConfig.callAndStreamTimeout`\n- `ingress.otoroshi.io/clientConfig.callTimeout`\n- `ingress.otoroshi.io/clientConfig.globalTimeout`\n- `ingress.otoroshi.io/clientConfig.sampleInterval`\n- `ingress.otoroshi.io/enforceSecureCommunication`\n- `ingress.otoroshi.io/sendInfoToken`\n- `ingress.otoroshi.io/sendStateChallenge`\n- `ingress.otoroshi.io/secComHeaders.claimRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateResponseName`\n- `ingress.otoroshi.io/secComTtl`\n- `ingress.otoroshi.io/secComVersion`\n- `ingress.otoroshi.io/secComInfoTokenVersion`\n- `ingress.otoroshi.io/secComExcludedPatterns`\n- `ingress.otoroshi.io/secComSettings.size`\n- `ingress.otoroshi.io/secComSettings.secret`\n- `ingress.otoroshi.io/secComSettings.base64`\n- `ingress.otoroshi.io/secComUseSameAlgo`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.size`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64`\n- `ingress.otoroshi.io/secComAlgoInfoToken.size`\n- `ingress.otoroshi.io/secComAlgoInfoToken.secret`\n- `ingress.otoroshi.io/secComAlgoInfoToken.base64`\n- `ingress.otoroshi.io/securityExcludedPatterns`\n\nfor more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n\nwith the previous example, the ingress does not define any apikey, so the route is public. 
If you want to enable apikeys on it, you can deploy the following descriptor\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\n ingress.otoroshi.io/group: http-app-group\n ingress.otoroshi.io/forceHttps: 'true'\n ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'\n ingress.otoroshi.io/overrideHost: 'true'\n ingress.otoroshi.io/allowHttp10: 'false'\n ingress.otoroshi.io/publicPatterns: ''\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nnow you can use an existing apikey in the `http-app-group` to access your app\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1\n```\n\n## Use Otoroshi CRDs for a better/full integration\n\nOtoroshi provides some Custom Resource Definitions for kubernetes in order to manage Otoroshi related entities in kubernetes\n\n- `service-groups`\n- `service-descriptors`\n- `apikeys`\n- `certificates`\n- `global-configs`\n- `jwt-verifiers`\n- `auth-modules`\n- `scripts`\n- `tcp-services`\n- `data-exporters`\n- `admins`\n- `teams`\n- `organizations`\n\nusing CRDs, you will be able to deploy and manage those entities from kubectl or the kubernetes api like\n\n```sh\nsudo kubectl get apikeys --all-namespaces\nsudo kubectl get service-descriptors --all-namespaces\ncurl -X GET \\\n -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \\\n -H 'Accept: application/json' -k \\\n https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1/apikeys | jq\n```\n\nYou can see this as better `Ingress` resources. Just as any `Ingress` resource can define which controller it uses (using the `kubernetes.io/ingress.class` annotation), you can choose another kind of resource instead of `Ingress`. 
With Otoroshi CRDs you can even define resources like `Certificate`, `Apikey`, `AuthModules`, `JwtVerifier`, etc. It will help you to use all the power of Otoroshi while using the deployment model of kubernetes.\n \n@@@ warning\nwhen using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the descriptors deployed will override the ones in the Otoroshi datastore each time it's synced. So be careful if you use the Otoroshi UI or the API, some changes in configuration may be overridden by the CRDs sync job.\n@@@\n\n### Resources examples\n\ngroup.yaml\n: @@snip [group.yaml](../snippets/crds/group.yaml) \n\napikey.yaml\n: @@snip [apikey.yaml](../snippets/crds/apikey.yaml) \n\nservice-descriptor.yaml\n: @@snip [service.yaml](../snippets/crds/service-descriptor.yaml) \n\ncertificate.yaml\n: @@snip [cert.yaml](../snippets/crds/certificate.yaml) \n\njwt.yaml\n: @@snip [jwt.yaml](../snippets/crds/jwt.yaml) \n\nauth.yaml\n: @@snip [auth.yaml](../snippets/crds/auth.yaml) \n\norganization.yaml\n: @@snip [orga.yaml](../snippets/crds/organization.yaml) \n\nteam.yaml\n: @@snip [team.yaml](../snippets/crds/team.yaml) \n\n\n### Configuration\n\nTo configure it, just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Otoroshi CRDs Controller`. 
Then add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n "KubernetesConfig": {\n "enabled": true,\n "crds": true,\n "endpoint": "https://127.0.0.1:6443",\n "token": "eyJhbGciOiJSUzI....F463SrpOehQRaQ",\n "namespaces": [\n "*"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n "KubernetesConfig": {\n "endpoint": "https://127.0.0.1:6443", // the endpoint to talk to the kubernetes api, optional\n "token": "xxxx", // the bearer token to talk to the kubernetes api, optional\n "userPassword": "user:password", // the user password tuple to talk to the kubernetes api, optional\n "caCert": "/etc/ca.cert", // the ca cert file path to talk to the kubernetes api, optional\n "trust": false, // trust any cert to talk to the kubernetes api, optional\n "namespaces": ["*"], // the watched namespaces\n "labels": ["label"], // the watched labels\n "ingressClasses": ["otoroshi"], // the watched kubernetes.io/ingress.class annotations, can be *\n "defaultGroup": "default", // the group to put services in otoroshi\n "ingresses": false, // sync ingresses\n "crds": true, // sync crds\n "kubeLeader": false, // delegate leader election to kubernetes, to know where the sync job should run\n "restartDependantDeployments": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n "templates": { // template for entities that will be merged with kubernetes entities. 
can be \"default\" to use otoroshi default templates\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n \"data-exporter\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nyou can find a more complete example of the configuration object [here](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/plugins/jobs/kubernetes/config.scala#L134-L163)\n\n### Note about `apikeys` and `certificates` resources\n\nApikeys and Certificates are a little bit different than the other resources. They have ability to be defined without their secret part, but with an export setting so otoroshi will generate the secret parts and export the apikey or the certificate to kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)\n\nIn those resources you can define \n\n```yaml\nexportSecret: true \nsecretName: the-secret-name\n```\n\nand omit `clientSecret` for apikey or `publicKey`, `privateKey` for certificates. 
For certificates, you will have to provide a `csr` in order to generate them\n\n```yaml\ncsr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n - httpapps.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n```\n\nwhen apikeys are exported as kubernetes secrets, they will have the type `otoroshi.io/apikey-secret` with values `clientId` and `clientSecret`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: apikey-1\ntype: otoroshi.io/apikey-secret\ndata:\n clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n```\n\nwhen certificates are exported as kubernetes secrets, they will have the type `kubernetes.io/tls` with the standard values `tls.crt` (the full cert chain) and `tls.key` (the private key). 
For more convenience, they will also have a `cert.crt` value containing the actual certificate without the ca chain and `ca-chain.crt` containing the ca chain without the certificate.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: certificate-1\ntype: kubernetes.io/tls\ndata:\n tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA== \n```\n\n## Full CRD example\n\nthen you can deploy the previous example with a better configuration level, using mtls, apikeys, etc\n\nLet's say the app looks like:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\n// here we read the apikey to access http-app-2 from files mounted from secrets\nconst clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')\nconst clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')\n\nconst backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')\nconst backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')\nconst backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')\n\nconst clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')\nconst clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')\nconst clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')\n\nfunction callApi2() {\n return new Promise((success, failure) => {\n const options = { \n // using the implicit internal name (*.global.otoroshi.mesh) 
of the other service descriptor passing through otoroshi\n hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh', \n port: 443, \n path: '/', \n method: 'GET',\n headers: {\n 'Accept': 'application/json',\n 'Otoroshi-Client-Id': clientId,\n 'Otoroshi-Client-Secret': clientSecret,\n },\n cert: clientCert,\n key: clientKey,\n ca: clientCa\n }; \n let data = '';\n const req = https.request(options, (res) => { \n res.on('data', (d) => { \n data = data + d.toString('utf8');\n }); \n res.on('end', () => { \n success({ body: JSON.parse(data), res });\n }); \n res.on('error', (e) => { \n failure(e);\n }); \n }); \n req.end();\n})\n}\n\nconst options = { \n key: backendKey, \n cert: backendCert, \n ca: backendCa, \n // we want mtls behavior\n requestCert: true, \n rejectUnauthorized: true\n}; \nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {'Content-Type': 'application/json'});\n callApi2().then(resp => {\n res.end(JSON.stringify({ message: `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body })); \n });\n}).listen(443);\n```\n\nthen, the descriptors will be:\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: foo/http-app\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 443\n name: "https"\n volumeMounts:\n - name: apikey-volume\n # here you will be able to read apikey from files \n # - /var/run/secrets/kubernetes.io/apikeys/clientId\n # - /var/run/secrets/kubernetes.io/apikeys/clientSecret\n mountPath: "/var/run/secrets/kubernetes.io/apikeys"\n readOnly: true\n - name: backend-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/backend/tls.key\n 
mountPath: \"/var/run/secrets/kubernetes.io/certs/backend\"\n readOnly: true\n - name: client-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/client/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/client/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/client\"\n readOnly: true\n volumes:\n - name: apikey-volume\n secret:\n # here we reference the secret name from apikey http-app-2-apikey-1\n secretName: secret-2\n - name: backend-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-backend\n secretName: http-app-certificate-backend-secret\n - name: client-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-client\n secretName: http-app-certificate-client-secret\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8443\n targetPort: https\n name: https\n selector:\n run: http-app-deployment\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ServiceGroup\nmetadata:\n name: http-app-group\n annotations:\n otoroshi.io/id: http-app-group\nspec:\n description: a group to hold services about the http-app\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-apikey-1\n# this apikey can be used to access the app\nspec:\n # a secret name secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-1\n authorizedEntities: \n - group_http-app-group\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-1\n# this apikey can be used to access another app in a different group\nspec:\n # a secret name secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-2\n authorizedEntities: \n - group_http-app-2-group\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-frontend\nspec:\n 
description: certificate for the http-app on otoroshi frontend\n autoRenew: true\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-backend\nspec:\n description: certificate for the http-app deployed on pods\n autoRenew: true\n # a secret name http-app-certificate-backend-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-backend-secret\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - http-app-service \n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-back, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-client\nspec:\n description: certificate for the http-app\n autoRenew: true\n secretName: http-app-certificate-client-secret\n csr:\n issuer: CN=Otoroshi Root\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-client, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ServiceDescriptor\nmetadata:\n name: http-app-service-descriptor\nspec:\n description: the service descriptor for the http app\n groups: \n - http-app-group\n forceHttps: true\n hosts:\n - httpapp.foo.bar # hostname exposed outside of the kubernetes cluster\n # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster \n matchingRoot: /\n targets:\n - url: https://http-app-service:8443\n # alternatively, you can use serviceName and servicePort to use pods ip addresses\n # serviceName: 
http-app-service\n # servicePort: https\n mtlsConfig:\n # use mtls to contact the backend\n mtls: true\n certs: \n # reference the DN for the client cert\n - UID=httpapp-client, O=OtoroshiApps\n trustedCerts: \n # reference the DN for the CA cert \n - CN=Otoroshi Root\n sendOtoroshiHeadersBack: true\n xForwardedHeaders: true\n overrideHost: true\n allowHttp10: false\n publicPatterns:\n - /health\n additionalHeaders:\n x-foo: bar\n# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc ... for more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n```\n\nnow with this descriptor deployed, you can access your app with a command like \n\n```sh\nCLIENT_ID=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientId}\" | base64 --decode`\nCLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientSecret}\" | base64 --decode`\ncurl -X GET https://httpapp.foo.bar/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n## Expose Otoroshi to outside world\n\nIf you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: `LoadBalancer`). You'll need to declare, in your DNS settings, that any name routed by otoroshi points to the loadbalancer endpoint (CNAME or ip addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they will provide external loadbalancers out of the box. However, if you use a bare metal kubernetes cluster, it doesn't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. 
You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples in the installation section.\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@ \n\n## Access a service from inside the k8s cluster\n\n### Using host header overriding\n\nYou can access any service referenced in otoroshi, through otoroshi, from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain like :\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using dedicated services\n\nit's also possible to define services that target the otoroshi deployment (or the otoroshi workers deployment) and use them as valid hosts in otoroshi services \n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-awesome-service\nspec:\n selector:\n # run: otoroshi-deployment\n # or in cluster mode\n run: otoroshi-worker-deployment\n ports:\n - port: 8080\n name: \"http\"\n targetPort: \"http\"\n - port: 8443\n name: \"https\"\n targetPort: \"https\"\n```\n\nand access it like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-namespace.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using coredns integration\n\nYou can also enable the coredns integration to simplify the flow. 
You can use the following keys in the plugin config :\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"coreDnsIntegration\": true, // enable coredns integration for intra cluster calls\n \"kubeSystemNamespace\": \"kube-system\", // the namespace where coredns is deployed\n \"corednsConfigMap\": \"coredns\", // the name of the coredns configmap\n \"otoroshiServiceName\": \"otoroshi-service\", // the name of the otoroshi service, could be otoroshi-workers-service\n \"otoroshiNamespace\": \"otoroshi\", // the namespace where otoroshi is deployed\n \"clusterDomain\": \"cluster.local\", // the domain for cluster services\n ...\n }\n}\n```\n\notoroshi will patch the coredns config at startup, then you can call your services like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\nBy default, all services created from CRD service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh` or `${service-name}.${service-namespace}.svc.otoroshi.local`\n\n### Using coredns with manual patching\n\nyou can also patch the coredns config manually\n\n```sh\nkubectl edit configmaps coredns -n kube-system # or your own custom config map\n```\n\nand change the `Corefile` data to add the following snippet at the end of the file\n\n```yaml\notoroshi.mesh:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n rewrite name regex (.*)\\.otoroshi\\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n```\n\nyou can also define a simpler rewrite if it suits your use case better\n\n```\nrewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n```\n\ndo not hesitate to change `otoroshi-worker-service.otoroshi` according to your own setup. If otoroshi is not in cluster mode, change it to `otoroshi-service.otoroshi`. If otoroshi is not deployed in the `otoroshi` namespace, change it to `otoroshi-service.the-namespace`, etc.\n\nBy default, all services created from CRD service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh`\n\nthen you can call your service like \n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\n\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using old kube-dns system\n\nif you're stuck with an old version of kubernetes that uses kube-dns, which is not supported by otoroshi, you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system. 
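For reference, declaring a stub domain in the legacy kube-dns system is done through its `ConfigMap`. A minimal sketch (the coredns service ip `10.3.0.200` below is a hypothetical placeholder, adapt it to the cluster ip of your own coredns service):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # forward every *.otoroshi.mesh query to the dedicated coredns deployment
  stubDomains: |
    {"otoroshi.mesh": ["10.3.0.200"]}
```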
\n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the kube-dns integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"kubeDnsOperatorIntegration\": true, // enable kube-dns integration for intra cluster calls\n \"kubeDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"kubeDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"kubeDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\n### Using Openshift DNS operator\n\nThe Openshift DNS operator does not allow much customization of the DNS configuration, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the Openshift DNS operator integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"openshiftDnsOperatorIntegration\": true, // enable openshift dns operator integration for intra cluster calls\n \"openshiftDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"openshiftDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"openshiftDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\ndon't forget to update the otoroshi `ClusterRole`\n\n```yaml\n- apiGroups:\n - operator.openshift.io\n resources:\n - dnses\n verbs:\n - get\n - list\n - watch\n - update\n```\n\n## CRD validation in kubectl\n\nIn order to get CRD validation before manifest deployments right inside kubectl, you can deploy a validation webhook that will do the trick. 
Also check that you have the `otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator` request sink enabled.\n\nvalidation-webhook.yaml\n: @@snip [validation-webhook.yaml](../snippets/kubernetes/kustomize/base/validation-webhook.yaml)\n\n## Easier integration with otoroshi-sidecar\n\nOtoroshi can help you to easily use existing services without modifications while getting all the perks of otoroshi like apikeys, mTLS, exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pod of your deployment that will handle calls coming from and going to otoroshi. To enable otoroshi-sidecar, you need to deploy the following admission webhook. Also check that you have the `otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector` request sink enabled.\n\nsidecar-webhook.yaml\n: @@snip [sidecar-webhook.yaml](../snippets/kubernetes/kustomize/base/sidecar-webhook.yaml)\n\nthen it's quite easy to add the sidecar, just add the following label to your pod `otoroshi.io/sidecar: inject` and some annotations to tell otoroshi what certificates and apikeys to use.\n\n```yaml\nannotations:\n otoroshi.io/sidecar-apikey: backend-apikey\n otoroshi.io/sidecar-backend-cert: backend-cert\n otoroshi.io/sidecar-client-cert: oto-client-cert\n otoroshi.io/token-secret: secret\n otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps\n```\n\nnow you can just call your otoroshi-handled apis from inside your pod like `curl http://my-service.namespace.otoroshi.mesh/api` without passing any apikey or client certificate and the sidecar will handle everything for you. 
The same goes for calls from otoroshi to your pod: everything will be done in an mTLS fashion with apikeys and the otoroshi exchange protocol\n\nhere is a full example\n\nsidecar.yaml\n: @@snip [sidecar.yaml](../snippets/kubernetes/kustomize/base/sidecar.yaml)\n\n@@@ warning\nPlease avoid using port `80` for your pod as it's the default port to access otoroshi from your pod and calls will be redirected to the sidecar via an iptables rule\n@@@\n\n## Daikoku integration\n\nIt is possible to easily integrate daikoku generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and setup the integration mode to `Automatic`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen when a user subscribes for an apikey, they will only see an integration token\n\n@@@ div { .centered-img }\n\n@@@\n\nthen just create an ApiKey manifest with this token and you're good to go \n\n```yaml\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-3\nspec:\n exportSecret: true \n secretName: secret-3\n daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy\n```\n\n"},{"name":"scaling.md","id":"/deploy/scaling.md","url":"/deploy/scaling.html","title":"Scaling Otoroshi","content":"# Scaling Otoroshi\n\n## Using multiple instances with a front load balancer\n\nOtoroshi has been designed to work with multiple instances. If you already have an infrastructure using frontal load balancing, you just have to declare Otoroshi instances as the target of all domain names handled by Otoroshi\n\n## Using master / workers mode of Otoroshi\n\nYou can read everything about it in @ref:[the clustering section](../deploy/clustering.md) of the documentation.\n\n## Using IPVS\n\nYou can use [IPVS](https://en.wikipedia.org/wiki/IP_Virtual_Server) to load balance layer 4 traffic directly from the Linux Kernel to multiple instances of Otoroshi. 
You can find an example of configuration [here](http://www.linuxvirtualserver.org/VS-DRouting.html) \n\n## Using DNS Round Robin\n\nYou can use the [DNS round robin technique](https://en.wikipedia.org/wiki/Round-robin_DNS) to declare multiple A records under the domain names handled by Otoroshi.\n\n## Using software L4/L7 load balancers\n\nYou can use software load balancers like NGINX or HAProxy to load balance layer 4 or layer 7 traffic to multiple instances of Otoroshi.\n\nNGINX L7\n: @@snip [nginx-http.conf](../snippets/nginx-http.conf) \n\nNGINX L4\n: @@snip [nginx-tcp.conf](../snippets/nginx-tcp.conf) \n\nHA Proxy L7\n: @@snip [haproxy-http.conf](../snippets/haproxy-http.conf) \n\nHA Proxy L4\n: @@snip [haproxy-tcp.conf](../snippets/haproxy-tcp.conf) \n\n## Using a custom TCP load balancer\n\nYou can also use any other TCP load balancer, from a hardware box to a small js file like\n\ntcp-proxy.js\n: @@snip [tcp-proxy.js](../snippets/tcp-proxy.js) \n\ntcp-proxy.rs\n: @@snip [tcp-proxy.rs](../snippets/proxy.rs) \n\n"},{"name":"dev.md","id":"/dev.md","url":"/dev.html","title":"Developing Otoroshi","content":"# Developing Otoroshi\n\nIf you want to play with Otoroshi's code, here are some tips\n\n## The tools\n\nYou will need\n\n* git\n* JDK >= 11\n* SBT >= 1.3.x\n* Node 13 + yarn 1.x\n\n## Clone the repository\n\n```sh\ngit clone https://github.com/MAIF/otoroshi.git\n```\n\nor fork otoroshi and clone your own repository.\n\n## Run otoroshi in dev mode\n\nto run otoroshi in dev mode, you'll need to run two separate processes to serve the javascript UI and the server part.\n\n### Javascript side\n\njust go to `/otoroshi/javascript` and install the dependencies with\n\n```sh\nyarn install\n# or\nnpm install\n```\n\nthen run the dev server with\n\n```sh\nyarn start\n# or\nnpm run start\n```\n\n### Server side\n\nsetup SBT opts with\n\n```sh\nexport SBT_OPTS=\"-Xmx2G -Xss6M\"\n```\n\nthen just go to `/otoroshi` and run the sbt console with 
\n\n```sh\nsbt\n```\n\nthen in the sbt console run the following command\n\n```sh\n~reStart\n# to pass jvm args, you can use: ~reStart --- -Dotoroshi.storage=memory ...\n```\n\nyou can now access your otoroshi instance at `http://otoroshi.oto.tools:9999`\n\n## Test otoroshi\n\nto run the otoroshi tests just go to `/otoroshi` and run the main test suite with\n\n```sh\nsbt 'testOnly OtoroshiTests'\n```\n\n## Create a release\n\njust go to `/otoroshi/javascript` and then build the UI\n\n```sh\nyarn install\nyarn build\n```\n\nthen go to `/otoroshi` and build the otoroshi distribution\n\n```sh\nsbt ';clean;compile;dist;assembly'\n```\n\nthe otoroshi build is waiting for you in `/otoroshi/target/scala-2.12/otoroshi.jar` or `/otoroshi/target/universal/otoroshi-1.x.x.zip`\n\n## Build the documentation\n\nfrom the root of your repository run\n\n```sh\nsh ./scripts/doc.sh all\n```\n\nThe documentation is located at `manual/target/paradox/site/main/`\n\n## Format the sources\n\nfrom the root of your repository run\n\n```sh\nsh ./scripts/fmt.sh\n```\n"},{"name":"apikeys.md","id":"/entities/apikeys.md","url":"/entities/apikeys.html","title":"Apikeys","content":"# Apikeys\n\nAn API key is a unique identifier used to connect to, or perform, a route call. 
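As a quick illustration, an apikey can be passed as a standard Basic `Authorization` header built from its client id and secret (the values below are hypothetical placeholders):

```sh
# hypothetical apikey credentials
CLIENT_ID="my-client-id"
CLIENT_SECRET="my-client-secret"
# the Basic Authorization header value is base64("<clientId>:<clientSecret>")
AUTH_HEADER="Basic $(printf '%s:%s' "$CLIENT_ID" "$CLIENT_SECRET" | base64 | tr -d '\n')"
echo "$AUTH_HEADER"
```

this is the same header that `curl -u "$CLIENT_ID:$CLIENT_SECRET"` would send.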
\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can find a concrete example @ref:[here](../how-to-s/secure-with-apikey.md)\n\n* `ApiKey Id`: the id is a unique random key that will represent this API key\n* `ApiKey Secret`: the secret is a random key used to validate the API key\n* `ApiKey Name`: a name for the API key, used for debug purposes\n* `ApiKey description`: a useful description for this apikey\n* `Valid until`: auto disable apikey after this date\n* `Enabled`: if the API key is disabled, then any call using this API key will fail\n* `Read only`: if the API key is in read only mode, every request done with this api key will only work for GET, HEAD, OPTIONS verbs\n* `Allow pass by clientid only`: here you allow clients to pass only the client id in a specific header in order to grant access to the underlying api\n* `Constrained services only`: this apikey can only be used on services using apikey routing constraints\n* `Authorized on`: the groups/services linked to this api key\n\n### Metadata and tags\n\n* `Tags`: tags attached to the api key\n* `Metadata`: metadata attached to the api key\n\n### Automatic secret rotation\n\nAPI keys can handle automatic secret rotation by themselves. When enabled, the rotation changes the secret every `Rotation every` duration. During the `Grace period` both secrets will be usable.\n \n* `Enabled`: enable automatic apikey secret rotation\n* `Rotation every`: rotate secrets every\n* `Grace period`: period when both secrets can be used\n* `Next client secret`: display the next generated client secret\n\n### Restrictions\n\n* `Enabled`: enable restrictions\n* `Allow last`: Otoroshi will test forbidden and notFound paths before testing allowed paths\n* `Allowed`: allowed paths\n* `Forbidden`: forbidden paths\n* `Not Found`: not found paths\n\n### Call examples\n\n* `Curl Command`: simple request with the api key passed by header\n* `Basic Auth. 
Header`: the Authorization header with the api key in base64 encoded format\n* `Curl Command with Basic Auth. Header`: simple request with the api key passed in the Authorization header in base64 format\n\n### Quotas\n\n* `Throttling quota`: the authorized number of calls per second\n* `Daily quota`: the authorized number of calls per day\n* `Monthly quota`: the authorized number of calls per month\n\n@@@ warning\n\nDaily and monthly quotas are based on the following rules :\n\n* daily quota is computed between 00h00:00.000 and 23h59:59.999 of the current day\n* monthly quota is computed between the first day of the month at 00h00:00.000 and the last day of the month at 23h59:59.999\n@@@\n\n### Quotas consumption\n\n* `Consumed daily calls`: the number of calls consumed today\n* `Remaining daily calls`: the remaining number of calls for today\n* `Consumed monthly calls`: the number of calls consumed this month\n* `Remaining monthly calls`: the remaining number of calls for this month\n\n"},{"name":"auth-modules.md","id":"/entities/auth-modules.md","url":"/entities/auth-modules.html","title":"Authentication modules","content":"# Authentication modules\n\nThe authentication modules manage the access to the Otoroshi UI and can protect a route.\n\nA `private Otoroshi app` is an Otoroshi route with the Authentication plugin enabled.\n\nThe supported authentication modules are :\n\n* `OAuth 2.0/2.1` : an authorization standard that allows a user to grant limited access to their resources on one site to another site, without having to expose their credentials\n* `OAuth 1.0a` : the original standard for access delegation\n* `In memory` : create users directly in Otoroshi with rights and metadata\n* `LDAP : Lightweight Directory Access Protocol` : connect users using a set of LDAP servers\n* `SAML V2 - Security Assertion Markup Language` : an open-standard, XML-based data format that allows businesses to communicate user authentication and authorization information to partner 
companies and enterprise applications their employees may use.\n\nAll authentication modules have a unique `id`, a `name` and a `description`.\n\nEach module also has the following fields : \n\n* `Tags`: list of tags associated to the module\n* `Metadata`: list of metadata associated to the module\n* `HttpOnly`: if enabled, the cookie cannot be accessed through client side script, preventing cross-site scripting (XSS) by not revealing the cookie to a third party\n* `Secure`: if enabled, avoids including the cookie in an HTTP request without a secure channel, typically HTTPS.\n* `Session max. age`: duration until the session expires\n* `User validators`: a list of validators that will check if a user that successfully logged in has the right to actually pass otoroshi, based on the content of its profile. A validator is composed of a [JSONPath](https://goessner.net/articles/JsonPath/) that will tell what to check and a value that is the expected value. The JSONPath will be applied on a document that will look like\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"randomId\": \"xxxxx\",\n \"name\": \"john.doe@otoroshi.io\",\n \"email\": \"john.doe@otoroshi.io\",\n \"authConfigId\": \"xxxxxxxx\",\n \"profile\": { // the profile shape depends heavily on the identity provider\n \"sub\": \"xxxxxx\",\n \"nickname\": \"john.doe\",\n \"name\": \"john.doe@otoroshi.io\",\n \"picture\": \"https://foo.bar/avatar.png\",\n \"updated_at\": \"2022-04-20T12:57:39.723Z\",\n \"email\": \"john.doe@otoroshi.io\",\n \"email_verified\": true,\n \"rights\": [\"one\", \"two\"]\n },\n \"token\": { // the token shape depends heavily on the identity provider\n \"access_token\": \"xxxxxx\",\n \"refresh_token\": \"yyyyyy\",\n \"id_token\": \"zzzzzz\",\n \"scope\": \"openid profile email address phone offline_access\",\n \"expires_in\": 86400,\n \"token_type\": \"Bearer\"\n },\n \"realm\": \"global-oauth-xxxxxxx\",\n \"otoroshiData\": {\n ...\n },\n 
\"createdAt\": 1650459462650,\n \"expiredAt\": 1650545862652,\n \"lastRefresh\": 1650459462650,\n \"metadata\": {},\n \"tags\": []\n}\n```\n\nthe expected value supports some syntax tricks like \n\n* `Not(value)` on a string to check if the current value does not equal another value\n* `Regex(regex)` on a string to check if the current value matches the regex\n* `RegexNot(regex)` on a string to check if the current value does not match the regex\n* `Wildcard(*value*)` on a string to check if the current value matches the value with wildcards\n* `WildcardNot(*value*)` on a string to check if the current value does not match the value with wildcards\n* `Contains(value)` on a string to check if the current value contains a value\n* `ContainsNot(value)` on a string to check if the current value does not contain a value\n* `Contains(Regex(regex))` on an array to check if one of the items of the array matches the regex\n* `ContainsNot(Regex(regex))` on an array to check if one of the items of the array does not match the regex\n* `Contains(Wildcard(*value*))` on an array to check if one of the items of the array matches the wildcard value\n* `ContainsNot(Wildcard(*value*))` on an array to check if one of the items of the array does not match the wildcard value\n* `Contains(value)` on an array to check if the array contains a value\n* `ContainsNot(value)` on an array to check if the array does not contain a value\n\nfor instance to check if the current user has the right `two`, you can write the following validator\n\n```js\n{\n \"path\": \"$.profile.rights\",\n \"value\": \"Contains(two)\"\n}\n```\n\n## OAuth 2.0 / OIDC provider\n\nIf you want to secure an app or your Otoroshi UI with this provider, you can check these tutorials : @ref[Secure an app with keycloak](../how-to-s/secure-app-with-keycloak.md) or @ref[Secure an app with auth0](../how-to-s/secure-app-with-auth0.md)\n\n* `Use cookie`: If your OAuth2 provider does not support query param in redirect uri, 
you can use cookies instead\n* `Use json payloads`: the access token, sent to retrieve the user info, will be passed in the body as JSON. If disabled, it will be sent as a Map.\n* `Enabled PKCE flow`: with PKCE, a malicious attacker can only intercept the Authorization Code, and they cannot exchange it for a token without the Code Verifier.\n* `Disable wildcard on redirect URIs`: As of OAuth 2.1, query parameters on redirect URIs are no longer allowed\n* `Refresh tokens`: Automatically refresh the access token using the refresh token if available\n* `Read profile from token`: if enabled, the user profile will be read from the access token, otherwise the user profile will be retrieved from the user information url\n* `Super admins only`: All logged in users will have super admins rights\n* `Client ID`: a public identifier of your app\n* `Client Secret`: a secret known only to the application and the authorization server\n* `Authorize URL`: used to interact with the resource owner and get the authorization to access the protected resource\n* `Token URL`: used by the application in order to get an access token or a refresh token\n* `Introspection URL`: used to validate access tokens\n* `Userinfo URL`: used to retrieve the profile of the user\n* `Login URL`: used to redirect the user to the login provider page\n* `Logout URL`: redirect uri used by the identity provider to redirect the user after logging out\n* `Callback URL`: redirect uri sent to the identity provider to redirect the user after successfully connecting\n* `Access token field name`: field used to search the access token in the response body of the token URL call\n* `Scope`: scopes presented to the user in the consent screen. 
Scopes are space-separated lists of identifiers used to specify what access privileges are being requested\n* `Claims`: requested name/value pairs that contain information about a user.\n* `Name field name`: Retrieve name from token field\n* `Email field name`: Retrieve email from token field\n* `Otoroshi metadata field name`: Retrieve metadata from token field\n* `Otoroshi rights field name`: Retrieve user rights from user profile\n* `Extra metadata`: merged with the user metadata\n* `Data override`: merged with extra metadata when a user connects to a `private app`\n* `Rights override`: useful when you want to erase the rights of a user with only specific rights. This field is the last to be applied on the user rights.\n* `Api key metadata field name`: used to extract api key metadata from the OIDC access token \n* `Api key tags field name`: used to extract api key tags from the OIDC access token \n* `Proxy host`: host of the proxy behind the identity provider\n* `Proxy port`: port of the proxy behind the identity provider\n* `Proxy principal`: user of the proxy \n* `Proxy password`: password of the proxy\n* `OIDC config url`: URI of the openid-configuration used to discover documents. 
By convention, this URI ends with `.well-known/openid-configuration`\n* `Token verification`: What kind of algorithm you want to use to verify/sign your JWT token with\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Hmac secret`: The Hmac secret\n* `Base64 encoded secret`: Is the secret encoded with base64\n* `Custom TLS Settings`: TLS settings for JWKS fetching\n* `TLS loose`: if enabled, will block all untrustful ssl configs\n* `Trust all`: allows any server certificates even the self-signed ones\n* `Client certificates`: list of client certificates used to communicate with the JWKS server\n* `Trusted certificates`: list of trusted certificates received from the JWKS server\n\n## OAuth 1.0a provider\n\nIf you want to secure an app or your Otoroshi UI with this provider, you can check this tutorial : @ref[Secure an app with OAuth 1.0a](../how-to-s/secure-with-oauth1-client.md)\n\n* `Http Method`: method used to get the request token and the access token \n* `Consumer key`: the identifier portion of the client credentials (equivalent to a username)\n* `Consumer secret`: the secret portion of the client credentials (equivalent to a password)\n* `Request Token URL`: url to retrieve the request token\n* `Authorize URL`: used to redirect the user to the login page\n* `Access token URL`: used to retrieve the access token from the server\n* `Profile URL`: used to get the user profile\n* `Callback URL`: used to redirect the user when successfully connecting\n* `Rights override`: override the rights of the connected user. With JSON format, each authenticated user, using email, can be associated to a list of rights on tenants and Otoroshi teams.\n\n## LDAP Authentication provider\n\nIf you want to secure an app or your Otoroshi UI with this provider, you can check this tutorial : @ref[Secure an app with LDAP](../how-to-s/secure-app-with-ldap.md)\n\n* `Basic auth.`: if enabled, user and password will be extracted from the `Authorization` header as a Basic authentication. 
It will skip the Otoroshi login page \n* `Allow empty password`: LDAP servers configured by default with the possibility to connect without password can be secured by this module to ensure that the user provides a password\n* `Super admins only`: All logged in users will have super admins rights\n* `Extract profile`: extract the LDAP profile in the Otoroshi user\n* `LDAP Server URL`: list of LDAP servers to join. Otoroshi uses this list in sequence and swaps to the next server each time a server times out\n* `Search Base`: used as a global filter\n* `Users search base`: concatenated with the search base to search users in LDAP\n* `Mapping group filter`: map LDAP groups with Otoroshi rights\n* `Search Filter`: used to filter users. *\${username}* is replaced by the email of the user and compared to the given field\n* `Admin username (bind DN)`: holds the name of the environment property for specifying the identity of the principal for authenticating the caller to the service\n* `Admin password`: holds the name of the environment property for specifying the credentials of the principal for authenticating the caller to the service\n* `Extract profile filters attributes in`: keep only attributes which are matching the regex\n* `Extract profile filters attributes not in`: keep only attributes which are not matching the regex\n* `Name field name`: Retrieve name from LDAP field\n* `Email field name`: Retrieve email from LDAP field\n* `Otoroshi metadata field name`: Retrieve metadata from LDAP field\n* `Extra metadata`: merged with the user metadata\n* `Data override`: merged with extra metadata when a user connects to a `private app`\n* `Additional rights group`: list of virtual groups. A virtual group is composed of a list of users and a list of rights for each team/organization.\n* `Rights override`: useful when you want to erase the rights of a user with only specific rights. 
This field is the last to be applied on the user rights.\n\n## In memory provider\n\n* `Basic auth.`: if enabled, user and password will be extracted from the `Authorization` header as a Basic authentication. It will skip the Otoroshi login page \n* `Login with WebAuthn` : enable login with WebAuthn\n* `Users`: list of users with *name*, *email* and *metadata*. The default password is *password*. The edit button is useful when you want to change the password of the user. The reset button reinitializes the password. \n* `Users raw`: show the registered users with their profile and their rights. You can edit directly each field, especially the rights of the user.\n\n## SAML v2 provider\n\n* `Single sign on URL`: the Identity Provider Single Sign-On URL\n* `The protocol binding for the login request`: the protocol binding for the login request\n* `Single Logout URL`: a SAML flow that allows the end-user to log out from a single session and be automatically logged out of all related sessions that were established during SSO\n* `The protocol binding for the logout request`: the protocol binding for the logout request\n* `Sign documents`: Should SAML Request be signed by Otoroshi ?\n* `Validate Assertions Signature`: Enable/disable signature validation of SAML assertions\n* `Validate assertions with Otoroshi certificate`: validate assertions with Otoroshi certificate. 
If disabled, the `Encryption Certificate` and `Encryption Private Key` fields can be used to pass a certificate and a private key to validate assertions.\n* `Encryption Certificate`: certificate used to verify assertions\n* `Encryption Private Key`: private key used to verify assertions\n* `Signing Certificate`: certificate used to sign documents\n* `Signing Private Key`: private key to sign documents\n* `Signature algorithm`: the signature algorithm to use to sign documents\n* `Canonicalization Method`: canonicalization method for XML signatures \n* `Encryption KeyPair`: the keypair used to sign/verify assertions\n* `Name ID Format`: SP and IdP usually communicate with each other about a subject. That subject should be identified through a NAME-IDentifier, which should be in some format so that it is easy for the other party to identify it based on the Format\n* `Use NameID format as email`: use NameID format as email. If disabled, the email will be searched in the attributes\n* `URL issuer`: provide the URL of the IdP that will issue the security token\n* `Validate Signature`: enable/disable signature validation of SAML responses\n* `Validate Assertions Signature`: should SAML Assertions be decrypted ?\n* `Validating Certificates`: the certificate in PEM format that must be used to check for signatures.\n\n## Special routes\n\nwhen using private apps with auth. modules, you can access special routes that can help you \n\n```sh \nGET 'http://xxxxxxxx.xxxx.xx/.well-known/otoroshi/logout' # trigger logout for the current auth. 
module\nGET 'http://xxxxxxxx.xxxx.xx/.well-known/otoroshi/me' # get the current logged-in user profile (do not forget to pass cookies)\n```\n\n## Related pages\n* @ref[Secure an app with auth0](../how-to-s/secure-app-with-auth0.md)\n* @ref[Secure an app with keycloak](../how-to-s/secure-app-with-keycloak.md)\n* @ref[Secure an app with LDAP](../how-to-s/secure-app-with-ldap.md)\n* @ref[Secure an app with OAuth 1.0a](../how-to-s/secure-with-oauth1-client.md)"},{"name":"backends.md","id":"/entities/backends.md","url":"/entities/backends.html","title":"Backends","content":"# Backends\n\nA backend represents a list of servers to target in a route, along with their client settings, load balancing, etc.\n\nBackends can be defined directly in the route designer or on their dedicated page in order to be reusable.\n\n## UI page\n\nYou can find all backends [here](http://otoroshi.oto.tools:8080/bo/dashboard/backends)\n\n## Global Properties\n\n* `Targets root path`: the path to add to each request sent to the downstream service \n* `Full path rewrite`: when enabled, the path of the URI will be totally stripped and replaced by the value of `Targets root path`. If this value contains expression language expressions, they will be interpolated before forwarding the request to the backend. When combined with things like named path parameters, it is possible to perform a full URL rewrite on the target path like\n\n* input: `subdomain.domain.tld/api/users/$id<[0-9]+>/bills`\n* output: `target.domain.tld/apis/v1/basic_users/${req.pathparams.id}/all_bills`\n\n## Targets\n\nThe list of targets that Otoroshi will proxy and expose through the subdomain defined before. Otoroshi will do round-robin load balancing between all those targets with a circuit breaker mechanism to avoid cascading failures.\n\n* `id`: unique id of the target\n* `Hostname`: the hostname of the target without scheme\n* `Port`: the port of the target\n* `TLS`: call the target via HTTPS\n* `Weight`: the weight of the target. 
This value is used by the load balancing strategy to dispatch the traffic between all targets\n* `Predicate`: a function to filter targets from the target list based on a predefined predicate\n* `Protocol`: protocol used to call the target, can only be `HTTP/1.0`, `HTTP/1.1`, `HTTP/2.0` or `HTTP/3.0`\n* `IP address`: the IP address of the target\n* `TLS Settings`:\n * `Enabled`: enable this section\n * `TLS loose`: if enabled, will block all untrustful ssl configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with the downstream service\n * `Trusted certificates`: list of trusted certificates received from the downstream service\n\n\n## Health check\n\n* `Enabled`: if enabled, the health check URL will be called at regular intervals\n* `URL`: the URL to call to run the health check\n\n## Load balancing\n\n* `Type`: the load balancing algorithm used\n\n## Client settings\n\n* `backoff factor`: specify the factor to multiply the delay for each retry (default value 2)\n* `retries`: specify how many times the client will retry to fetch the result of the request after an error before giving up (default value 1)\n* `max errors`: specify how many errors can pass before opening the circuit breaker (default value 20)\n* `global timeout`: specify how long the global call (with retries) should last at most in milliseconds (default value 30000)\n* `connection timeout`: specify how long each connection should last at most in milliseconds (default value 10000)\n* `idle timeout`: specify how long each connection can stay in idle state at most in milliseconds (default value 60000)\n* `call timeout`: specify how long each call should last at most in milliseconds (default value 30000)\n* `call and stream timeout`: specify how long each call should last at most in milliseconds for handling the request and streaming the response. 
(default value 120000)\n* `initial delay`: delay after which the first retry will happen if needed (default value 50)\n* `sample interval`: specify the delay between two retries. Each retry, the delay is multiplied by the backoff factor (default value 2000)\n* `cache connection`: try to keep the tcp connection alive between requests (default value false)\n* `cache connection queue size`: queue size for an open tcp connection (default value 2048)\n* `custom timeouts` (list): \n * `Path`: the path on which the timeout will be active\n * `Client connection timeout`: specify how long each connection should last at most in milliseconds\n * `Client idle timeout`: specify how long each connection can stay in idle state at most in milliseconds\n * `Client call and stream timeout`: specify how long each call should last at most in milliseconds for handling the request and streaming the response\n * `Call timeout`: specify how long each call should last at most in milliseconds\n * `Client global timeout`: specify how long the global call (with retries) should last at most in milliseconds\n\n## Proxy\n\n* `host`: host of the proxy behind the identity provider\n* `port`: port of the proxy behind the identity provider\n* `protocol`: protocol of the proxy behind the identity provider\n* `principal`: user of the proxy \n* `password`: password of the proxy\n"},{"name":"certificates.md","id":"/entities/certificates.md","url":"/entities/certificates.html","title":"Certificates","content":"# Certificates\n\nAll generated and imported certificates are listed in the `https://otoroshi.xxxx/bo/dashboard/certificates` page. All those certificates can be used to serve traffic with TLS, perform mTLS calls, and sign and verify JWT tokens.\n\nThe available actions are:\n\n* `Add item`: redirects the user to the certificate creation page. 
It's useful when you already have a certificate (like a PEM file) and you want to load it into Otoroshi.\n* `Let's Encrypt certificate`: asks Let's Encrypt for a certificate matching a given host \n* `Create certificate`: issues a certificate with an existing Otoroshi certificate as CA.\n* `Import .p12 file`: loads a p12 file as certificate\n\n## Add item\n\n* `Id`: the generated unique id of the certificate\n* `Name`: the name of the certificate\n* `Description`: the description of the certificate\n* `Auto renew cert.`: the certificate will be reissued when it expires. Only works with a CA from Otoroshi and a known private key\n* `Client cert.`: the generated certificate will be used to identify a client to a server\n* `Keypair`: the certificate entity will be a pair of public key and private key.\n* `Public key exposed`: if true, the public key will be exposed on `http://otoroshi-api.your-domain/.well-known/jwks.json`\n* `Certificate status`: the current status of the certificate. It can be valid if the certificate is not revoked and not expired, or equal to the reason of the revocation\n* `Certificate full chain`: list of certificates used to authenticate a client or a server\n* `Certificate private key`: the private key of the certificate or nothing if wanted. You can omit it if you just want to add a certificate full chain to trust it.\n* `Private key password`: the password protecting the private key\n* `Certificate tags`: the tags attached to the certificate\n* `Certificate metadata`: the metadata attached to the certificate\n\n## Let's Encrypt certificate\n\n* `Let's encrypt`: if enabled, the certificate will be generated by Let's Encrypt. If disabled, the user will be redirected to the `Create certificate` page\n* `Host`: the host sent to Let's Encrypt to issue the certificate\n\n## Create certificate view\n\n* `Issuer`: the CA used to sign your certificate\n* `CA certificate`: if enabled, the certificate will be used as an authority certificate. 
Once generated, it will be used as a CA to sign new certificates\n* `Let's Encrypt`: redirects to the Let's Encrypt page to request a certificate\n* `Client certificate`: the generated certificate will be used to identify a client to a server\n* `Include A.I.A`: include authority information access URLs in the certificate\n* `Key Type`: the type of the private key\n* `Key Size`: the size of the private key\n* `Signature Algorithm`: the signature algorithm used to sign the certificate\n* `Digest Algorithm`: the digest algorithm used\n* `Validity`: how long your certificate will be valid\n* `Subject DN`: the subject DN of your certificate\n* `Hosts`: the hosts of your certificate\n\n"},{"name":"data-exporters.md","id":"/entities/data-exporters.md","url":"/entities/data-exporters.html","title":"Data exporters","content":"# Data exporters\n\nThe data exporters are the way to export alerts and events from Otoroshi to an external storage.\n\nTo try them, you can follow @ref[this tutorial](../how-to-s/export-alerts-using-mailgun.md).\n\n## Common fields\n\n* `Type`: the type of event exporter\n* `Enabled`: enable or disable the exporter\n* `Name`: given name of the exporter\n* `Description`: the data exporter description\n* `Tags`: list of tags associated to the module\n* `Metadata`: list of metadata associated to the module\n\nAll exporters are split in three parts. The first and second parts are common and the last is specific to each exporter.\n\n* `Filtering and projection`: section to filter the list of sent events and alerts. The projection field allows you to export only certain event fields and reduce the size of exported data. It's composed of `Filtering` and `Projection` fields. To get full usage of these elements, read @ref:[this section](#matching-and-projections)\n* `Queue details`: set of fields to adjust the workers of the exporter. 
\n * `Buffer size`: if elements are pushed onto the queue faster than the source is consumed, the overflow will be handled with a strategy specified by the user. This is the number of events kept in memory.\n * `JSON conversion workers`: number of workers used to transform events to JSON format in parallel\n * `Send workers`: number of workers used to send transformed events\n * `Group size`: chunk up this stream into groups of elements received within a time window (the time window is the next field)\n * `Group duration`: waiting time before sending the group of events. If the group size is reached before the group duration, the events will be sent instantly\n \nFor the last part, the `Exporter configuration` will be detailed individually.\n\n## Matching and projections\n\n**Filtering** is used to **include** or **exclude** some kinds of events and alerts. For each include and exclude field, you can add a list of key-value pairs. \n\nLet's say we only want to keep Otoroshi alerts\n```json\n{ \"include\": [{ \"@type\": \"AlertEvent\" }] }\n```\n\nOtoroshi provides a list of rules to keep only events with specific values. We will use the following event to illustrate.\n\n```json\n{\n \"foo\": \"bar\",\n \"type\": \"AlertEvent\",\n \"alert\": \"big-alert\",\n \"status\": 200,\n \"codes\": [\"a\", \"b\"],\n \"inner\": {\n \"foo\": \"bar\",\n \"bar\": \"foo\"\n }\n}\n```\n\nThe rules apply with the previous example as the event.\n\n@@@div { #filtering }\n \n@@@\n\n\n\n**Projection** is a list of fields to export. In the case of an empty list, all the fields of an event will be exported. Otherwise, **only** the listed fields will be exported.\n\nLet's say we only want to keep Otoroshi alerts and only the type, timestamp and id of each exported event\n```json\n{\n \"@type\": true,\n \"@timestamp\": true,\n \"@id\": true\n}\n```\n\nAnother possibility is to **rename** the exported field. 
This value will be the same but the exported field will have a different name.\n\nLet's say we want to rename all `@id` fields with `unique-id` as key\n\n```json\n{ \"@id\": \"unique-id\" }\n```\n\nThe last possibility is to retrieve a sub-object of an event. Let's say we want to get the name of each exported user of events.\n\n```json\n{ \"user\": { \"name\": true } }\n```\n\nYou can also expand the entire source object with \n\n```json\n{\n \"$spread\": true\n}\n```\n\nand then remove the fields you don't want with \n\n```json\n{\n \"fieldthatidontwant\": false\n}\n```\n\n## Elastic\n\nWith this kind of exporter, every matching event will be sent to an elastic cluster (in batch). It is quite useful and can be used in combination with [elastic read in global config](./global-config.html#analytics-elastic-dashboard-datasource-read-)\n\n* `Cluster URI`: Elastic cluster URI\n* `Index`: Elastic index \n* `Type`: Event type (not needed for elasticsearch above 6.x)\n* `User`: Elastic User (optional)\n* `Password`: Elastic password (optional)\n* `Version`: Elastic version (optional, if none provided it will be fetched from cluster)\n* `Apply template`: Automatically apply index template\n* `Check Connection`: Button to test the configuration. It will display a modal with a connection checklist; if successful, it displays the detected Elasticsearch version and the index used\n* `Manually apply index template`: try to put the elasticsearch template by calling the api of elasticsearch\n* `Show index template`: try to retrieve the current index template present in elasticsearch\n* `Client side temporal indexes handling`: When enabled, Otoroshi will manage the creation of indexes. 
When it's disabled, Otoroshi will push to the same index\n* `One index per`: When the previous field is enabled, you can choose the interval of time between the creation of a new index in elasticsearch \n* `Custom TLS Settings`: Enable the TLS configuration for the communication with Elasticsearch\n * `TLS loose`: if enabled, will block all untrustful ssl configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with elasticsearch\n * `Trusted certificates`: list of trusted certificates received from elasticsearch\n\n## Webhook \n\nWith this kind of exporter, every matching event will be sent to a URL (in batch) using a POST method and a JSON array body.\n\n* `Alerts hook URL`: URL used to post events\n* `Hook Headers`: headers added to the POST request\n* `Custom TLS Settings`: Enable the TLS configuration for the communication with the webhook endpoint\n * `TLS loose`: if enabled, will block all untrustful ssl configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with the webhook endpoint\n * `Trusted certificates`: list of trusted certificates received from the webhook endpoint\n\n\n## Pulsar \n\nWith this kind of exporter, every matching event will be sent to an [Apache Pulsar topic](https://pulsar.apache.org/)\n\n\n* `Pulsar URI`: URI of the pulsar server\n* `Custom TLS Settings`: Enable the TLS configuration for the communication with Pulsar\n * `TLS loose`: if enabled, will block all untrustful ssl configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with the pulsar server\n * `Trusted certificates`: list of trusted certificates received from the pulsar server\n* `Pulsar tenant`: tenant on the pulsar server\n* `Pulsar namespace`: namespace on the pulsar server\n* `Pulsar topic`: 
topic on the pulsar server\n\n## Kafka \n\nWith this kind of exporter, every matching event will be sent to an [Apache Kafka topic](https://kafka.apache.org/). You can find a few @ref[tutorials](../how-to-s/communicate-with-kafka.md) about the connection between Otoroshi and Kafka based on docker images.\n\n* `Kafka Servers`: the list of servers to contact to connect the Kafka client with the Kafka cluster\n* `Kafka topic`: the topic on which Otoroshi alerts will be sent\n\nBy default, Kafka is installed with no authentication. Otoroshi supports the following authentication mechanisms and protocols for Kafka brokers.\n\n### SASL\n\nThe Simple Authentication and Security Layer (SASL) [RFC4422] is a method for adding authentication support to connection-based protocols.\n\n* `SASL username`: the client username \n* `SASL password`: the client password \n* `SASL Mechanism`: \n * `PLAIN`: SASL/PLAIN uses a simple username and password for authentication.\n * `SCRAM-SHA-256` and `SCRAM-SHA-512`: SASL/SCRAM uses usernames and passwords stored in ZooKeeper. 
Credentials are created during installation.\n\n### SSL \n\n* `Kafka keypass`: the keystore password if you use a keystore/truststore to connect to the Kafka cluster\n* `Kafka keystore path`: the keystore path on the server if you use a keystore/truststore to connect to the Kafka cluster\n* `Kafka truststore path`: the truststore path on the server if you use a keystore/truststore to connect to the Kafka cluster\n* `Custom TLS Settings`: enable the TLS configuration for the communication with Kafka\n * `TLS loose`: if enabled, will block all untrustful ssl configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with Kafka\n * `Trusted certificates`: list of trusted certificates received from Kafka\n\n### SASL + SSL\n\nThis mechanism uses the SSL configuration and the SASL configuration.\n\n## Mailer \n\nWith this kind of exporter, every matching event will be sent in batch as an email (using one of the following email providers)\n\nOtoroshi supports 5 exporters of email type.\n\n### Console\n\nNothing to add. The events will be written to the standard output.\n\n### Generic\n\n* `Mailer url`: URL used to push events\n* `Headers`: headers added to the push requests\n* `Email addresses`: recipients of the emails\n\n### Mailgun\n\n* `EU`: is it an EU server? 
if enabled, *https://api.eu.mailgun.net/* will be used; otherwise, the US URL *https://api.mailgun.net/* will be used\n* `Mailgun api key`: API key of the mailgun account\n* `Mailgun domain`: domain name of the mailgun account\n* `Email addresses`: recipients of the emails\n\n### Mailjet\n\n* `Public api key`: public key of the mailjet account\n* `Private api key`: private key of the mailjet account\n* `Email addresses`: recipients of the emails\n\n### Sendgrid\n\n* `Sendgrid api key`: API key of the sendgrid account\n* `Email addresses`: recipients of the emails\n\n## File \n\n* `File path`: path where the logs will be written \n* `Max file size`: when the size is reached, Otoroshi will create a new file postfixed by the current timestamp\n\n## GoReplay file\n\nWith this kind of exporter, every matching event will be sent to a `.gor` file compatible with [GoReplay](https://goreplay.org/). \n\n@@@ warning\nthis exporter will only be able to catch `TrafficCaptureEvent`. Those events are created when a route (or the global config) of the @ref:[new proxy engine](../topics/engine.md) is setup to capture traffic using the `capture` flag.\n@@@\n\n* `File path`: path where the logs will be written \n* `Max file size`: when the size is reached, Otoroshi will create a new file postfixed by the current timestamp\n* `Capture requests`: capture http requests in the `.gor` file\n* `Capture responses`: capture http responses in the `.gor` file\n\n## Console \n\nNothing to add. The events will be written to the standard output.\n\n## Custom \n\nThis type of exporter lets you write your own exporter with your own rules. 
To create an exporter, navigate to the plugins page and create a new item of type exporter.\n\nWhen it's done, the exporter will be visible in this list.\n\n* `Exporter config.`: the configuration of the custom exporter.\n\n## Metrics \n\nThis plugin is useful to rewrite the metric labels exposed on the `/metrics` endpoint.\n\n* `Labels`: list of metric labels. Each pair contains an existing field name and the new name."},{"name":"global-config.md","id":"/entities/global-config.md","url":"/entities/global-config.html","title":"Global config","content":"# Global config\n\nThe global config, named `Danger zone` in Otoroshi, is the place to configure Otoroshi globally. \n\n> Warning: In this page, the configuration is really sensitive and affects the global behaviour of Otoroshi.\n\n\n### Misc. Settings\n\n\n* `Maintenance mode` : Puts every single service in maintenance mode. If a user calls a service, the maintenance page will be displayed\n* `No OAuth login for BackOffice` : Forces admins to log in only with user/password or user/password/U2F device\n* `API Read Only`: Freezes the Otoroshi datastore in read-only mode. Only people with access to the actual underlying datastore will be able to disable this.\n* `Auto link default` : When no group is specified on a service, it will be assigned to the default one\n* `Use circuit breakers` : Use circuit breakers on all services\n* `Use new http client as the default Http client` : All http calls will use the new http client by default\n* `Enable live metrics` : Enable live metrics in the Otoroshi cluster. Performs a lot of writes in the datastore\n* `Digitus medius` : Use middle finger emoji as a response character for endless HTTP responses (see [IP address filtering settings](#ip-address-filtering-settings)).\n* `Limit conc. req.` : Limit the number of concurrent requests processed by Otoroshi to a certain amount. 
Highly recommended for resilience\n* `Use X-Forwarded-* headers for routing` : When evaluating routing of a request, X-Forwarded-* headers will be used if present\n* `Max conc. req.` : Maximum number of concurrent requests processed by Otoroshi.\n* `Max HTTP/1.0 resp. size` : Maximum size of an HTTP/1.0 response in bytes. After this limit, the response will be cut and sent as is. The best value here should satisfy (maxConcurrentRequests * maxHttp10ResponseSize) < process.memory for the worst case scenario.\n* `Max local events` : Maximum number of events stored.\n* `Lines` : *deprecated* \n\n### IP address filtering settings\n\n* `IP allowed list`: the only IP addresses that will be able to access Otoroshi exposed services\n* `IP blocklist`: IP addresses that will be refused access to Otoroshi exposed services\n* `Endless HTTP Responses`: IP addresses for which each request will return around 128 GB of 0s\n\n\n### Quotas settings\n\n* `Global throttling`: The max. number of requests allowed per second globally on Otoroshi\n* `Throttling per IP`: The max. number of requests allowed per second per IP address globally on Otoroshi\n\n### Analytics: Elastic dashboard datasource (read)\n\n* `Cluster URI`: Elastic cluster URI\n* `Index`: Elastic index \n* `Type`: Event type (not needed for elasticsearch above 6.x)\n* `User`: Elastic User (optional)\n* `Password`: Elastic password (optional)\n* `Version`: Elastic version (optional, if none provided it will be fetched from cluster)\n* `Apply template`: Automatically apply index template\n* `Check Connection`: Button to test the configuration. 
It will display a modal with a connection checklist; if the connection is successful, it will display the detected Elasticsearch version and the index used\n* `Manually apply index template`: try to put the elasticsearch template by calling the api of elasticsearch\n* `Show index template`: try to retrieve the current index template present in elasticsearch\n* `Client side temporal indexes handling`: When enabled, Otoroshi will manage the creation of indexes over time. When it's disabled, Otoroshi will push to the same index\n* `One index per`: When the previous field is enabled, you can choose the interval of time between the creation of a new index in elasticsearch \n* `Custom TLS Settings`: Enable the TLS configuration for the communication with Elasticsearch\n* `TLS loose`: if enabled, will block all untrustful ssl configs\n* `TrustAll`: allows any server certificates even the self-signed ones\n* `Client certificates`: list of client certificates used to communicate with elasticsearch\n* `Trusted certificates`: list of trusted certificates received from elasticsearch\n\n\n### Statsd settings\n\n* `Datadog agent`: The StatsD agent is a Datadog agent\n* `StatsD agent host`: The host on which the StatsD agent is listening\n* `StatsD agent port`: The port on which the StatsD agent is listening (default is 8125)\n\n\n### Backoffice auth. settings\n\n* `Backoffice auth. config`: the authentication module used in front of Otoroshi. 
It will be used to connect to Otoroshi on the login page\n\n### Let's encrypt settings\n\n* `Enabled`: when enabled, Otoroshi will have the possibility to issue certificates from Let's Encrypt, notably on the SSL/TLS Certificates page \n* `Server URL`: ACME endpoint of Let's Encrypt \n* `Email addresses`: (optional) list of email addresses used to order the certificates \n* `Contact URLs`: (optional) list of contact URLs used to order the certificates \n* `Public Key`: used to request a certificate from Let's Encrypt, generated by Otoroshi \n* `Private Key`: used to request a certificate from Let's Encrypt, generated by Otoroshi \n\n\n### CleverCloud settings\n\nOnce configured, you can register a Clever Cloud app of your organization directly as an Otoroshi service.\n\n* `CleverCloud consumer key`: consumer key of your Clever Cloud OAuth 1.0 app\n* `CleverCloud consumer secret`: consumer secret of your Clever Cloud OAuth 1.0 app\n* `OAuth Token`: OAuth token of your Clever Cloud OAuth 1.0 app\n* `OAuth Secret`: OAuth token secret of your Clever Cloud OAuth 1.0 app \n* `CleverCloud orga. Id`: id of your Clever Cloud organization\n\n### Global scripts\n\nGlobal scripts will be deprecated soon, please use global plugins instead (see the next section)!\n\n### Global plugins\n\n* `Enabled`: enable the use of global plugins\n* `Plugins on new Otoroshi engine`: list of plugins used by the new Otoroshi engine\n* `Plugins on old Otoroshi engine`: list of plugins used by the old Otoroshi engine\n* `Plugin configuration`: the overloaded configuration of plugins\n\n### Proxies\n\nIn this section, you can add a list of proxies for:\n\n* Proxy for alert emails (mailgun)\n* Proxy for alert webhooks\n* Proxy for Clever-Cloud API access\n* Proxy for services access\n* Proxy for auth. 
access (OAuth, OIDC)\n* Proxy for client validators\n* Proxy for JWKS access\n* Proxy for elastic access\n\nEach proxy has the following fields:\n\n* `Proxy host`: host of the proxy\n* `Proxy port`: port of the proxy\n* `Proxy principal`: user of the proxy\n* `Proxy password`: password of the proxy\n* `Non proxy host`: hosts that will be reached without going through the proxy\n\n### Quotas alerting settings\n\n* `Enable quotas exceeding alerts`: When apikey quotas are almost exceeded, an alert will be sent \n* `Daily quotas threshold`: The percentage of daily calls before sending alerts\n* `Monthly quotas threshold`: The percentage of monthly calls before sending alerts\n\n### User-Agent extraction settings\n\n* `User-Agent extraction`: Allow user-agent details extraction. Can have an impact on memory consumption. \n\n### Geolocation extraction settings\n\nExtract a geolocation for each call to Otoroshi.\n\n### Tls Settings\n\n* `Use random cert.`: Use the first available cert when none matches the current domain\n* `Default domain`: When the SNI domain cannot be found, this one will be used to find the matching certificate \n* `Trust JDK CAs (server)`: Trust JDK CAs. The CAs from the JDK CA bundle will be proposed in the certificate request when performing TLS handshake \n* `Trust JDK CAs (trust)`: Trust JDK CAs. The CAs from the JDK CA bundle will be used as trusted CAs when calling HTTPS resources \n* `Trusted CAs (server)`: Select the trusted CAs you want for TLS termination. 
Only those CAs will be proposed in the certificate request when performing TLS handshake \n\n\n### Auto Generate Certificates\n\n* `Enabled`: Generate certificates on the fly when they don't exist\n* `Reply Nicely`: When receiving a request from a disallowed domain name, accept the connection and display a nice error message \n* `CA`: certificate CA used to generate missing certificates\n* `Allowed domains`: Allowed domains\n* `Not allowed domains`: Not allowed domains\n \n\n### Global metadata\n\n* `Tags`: tags attached to the global config\n* `Metadata`: metadata attached to the global config\n\n### Actions at the bottom of the page\n\n* `Recover from a full export file`: Load the global configuration from a previous export\n* `Full export`: Export with all created entities\n* `Full export (ndjson)`: Export the full state of the database in ndjson format\n* `JSON`: Get the global config in JSON format \n* `YAML`: Get the global config in YAML format \n* `Enable Panic Mode`: Log out all users from the UI and prevent any changes to the database by setting the Otoroshi admin API to read-only. The only way to exit this mode is to disable it directly in the database. "},{"name":"index.md","id":"/entities/index.md","url":"/entities/index.html","title":"","content":"\n# Main entities\n\nIn this section, we will walk through all the main Otoroshi entities. Otoroshi entities are the main items stored in the Otoroshi datastore that are used to configure routing, authentication, etc.\n\nAny entity has the following properties\n\n* `location` or `_loc`: the location of the entity (organization and team)\n* `id`: the id of the entity (except for apikeys)\n* `name`: the name of the entity\n* `description`: the description of the entity (optional)\n* `tags`: free tags that you can put on any entity to help you manage it, automate it, etc.\n* `metadata`: free key/value tuples that you can put on any entity to help you manage it, automate it, etc.\n\n@@@div { .plugin .entities }\n\n
\nRoutes\nProxy your applications with routes\n
\n@ref:[View](./routes.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nBackends\nReuse route targets\n
\n@ref:[View](./backends.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nApikeys\nAdd security to your services using apikeys\n
\n@ref:[View](./apikeys.md)\n@@@\n\n\n@@@div { .plugin .entities }\n\n
\nOrganizations\nThis is the highest level for grouping resources.\n
\n@ref:[View](./organizations.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nTeams\nOrganize your resources by teams\n
\n@ref:[View](./teams.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nService groups\nGroup your services\n
\n@ref:[View](./service-groups.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nJWT verifiers\nVerify and forge tokens per service.\n
\n@ref:[View](./jwt-verifiers.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nGlobal Config\nThe danger zone of Otoroshi\n
\n@ref:[View](./global-config.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nTCP services\n\n
\n@ref:[View](./tcp-services.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nAuth. modules\nSecure the Otoroshi UI and your web apps\n
\n@ref:[View](./auth-modules.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nCertificates\nAdd secure communication between Otoroshi, clients and services\n
\n@ref:[View](./certificates.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nData exporters\nExport alerts, events and logs\n
\n@ref:[View](./data-exporters.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nScripts\n\n
\n@ref:[View](./scripts.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nService descriptors\nProxy your applications with service descriptors\n
\n@ref:[View](./service-descriptors.md)\n@@@\n\n@@@ index\n\n* [Routes](./routes.md)\n* [Backends](./backends.md)\n* [Organizations](./organizations.md)\n* [Teams](./teams.md)\n* [Global Config](./global-config.md)\n* [Apikeys](./apikeys.md)\n* [Service groups](./service-groups.md)\n* [Auth. modules](./auth-modules.md)\n* [Certificates](./certificates.md)\n* [JWT verifiers](./jwt-verifiers.md)\n* [Data exporters](./data-exporters.md)\n* [Scripts](./scripts.md)\n* [TCP services](./tcp-services.md)\n* [Service descriptors](./service-descriptors.md)\n\n@@@\n"},{"name":"jwt-verifiers.md","id":"/entities/jwt-verifiers.md","url":"/entities/jwt-verifiers.html","title":"JWT verifiers","content":"# JWT verifiers\n\nSometimes, it can be pretty useful to verify JWT tokens coming from other providers on some services. Otoroshi provides a tool to do that per service.\n\n* `Name`: name of the JWT verifier\n* `Description`: a simple description\n* `Strict`: if not strict, requests without a JWT token will be allowed to pass. 
This option is helpful when you want to force the presence of a token in each request on a specific service \n* `Tags`: list of tags associated to the module\n* `Metadata`: list of metadata associated to the module\n\nEach JWT verifier is configurable in three steps: the `location` where to find the token in incoming requests, the `validation` step to check the signature and the presence of claims in tokens, and the last step, named `Strategy`.\n\n## Token location\n\nAn incoming token can be found in three places.\n\n#### In query string\n\n* `Source`: JWT token location in query string\n* `Query param name`: the name of the query param where the JWT is located\n\n#### In a header\n\n* `Source`: JWT token location in a header\n* `Header name`: the name of the header where the JWT is located\n* `Remove value`: when the token is read, this value will be removed from the header value (example: if the header value is *Bearer xxxx*, the *remove value* could be *Bearer *, don't forget the space at the end of the string)\n\n#### In a cookie\n\n* `Source`: JWT token location in a cookie\n* `Cookie name`: the name of the cookie where the JWT is located\n\n## Token validation\n\nThis section is used to verify the token extracted from the specified location.\n\n* `Algo.`: the kind of algorithm you want to use to verify/sign your JWT token\n\nAccording to the selected algorithm, the validation form will change.\n\n#### Hmac + SHA\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Hmac secret`: used to verify the token\n* `Base64 encoded secret`: if enabled, the extracted token will be base64 decoded before it is verified\n\n#### RSASSA-PKCS1 + SHA\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Public key`: the RSA public key\n* `Private key`: the RSA private key that can be empty if not used for JWT token signing\n\n#### ECDSA + SHA\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Public key`: the ECDSA public key\n* `Private key`: the ECDSA private key that 
can be empty if not used for JWT token signing\n\n#### RSASSA-PKCS1 + SHA from KeyPair\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `KeyPair`: used to sign/verify the token. The displayed list represents the key pairs registered in the Certificates page\n \n#### ECDSA + SHA from KeyPair\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `KeyPair`: used to sign/verify the token. The displayed list represents the key pairs registered in the Certificates page\n\n#### Otoroshi KeyPair from token kid (only for verification)\n* `Use only exposed keypairs`: if enabled, Otoroshi will only use the key pairs that are exposed on the well-known endpoint. If disabled, it will search on any registered key pairs.\n\n#### JWK Set (only for verification)\n\n* `URL`: the JWK set URL where the public keys are exposed\n* `HTTP call timeout`: timeout for fetching the keyset\n* `TTL`: cache TTL for the keyset\n* `HTTP Headers`: the HTTP headers passed\n* `Key type`: type of the key searched in the jwks\n\n*TLS settings for JWKS fetching*\n\n* `Custom TLS Settings`: TLS settings for JWKS fetching\n* `TLS loose`: if enabled, loose TLS verification is used and untrustful ssl configs are accepted\n* `Trust all`: allows any server certificates even the self-signed ones\n* `Client certificates`: list of client certificates used to communicate with the JWKS server\n* `Trusted certificates`: list of trusted certificates received from the JWKS server\n\n*Proxy*\n\n* `Proxy host`: host of the proxy behind the identity provider\n* `Proxy port`: port of the proxy behind the identity provider\n* `Proxy principal`: user of the proxy \n* `Proxy password`: password of the proxy\n\n## Strategy\n\nThe first step is to select the verifier strategy. Otoroshi supports 4 types of JWT verifiers:\n\n* `Default JWT token` will add a token if not present. \n* `Verify JWT token` will only verify the token signature and field values if provided. \n* `Verify and re-sign JWT token` will verify the token and will re-sign the JWT token with the provided algo. settings. 
\n* `Verify, re-sign and transform JWT token` will verify the token, re-sign it and will be able to transform it.\n\nAll verifiers have the following properties: \n\n* `Verify token fields`: when the JWT token is checked, each field specified here will be verified against the provided value\n* `Verify token array value`: when the JWT token is checked, each field specified here will be checked to verify that the provided value is contained in the array\n\n\n#### Default JWT token\n\n* `Strict`: if a token is already present, the call will fail\n* `Default value`: list of claims of the generated token. These fields support raw values or language expressions. See the documentation about @ref:[the expression language](../topics/expression-language.md)\n\n#### Verify JWT token\n\nNo specific values needed. This kind of verifier needs only the two fields `Verify token fields` and `Verify token array value`.\n\n#### Verify and re-sign JWT token\n\nWhen `Verify and re-sign JWT token` is chosen, the `Re-sign settings` appear. All fields of `Re-sign settings` are the same as in the `Token validation` section. 
The only difference is that the values are used to sign the new token and not to validate the incoming token.\n\n\n#### Verify, re-sign and transform JWT token\n\nWhen `Verify, re-sign and transform JWT token` is chosen, the `Re-sign settings` and `Transformation settings` appear.\n\nThe `Re-sign settings` are used to sign the new token and have the same fields as the `Token validation` section.\n\nFor the `Transformation settings` section, the fields are:\n\n* `Token location`: the location where to find/set the JWT token\n* `Header name`: the name of the header where the JWT is located\n* `Prepend value`: a value to prepend to the header value (for example *Bearer *)\n* `Rename token fields`: when the JWT token is transformed, it is possible to change a field name, just specify the origin field name and the target field name\n* `Set token fields`: when the JWT token is transformed, it is possible to add new fields with static values, just specify the field name and value\n* `Remove token fields`: when the JWT token is transformed, it is possible to remove fields"},{"name":"organizations.md","id":"/entities/organizations.md","url":"/entities/organizations.html","title":"Organizations","content":"# Organizations\n\nThe resources of Otoroshi are grouped by `Organization`. This is the highest level for grouping resources.\n\nAn organization has a unique `id`, a `name` and a `description`. 
Like all Otoroshi resources, an organization has a list of tags and metadata associated.\n\nFor example, you can use organizations:\n\n* to separate resources by services or entities in your enterprise\n* to split internal and external usage of the resources (it's useful when you have a list of services deployed in your company and another one deployed by your partners)\n\n@@@ div { .centered-img }\n\n@@@\n\n## Access to the list of organizations\n\nTo visualize and edit the list of organizations, you can navigate to your instance on the `https://otoroshi.xxxxxx/bo/dashboard/organizations` route or click on the cog icon and select the organizations button.\n\nOnce on the page, you can create a new item, edit an existing organization or delete an existing one.\n\n> When an organization is deleted, the resources associated are not deleted. On the other hand, the organization and team fields of the associated resources are left empty.\n\n## Entities location\n\nAny otoroshi entity has a location property (`_loc` when serialized to json) explaining where and by whom the entity can be seen. 
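As a sketch, an entity restricted to a single organization and a single team could carry a `_loc` like the following (the `tenant` and team ids here are hypothetical):

```javascript
{
  "_loc": {
    "tenant": "tenant-1",      // the organization owning the entity (hypothetical id)
    "teams": ["team-default"]  // the teams allowed to see the entity (hypothetical ids)
  }
  // ... the rest of the entity fields
}
```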
\n\nAn entity can be part of one organization (`tenant` in the json document)\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"tenant-1\",\n \"teams\": ...\n }\n ...\n}\n```\n\nor all organizations\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"*\",\n \"teams\": ...\n }\n ...\n}\n```\n\n"},{"name":"routes.md","id":"/entities/routes.md","url":"/entities/routes.html","title":"Routes","content":"# Routes\n\nA route is a unique routing rule based on hostname, path, method and headers that will execute a bunch of plugins and eventually forward the request to the backend application.\n\n## UI page\n\nYou can find all routes [here](http://otoroshi.oto.tools:8080/bo/dashboard/routes)\n\n## Global Properties\n\n* `location`: the location of the entity\n* `id`: the id of the route\n* `name`: the name of the route\n* `description`: the description of the route\n* `tags`: the tags of the route. Can be useful for api automation\n* `metadata`: the metadata of the route. Can be useful for api automation. There are a few reserved metadata used by otoroshi that can be found @ref[below](./routes.md#reserved-metadata)\n* `enabled`: is the route enabled? If not, the router will not consider this route\n* `debugFlow`: the debug flag. If enabled, the execution report for this route will contain all input/output values through steps of the proxy engine. For more information, check the @ref[engine documentation](../topics/engine.md#reporting)\n* `capture`: if enabled, otoroshi will generate events containing the whole content of each request. Use with caution! For more information, check the @ref[engine documentation](../topics/engine.md#http-traffic-capture)\n* `exportReporting`: if enabled, execution reports of the proxy engine will be generated for each request. Those reports are exportable using @ref[data exporters](./data-exporters.md). For more information, check the @ref[engine documentation](../topics/engine.md#reporting)\n* `groups`: each route is attached to a group. 
A group can have one or more services/routes. Each API key is linked to groups/routes/services and allows access to every entity in the groups.\n\n### Reserved metadata\n\nSome metadata are reserved for otoroshi usage. Here is the list of reserved metadata\n\n* `otoroshi-core-user-facing`: is this a user facing app for the snow monkey\n* `otoroshi-core-use-akka-http-client`: use the pure akka http client\n* `otoroshi-core-use-netty-http-client`: use the pure netty http client\n* `otoroshi-core-use-akka-http-ws-client`: use the modern websocket client\n* `otoroshi-core-issue-lets-encrypt-certificate`: enables let's encrypt certificate issuance for this route (true or false)\n* `otoroshi-core-issue-certificate`: enables certificate issuance for this route (true or false)\n* `otoroshi-core-issue-certificate-ca`: the id of the CA cert used to generate the certificate for this route\n* `otoroshi-core-openapi-url`: the openapi url for this route\n* `otoroshi-core-env`: the env for this route. Here for legacy reasons\n* `otoroshi-deployment-providers`: in the case of relay routing, the providers for this route\n* `otoroshi-deployment-regions`: in the case of relay routing, the network regions for this route\n* `otoroshi-deployment-zones`: in the case of relay routing, the network zones for this route \n* `otoroshi-deployment-dcs`: in the case of relay routing, the datacenters for this route \n* `otoroshi-deployment-racks`: in the case of relay routing, the racks for this route \n\n## Frontend configuration\n\n* `frontend`: the frontend of the route. It's the configuration that defines how the otoroshi router will match this route. A frontend has the following shape. 
\n\n```javascript\n{\n \"domains\": [ // the matched domains and paths\n \"new-route.oto.tools/path\" // here you can use wildcards in domain and path, also you can use named path params\n ],\n \"strip_path\": true, // is the matched path stripped in the forwarded request\n \"exact\": false, // perform exact matching on path, if not, will be matched on /path*\n \"headers\": {}, // the matched http headers. if none provided, any header will be matched\n \"query\": {}, // the matched http query params. if none provided, any query params will be matched\n \"methods\": [] // the matched http methods. if none provided, any method will be matched\n}\n```\n\nFor more information about routing, check the @ref[engine documentation](../topics/engine.md#routing)\n\n## Backend configuration\n\n* `backend`: a backend to forward requests to. For more information, go to the @ref[backend documentation](./backends.md)\n* `backendRef`: a reference to an existing backend id\n\n## Plugins\n\nThe list of plugins used on this route. Each plugin definition has the following shape:\n\n```javascript\n{\n \"enabled\": false, // is the plugin enabled\n \"debug\": false, // is debug enabled for this specific plugin\n \"plugin\": \"cp:otoroshi.next.plugins.Redirection\", // the id of the plugin\n \"include\": [], // included paths. if none, all paths are included\n \"exclude\": [], // excluded paths. if none, no paths are excluded\n \"config\": { // the configuration of the plugin\n \"code\": 303,\n \"to\": \"https://www.otoroshi.io\"\n },\n \"plugin_index\": { // the position of the plugin. 
if none provided, otoroshi will use the order in the plugin array\n \"pre_route\": 0\n }\n}\n```\n\nFor more information about the available plugins, go @ref[here](../plugins/built-in-plugins.md)\n\n\n"},{"name":"scripts.md","id":"/entities/scripts.md","url":"/entities/scripts.html","title":"Scripts","content":"# Scripts\n\nScripts are a way to create plugins for otoroshi without deploying them as jar files. With scripts, you just have to store the scala code of your plugins inside the otoroshi datastore and otoroshi will compile and deploy them at startup. You can find all your scripts in the UI at `cog icon / Plugins`. You can find all the documentation about plugins @ref:[here](../plugins/index.md)\n\n@@@ warning\nThe compilation of your plugins can be pretty long and resource consuming. As the compilation happens during the otoroshi boot sequence, your instance will be blocked until all plugins have compiled. This behavior can be disabled. If so, the plugins will not work until they have been compiled. Any service using a plugin that is not compiled yet will fail\n@@@\n\nLike any entity, the script has the following properties\n\n* `id`\n* `plugin name`\n* `plugin description`\n* `tags`\n* `metadata`\n\nAnd you also have\n\n* `type`: the kind of plugin you are building with this script\n* `plugin code`: the code for your plugin\n\n## Compile\n\nYou can use the compile button to check if the code you write in `plugin code` is valid. It will automatically save your script and try to compile it. As mentioned earlier, script compilation is quite resource intensive. It will affect your CPU load and your memory consumption. 
Don't forget to adjust your VM settings accordingly.\n"},{"name":"service-descriptors.md","id":"/entities/service-descriptors.md","url":"/entities/service-descriptors.html","title":"Service descriptors","content":"# Service descriptors\n\nServices, or service descriptors, let you declare how to proxy a call from a domain name to another domain name (or multiple domain names). \n\n@@@ div { .centered-img }\n\n@@@\n\nLet’s say you have an API exposed on http://192.168.0.42 and you want to expose it on https://my.api.foo. Otoroshi will proxy all calls to https://my.api.foo and forward them to http://192.168.0.42. While doing that, it will also log everything, control accesses, etc.\n\n\n* `Id`: a unique random string to identify your service\n* `Groups`: each service descriptor is attached to a group. A group can have one or more services. Each API key is linked to a group and allows access to every service in the group.\n* `Create a new group`: you can create a new group to host this descriptor\n* `Create dedicated group`: you can create a new group with an auto generated name to host this descriptor\n* `Name`: the name of your service. Only for debug and human readability purposes.\n* `Description`: the description of your service. Only for debug and human readability purposes.\n* `Service enabled`: activate or deactivate your service. Once disabled, users will get an error page saying the service does not exist.\n* `Read only mode`: authorize only GET, HEAD, OPTIONS calls on this service\n* `Maintenance mode`: display a maintenance page when a user tries to use the service\n* `Construction mode`: display a construction page when a user tries to use the service\n* `Log analytics`: log analytics events for this service on the servers\n* `Use new http client`: will use Akka Http Client for every request\n* `Detect apikey asap`: if the service is public and you provide an apikey, otoroshi will detect it and validate it. 
Of course this setting may impact performance because of useless apikey lookups.\n* `Send Otoroshi headers back`: when enabled, Otoroshi will send headers to the consumer like request id, client latency, overhead, etc ...\n* `Override Host header`: when enabled, Otoroshi will automatically set the Host header to the corresponding target host\n* `Send X-Forwarded-* headers`: when enabled, Otoroshi will send X-Forwarded-* headers to the target\n* `Force HTTPS`: will force redirection to `https://` if not present\n* `Allow HTTP/1.0 requests`: if disabled, HTTP/1.0 requests will be rejected with an error\n* `Use new WebSocket client`: will use the new websocket client for every websocket request\n* `TCP/UDP tunneling`: with this setting enabled, otoroshi will not proxy http requests anymore but instead will create a secured tunnel between a cli on your machine and otoroshi to proxy any tcp connection with all otoroshi security features enabled\n\n### Service exposition settings\n\n* `Exposed domain`: the domain used to expose your service. Should follow pattern: `(http|https)://subdomain?.env?.domain.tld?/root?` or regex `(http|https):\/\/(.*?)\.?(.*?)\.?(.*?)\.?(.*)\/?(.*)`\n* `Legacy domain`: use `domain`, `subdomain`, `env` and `matchingRoot` for routing in addition to hosts, or just use hosts.\n* `Strip path`: when matching, strip the matching prefix from the upstream request URL. Defaults to true\n* `Issue Let's Encrypt cert.`: automatically issue and renew a let's encrypt certificate based on the domain name. Only if Let's Encrypt is enabled in the global config.\n* `Issue certificate`: automatically issue and renew a certificate based on the domain name\n* `Possible hostnames`: all the possible hostnames for your service\n* `Possible matching paths`: all the possible matching paths for your service\n\n### Redirection\n\n* `Redirection enabled`: enables the redirection. 
If enabled, a call to that service will redirect to the chosen URL\n* `Http redirection code`: type of redirection used\n* `Redirect to`: URL used to redirect the user when the service is called\n\n### Service targets\n\n* `Redirect to local`: if you work locally with Otoroshi, you may want to use that feature to redirect one specific service to a local host. For example, you can relocate https://foo.preprod.bar.com to http://localhost:8080 to make some tests\n* `Load balancing`: the load balancing algorithm used\n* `Targets`: the list of targets that Otoroshi will proxy and expose through the subdomain defined before. Otoroshi will do round-robin load balancing between all those targets with a circuit breaker mechanism to avoid cascading failures\n* `Targets root`: Otoroshi will append this root to any target chosen. If the specified root is `/api/foo`, then a request to https://yyyyyyy/bar will actually hit https://xxxxxxxxx/api/foo/bar\n\n### URL Patterns\n\n* `Make service a 'public ui'`: add a default pattern as public routes\n* `Make service a 'private api'`: add a default pattern as private routes\n* `Public patterns`: by default, every service is private and you'll need an API key to access it. However, if you want to expose a public UI, you can define one or more public patterns (regex) to allow access to anybody. 
For example if you want to allow anybody on any URL, just use `/.*`\n* `Private patterns`: if you define a public pattern that is a little bit too broad, you can make some of the public URLs private again\n\n### Restrictions\n\n* `Enabled`: enable restrictions\n* `Allow last`: Otoroshi will test forbidden and notFound paths before testing allowed paths\n* `Allowed`: allowed paths\n* `Forbidden`: forbidden paths\n* `Not Found`: not found paths\n\n### Otoroshi exchange protocol\n\n* `Enabled`: when enabled, Otoroshi will try to exchange headers with the backend service to ensure no one else can use the service from outside.\n* `Send challenge`: when disabled, Otoroshi will not check if the target service responds with the sent random value.\n* `Send info. token`: when enabled, Otoroshi adds an additional header containing current call information\n* `Challenge token version`: the version of the otoroshi exchange protocol challenge. This option will be set to V2 in a near future.\n* `Info. token version`: the version of the otoroshi exchange protocol info token. This option will be set to Latest in a near future.\n* `Tokens TTL`: the lifetime in seconds of the tokens (state and info)\n* `State token header name`: the name of the header containing the state token. If not specified, the value will be taken from the configuration (otoroshi.headers.comm.state)\n* `State token response header name`: the name of the header containing the state response token. If not specified, the value will be taken from the configuration (otoroshi.headers.comm.stateresp)\n* `Info token header name`: the name of the header containing the info token. If not specified, the value will be taken from the configuration (otoroshi.headers.comm.claim)\n* `Excluded patterns`: by default, when security is enabled, everything is secured. 
But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n* `Use same algo.`: when enabled, all JWT tokens in this section will use the same signing algorithm. If `use same algo.` is disabled, three more options will be displayed to select an algorithm for each step of the calls:\n * Otoroshi to backend\n * Backend to otoroshi\n * Info. token\n\n* `Algo.`: the kind of algorithm you want to use to verify/sign your JWT token\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Hmac secret`: used to verify the token\n* `Base64 encoded secret`: if enabled, the extracted token will be base64 decoded before it is verified\n\n### Authentication\n\n* `Enforce user authentication`: when enabled, users will be allowed to use the service (UI) only if they are registered users of the chosen authentication module.\n* `Auth. config`: authentication module used to protect the service\n* `Create a new auth config.`: navigate to the creation of authentication module page\n* `all auth config.`: navigate to the authentication pages\n\n* `Excluded patterns`: by default, when security is enabled, everything is secured. But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n* `Strict mode`: strict mode enabled\n\n### Api keys constraints\n\n* `From basic auth.`: you can pass the api key in the Authorization header (ie. from 'Authorization: Basic xxx' header)\n* `Allow client id only usage`: you can pass the api key using the client id only (ie. from Otoroshi-Token header)\n* `From custom headers`: you can pass the api key using custom headers (ie. Otoroshi-Client-Id and Otoroshi-Client-Secret headers)\n* `From JWT token`: you can pass the api key using a JWT token (ie. from 'Authorization: Bearer xxx' header)\n\n#### Basic auth. 
Api Key\n\n* `Custom header name`: the name of the header to get Authorization\n* `Custom query param name`: the name of the query param to get Authorization\n\n#### Client ID only Api Key\n\n* `Custom header name`: the name of the header to get the client id\n* `Custom query param name`: the name of the query param to get the client id\n\n#### Custom headers Api Key\n\n* `Custom client id header name`: the name of the header to get the client id\n* `Custom client secret header name`: the name of the header to get the client secret\n\n#### JWT Token Api Key\n\n* `Secret signed`: JWT can be signed by the apikey secret using HMAC algo.\n* `Keypair signed`: JWT can be signed by an otoroshi managed keypair using RSA/EC algo.\n* `Include Http request attrs.`: if enabled, you have to put the following fields in the JWT token corresponding to the current http call (httpPath, httpVerb, httpHost)\n* `Max accepted token lifetime`: the maximum number of seconds accepted as token lifespan\n* `Custom header name`: the name of the header to get the jwt token\n* `Custom query param name`: the name of the query param to get the jwt token\n* `Custom cookie name`: the name of the cookie to get the jwt token\n\n### Routing constraints\n\n* `All Tags in`: have all of the following tags\n* `No Tags in`: not have one of the following tags\n* `One Tag in`: have at least one of the following tags\n* `All Meta. in`: have all of the following metadata entries\n* `No Meta. in`: not have one of the following metadata entries\n* `One Meta. in`: have at least one of the following metadata entries\n* `One Meta key in`: have at least one of the following keys in metadata\n* `All Meta key in`: have all of the following keys in metadata\n* `No Meta key in`: not have one of the following keys in metadata\n\n### CORS support\n\n* `Enabled`: if enabled, CORS headers will be checked for each incoming request\n* `Allow credentials`: if enabled, the credentials will be sent. 
Credentials are cookies, authorization headers, or TLS client certificates.\n* `Allow origin`: if enabled, indicates whether the response can be shared with requesting code from the given origin\n* `Max age`: response header that indicates how long the results of a preflight request (that is the information contained in the Access-Control-Allow-Methods and Access-Control-Allow-Headers headers) can be cached.\n* `Expose headers`: response header that allows a server to indicate which response headers should be made available to scripts running in the browser, in response to a cross-origin request.\n* `Allow headers`: response header used in response to a preflight request which includes the Access-Control-Request-Headers to indicate which HTTP headers can be used during the actual request.\n* `Allow methods`: response header that specifies one or more methods allowed when accessing a resource in response to a preflight request.\n* `Excluded patterns`: by default, when cors is enabled, everything has cors. 
But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n\n#### Related documentations\n\n* @link[Access-Control-Allow-Credentials](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Credentials) { open=new }\n* @link[Access-Control-Allow-Origin](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin) { open=new }\n* @link[Access-Control-Max-Age](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Max-Age) { open=new }\n* @link[Access-Control-Allow-Methods](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Methods) { open=new }\n* @link[Access-Control-Allow-Headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers) { open=new }\n\n### JWT tokens verification\n\n* `Verifiers`: list of selected verifiers to apply on the service\n* `Enabled`: if enabled, Otoroshi will enable each verifier of the previous list\n* `Excluded patterns`: list of routes where the verifiers will not be applied\n\n### Pre Routing\n\nThis part has been deprecated and moved to the plugin section.\n\n### Access validation\n\nThis part has been deprecated and moved to the plugin section.\n\n### Gzip support\n\n* `Mimetypes allowed list`: gzip only the files matching a format in the list\n* `Mimetypes blocklist`: will not gzip files matching a format in the list. A possible way is to allow all formats by default by setting `*` in the `Mimetypes allowed list` and adding the unwanted formats to this list.\n* `Compression level`: the compression level where 9 gives us maximum compression but at the slowest speed. 
The default compression level is 5 and is a good compromise between speed and compression ratio.\n* `Buffer size`: the size used to chunk up the stream of bytes\n* `Chunk threshold`: if the size of the response exceeds the threshold, the response will be chunked\n* `Excluded patterns`: by default, when gzip is enabled, everything has gzip. But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n\n### Client settings\n\n* `Use circuit breaker`: use a circuit breaker to avoid cascading failure when calling chains of services. Highly recommended!\n* `Cache connections`: use a cache at host connection level to avoid reconnection time\n* `Client attempts`: specify how many times the client will retry to fetch the result of the request after an error before giving up.\n* `Client call timeout`: specify how long each call should last at most in milliseconds.\n* `Client call and stream timeout`: specify how long each call should last at most in milliseconds for handling the request and streaming the response.\n* `Client connection timeout`: specify how long each connection should last at most in milliseconds.\n* `Client idle timeout`: specify how long each connection can stay in idle state at most in milliseconds.\n* `Client global timeout`: specify how long the global call (with retries) should last at most in milliseconds.\n* `C.breaker max errors`: specify how many errors can pass before opening the circuit breaker\n* `C.breaker retry delay`: specify the delay between two retries. 
On each retry, the delay is multiplied by the backoff factor\n* `C.breaker backoff factor`: specify the factor to multiply the delay by for each retry\n* `C.breaker window`: specify the sliding window time for the circuit breaker in milliseconds, after this time, the error count will be reset\n\n#### Custom timeout settings (list)\n\n* `Path`: the path on which the timeout will be active\n* `Client connection timeout`: specify how long each connection should last at most in milliseconds.\n* `Client idle timeout`: specify how long each connection can stay in idle state at most in milliseconds.\n* `Client call and stream timeout`: specify how long each call should last at most in milliseconds for handling the request and streaming the response.\n* `Call timeout`: specify how long each call should last at most in milliseconds.\n* `Client global timeout`: specify how long the global call (with retries) should last at most in milliseconds.\n\n#### Proxy settings\n\n* `Proxy host`: host of the proxy behind the identity provider\n* `Proxy port`: port of the proxy behind the identity provider\n* `Proxy principal`: user of the proxy \n* `Proxy password`: password of the proxy\n\n### HTTP Headers\n\n* `Additional Headers In`: specify headers that will be added to each client request (from Otoroshi to target). 
Useful to add authentication.\n* `Additional Headers Out`: specify headers that will be added to each client response (from Otoroshi to client).\n* `Missing only Headers In`: specify headers that will be added to each client request (from Otoroshi to target) if not in the original request.\n* `Missing only Headers Out`: specify headers that will be added to each client response (from Otoroshi to client) if not in the original response.\n* `Remove incoming headers`: remove headers in the client request (from client to Otoroshi).\n* `Remove outgoing headers`: remove headers in the client response (from Otoroshi to client).\n* `Security headers`:\n* `Utility headers`:\n* `Matching Headers`: specify headers that MUST be present on the client request to route it (pre routing). Useful to implement versioning.\n* `Headers verification`: verify that some headers have a specific value (post routing)\n\n### Additional settings \n\n* `OpenAPI`: specify an open API descriptor. Useful to display the documentation\n* `Tags`: specify tags for the service\n* `Metadata`: specify metadata for the service. Useful for analytics\n* `IP allowed list`: IP addresses that can access the service\n* `IP blocklist`: IP addresses that cannot access the service\n\n### Canary mode\n\n* `Enabled`: canary mode enabled\n* `Traffic split`: ratio of traffic that will be sent to canary targets. For instance, if traffic is at 0.2, for 10 requests, 2 requests will go to canary targets and 8 will go to regular targets.\n* `Targets`: the list of targets that Otoroshi will proxy and expose through the subdomain defined before. Otoroshi will do round-robin load balancing between all those targets with a circuit breaker mechanism to avoid cascading failures\n * `Target`:\n * `Targets root`: Otoroshi will append this root to any target chosen. 
If the specified root is '/api/foo', then a request to https://yyyyyyy/bar will actually hit https://xxxxxxxxx/api/foo/bar\n* `Campaign stats`:\n* `Use canary targets as standard targets`:\n\n### Healthcheck settings\n\n* `HealthCheck enabled`: to help failing fast, you can activate a healthcheck on a specific URL.\n* `HealthCheck url`: the URL to check. Should return an HTTP 200 response. You can also respond with an 'Opun-Health-Check-Logic-Test-Result' header set to the value of the 'Opun-Health-Check-Logic-Test' request header + 42 to make the healthcheck complete.\n\n### Fault injection\n\n* `User facing app.`: if the service is set as user facing, the Snow Monkey can be configured so that it is not allowed to create outages on it.\n* `Chaos enabled`: activate or deactivate chaos settings on this service descriptor.\n\n### Custom errors template\n\n* `40x template`: html template displayed when a 40x error occurs\n* `50x template`: html template displayed when a 50x error occurs\n* `Build mode template`: html template displayed when the build mode is enabled\n* `Maintenance mode template`: html template displayed when the maintenance mode is enabled\n* `Custom messages`: override error messages one by one\n\n### Request transformation\n\nThis part has been deprecated and moved to the plugin section.\n\n### Plugins\n\n* `Plugins`:\n \n * `Inject default config`: injects, if present, the default configuration of a selected plugin in the configuration object\n * `Documentation`: link to the documentation website of the plugin\n * `show/hide config. panel`: shows and hides the plugin panel which contains the plugin description and configuration\n* `Excluded patterns`: by default, when plugins are enabled, everything passes through. 
But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n* `Configuration`: the configuration of each enabled plugin, split by names and grouped in the same configuration object."},{"name":"service-groups.md","id":"/entities/service-groups.md","url":"/entities/service-groups.html","title":"Service groups","content":"# Service groups\n\nA service group is composed of a unique `id`, a `Group name`, a `Group description`, an `Organization` and a `Team`. Like all Otoroshi resources, a service group has a list of associated tags and metadata.\n\n@@@ div { .centered-img }\n\n@@@\n\nThe most obvious usage of a service group is to group a list of services. \n\nOnce done, you can authorize an api key on a specific group: instead of authorizing an api key for each service, you can group a list of services together and give authorization on the whole group (read the page on the api keys and the usage of the `Authorized on.` field).\n\n## Access to the list of service groups\n\nTo visualize and edit the list of groups, you can navigate to your instance on the `https://otoroshi.xxxxx/bo/dashboard/groups` route or click on the cog icon and select the Service groups button.\n\nOnce on the page, you can create a new item, edit an existing service group or delete an existing one.\n\n> When a service group is deleted, the associated resources are not deleted. 
Instead, the service group of the associated resources is left empty.\n\n"},{"name":"tcp-services.md","id":"/entities/tcp-services.md","url":"/entities/tcp-services.html","title":"TCP services","content":"# TCP services\n\nTCP services are a special kind of otoroshi service meant to proxy pure TCP connections (ssh, database, http, etc)\n\n## Global information\n\n* `Id`: generated unique identifier\n* `TCP service name`: the name of your TCP service\n* `Enabled`: enable and disable the service\n* `TCP service port`: the listening port\n* `TCP service interface`: network interface the service listens on\n* `Tags`: list of tags associated with the service\n* `Metadata`: list of metadata associated with the service\n\n## TLS\n\nthis section controls the TLS exposure of the service\n\n* `TLS mode`\n * `Disabled`: no TLS\n * `PassThrough`: as the target exposes TLS, the call will pass through otoroshi and use the target's TLS\n * `Enabled`: the service will be exposed using TLS and will choose the certificate based on SNI\n* `Client Auth.`\n * `None` no mTLS needed to pass\n * `Want` pass with or without mTLS\n * `Need` need mTLS to pass\n\n## Server Name Indication (SNI)\n\nthis section controls how SNI should be treated\n\n* `SNI routing enabled`: if enabled, the server will use the SNI hostname to determine which certificate to present to the client\n* `Forward to target if no SNI match`: if enabled, a call without any SNI match will be forwarded to the target\n* `Target host`: host of the target called if no SNI\n* `Target ip address`: ip of the target called if no SNI\n* `Target port`: port of the target called if no SNI\n* `TLS call`: encrypt the communication with TLS\n\n## Rules\n\nfor any listening TCP proxy, it is possible to route to multiple targets based on SNI or the extracted http host (if proxying http)\n\n* `Matching domain name`: regex used to filter the list of domains where the rule will be applied\n* `Target host`: host of the target\n* `Target ip address`: ip of the 
target\n* `Target port`: port of the target\n* `TLS call`: enable this flag if the target is exposed using TLS\n"},{"name":"teams.md","id":"/entities/teams.md","url":"/entities/teams.html","title":"Teams","content":"# Teams\n\nIn Otoroshi, all resources are attached to an `Organization` and a `Team`. \n\nA team is composed of a unique `id`, a `name`, a `description` and an `Organization`. Like all Otoroshi resources, a team has a list of associated tags and metadata.\n\nA team belongs to a single organization and can be used on multiple resources (services, api keys, etc.).\n\nA user connected to the Otoroshi UI has a list of associated teams and organizations. This can be helpful when you want to restrict the rights of a connected user.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Access to the list of teams\n\nTo visualize and edit the list of teams, you can navigate to your instance on the `https://otoroshi.xxxxxx/bo/dashboard/teams` route or click on the cog icon and select the teams button.\n\nOnce on the page, you can create a new item, edit an existing team or delete an existing one.\n\n> When a team is deleted, the associated resources are not deleted. Instead, the team of the associated resources is left empty.\n\n## Entities location\n\nAny otoroshi entity has a location property (`_loc` when serialized to json) explaining where and by whom the entity can be seen. 
\n\nAn entity can be part of multiple teams in an organization\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"tenant-1\",\n \"teams\": [\n \"team-1\",\n \"team-2\"\n ]\n }\n ...\n}\n```\n\nor all teams\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"tenant-1\",\n \"teams\": [\n \"*\"\n ]\n }\n ...\n}\n```"},{"name":"features.md","id":"/features.md","url":"/features.html","title":"Features","content":"# Features\n\n**Traffic Management**\n\n* Can proxy any HTTP(s) service (apis, webapps, websocket, etc)\n* Can proxy any TCP service (app, database, etc)\n* Can proxy any GRPC service\n* Multiple load-balancing options: \n * RoundRobin\n * Random\n * Sticky\n * IP address hash\n * Best Response Time\n* Distributed in-flight request limiting\n* Distributed rate limiting\n* End-to-end HTTP/1.1 support\n* End-to-end H2 support\n* End-to-end H3 support\n* Traffic mirroring\n* Traffic capture\n* Canary deployments\n* Relay routing \n* Tunnels for easier network exposure\n* Error templates\n\n**Routing**\n\n* Router can support tens of thousands of concurrent routes\n* Router supports path param extraction (can be regex validated)\n* Routing based on \n * method\n * hostname (exact, wildcard)\n * path (exact, wildcard)\n * header values (exact, regex, wildcard)\n * query param values (exact, regex, wildcard)\n* Supports full url rewriting\n\n**Routes customization**\n\n* Dozens of built-in middlewares (policies/plugins) \n * circuit breakers\n * automatic retries\n * buffering\n * gzip\n * headers manipulation\n * cors\n * body transformation\n * graphql gateway\n * etc \n* Supports middlewares compiled to WASM (using extism)\n* Supports Open Policy Agent policies for traffic control\n* Write your own custom middlewares\n * in scala deployed as jar files\n * in whatever language you want that can be compiled to WASM\n\n**Routes Monitoring**\n\n* Active healthchecks\n* Route state for the last 90 days\n* Calls tracing using W3C trace context\n* Export alerts and events to 
external database\n * file\n * S3\n * elastic\n * pulsar\n * kafka\n * webhook\n * mailer\n * logger\n* Real-time traffic metrics\n* Real-time traffic metrics export (Datadog, Prometheus, StatsD)\n\n**Services discovery**\n\n* through DNS\n* through Eureka 2\n* through Kubernetes API\n* through custom otoroshi protocol\n\n**API security**\n\n* Access management with apikeys and quotas\n* Automatic apikey secret rotation\n* HTTPS and TLS\n* End-to-end mTLS calls \n* Routing constraints\n* Routing restrictions\n* JWT token validation and manipulation\n * can support multiple validators on the same route\n\n**Administration UI**\n\n* Manage and organize all resources\n* Secured user access with Authentication modules\n* Audited user actions\n* Dynamic changes at runtime without full reload\n* Test your routes without any external tools\n\n**Webapp authentication and security**\n\n* OAuth2.0/2.1 authentication\n* OpenID Connect (OIDC) authentication\n* LDAP authentication\n* JWT authentication\n* OAuth 1.0a authentication\n* SAML V2 authentication\n* Internal users management\n* Secret vaults support\n * Environment variables\n * Hashicorp Vault\n * Azure key vault\n * AWS secret manager\n * Google secret manager\n * Kubernetes secrets\n * Izanami\n * Spring Cloud Config\n * Http\n * Local\n\n**Certificates management**\n\n* Dynamic TLS certificates store \n* Dynamic TLS termination\n* Internal PKI\n * generate self-signed certificates/CAs\n * generate/sign certificates/CAs/subCAs\n * AIA\n * OCSP responder\n * import P12/certificate bundles\n* ACME / Let's Encrypt support\n* On-the-fly certificate generation based on a CA certificate without request loss\n* JWKS exposure for public keypairs\n* Default certificate\n* Customize mTLS trusted CAs in the TLS handshake\n\n**Clustering**\n\n* based on a control plane/data plane pattern\n* encrypted communication\n* backup capabilities allowing the data plane to start even when the control plane is unreachable, to improve resilience\n* relay 
routing to forward traffic from one network zone to others\n* distributed web authentication across nodes\n\n**Performances and testing**\n\n* Chaos engineering\n* Horizontal Scalability or clustering\n* Canary testing\n* Http client in UI\n* Request debugging\n* Traffic capture\n\n**Kubernetes integration**\n\n* Standard Ingress controller\n* Custom Ingress controller\n * Manage Otoroshi resources from Kubernetes\n* Validation of resources via webhook\n* Service Mesh for easy service-to-service communication (based on Kubernetes sidecars)\n\n**Organize**\n\n* multi-organizations\n* multi-teams\n* routes groups\n\n**Developers portal**\n\n* Using @link:[Daikoku](https://maif.github.io/daikoku/manual/index.html) { open=new }\n"},{"name":"getting-started.md","id":"/getting-started.md","url":"/getting-started.html","title":"Getting Started","content":"# Getting Started\n\n- [Protect your service with Otoroshi ApiKey](#protect-your-service-with-otoroshi-apikey)\n- [Secure your web app in 2 calls with an authentication](#secure-your-web-app-in-2-calls-with-an-authentication)\n\nDownload the latest jar of Otoroshi\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nOnce downloaded, run Otoroshi.\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nYes, that command is all it takes to start it up.\n\n## Protect your service with Otoroshi ApiKey\n\nCreate a new route, exposed on `http://myapi.oto.tools:8080`, which will forward all requests to the mirror `https://mirror.otoroshi.io`.\n\n```sh\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"myapi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n 
\"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"enabled\": true,\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"update_quotas\": true\n }\n }\n ]\n}\nEOF\n```\n\nNow that we have created our route, let’s see if our request reaches our upstream service. \nYou should receive an error from Otoroshi about a missing api key in our request.\n\n```sh\ncurl 'http://myapi.oto.tools:8080'\n```\n\nIt looks like we don’t have access to it. Create your first api key with a quota of 10 calls by day and month.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/apikeys' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"clientId\": \"my-first-apikey-id\",\n \"clientSecret\": \"my-first-apikey-secret\",\n \"clientName\": \"my-first-apikey\",\n \"description\": \"my-first-apikey-description\",\n \"authorizedGroup\": \"default\",\n \"enabled\": true,\n \"throttlingQuota\": 10,\n \"dailyQuota\": 10,\n \"monthlyQuota\": 10\n}\nEOF\n```\n\nCall your api with the generated apikey.\n\n```sh\ncurl 'http://myapi.oto.tools:8080' -u my-first-apikey-id:my-first-apikey-secret\n```\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.otoroshi.io\",\n \"accept\": \"*/*\",\n \"user-agent\": \"curl/7.64.1\",\n \"authorization\": \"Basic bXktZmlyc3QtYXBpLWtleS1pZDpteS1maXJzdC1hcGkta2V5LXNlY3JldA==\",\n \"otoroshi-request-id\": \"1465298507974836306\",\n \"otoroshi-proxied-host\": \"myapi.oto.tools:8080\",\n \"otoroshi-request-timestamp\": \"2021-11-29T13:36:02.888+01:00\",\n },\n \"body\": \"\"\n}\n```\n\nCheck your remaining quotas\n\n```sh\ncurl 'http://myapi.oto.tools:8080' -u my-first-apikey-id:my-first-apikey-secret --include\n```\n\nThis should output these following Otoroshi headers\n\n```json\nOtoroshi-Daily-Calls-Remaining: 6\nOtoroshi-Monthly-Calls-Remaining: 6\n```\n\nKeep calling the api and confirm that Otoroshi is sending you an apikey 
exceeding quota error\n\n\n```json\n{ \n \"Otoroshi-Error\": \"You performed too much requests\"\n}\n```\n\nWell done, you have secured your first api with the apikey system and limited call quotas.\n\n## Secure your web app in 2 calls with an authentication\n\nCreate an in-memory authentication module, with one registered user, to protect your service.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/auths' \\\n-H \"Otoroshi-Client-Id: admin-api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"type\":\"basic\",\n \"id\":\"auth_mod_in_memory_auth\",\n \"name\":\"in-memory-auth\",\n \"desc\":\"in-memory-auth\",\n \"users\":[\n {\n \"name\":\"User Otoroshi\",\n \"password\":\"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\n \"email\":\"user@foo.bar\",\n \"metadata\":{\n \"username\":\"roger\"\n },\n \"tags\":[\"foo\"],\n \"webauthn\":null,\n \"rights\":[{\n \"tenant\":\"*:r\",\n \"teams\":[\"*:r\"]\n }]\n }\n ],\n \"sessionCookieValues\":{\n \"httpOnly\":true,\n \"secure\":false\n }\n}\nEOF\n```\n\nThen create a service secured by the previous authentication module, which proxies `google.fr` on `webapp.oto.tools`.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"webapp\",\n \"frontend\": {\n \"domains\": [\"webapp.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"google.fr\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"enabled\": true,\n \"config\": {\n \"pass_with_apikey\": false,\n \"auth_module\": null,\n \"module\": \"auth_mod_in_memory_auth\"\n }\n }\n ]\n}\nEOF\n```\n\nNavigate to http://webapp.oto.tools:8080, login with `user@foo.bar/password` and check that you're redirected to 
`google` page.\n\nWell done! You completed the discovery tutorial."},{"name":"communicate-with-kafka.md","id":"/how-to-s/communicate-with-kafka.md","url":"/how-to-s/communicate-with-kafka.html","title":"Communicate with Kafka","content":"# Communicate with Kafka\n\nEvery matching event can be sent to an [Apache Kafka topic](https://kafka.apache.org/).\n\n### SASL mechanism\n\nCreate a `docker-compose.yml` with the following content\n\n````yml\nversion: \"2\"\n\nservices:\n zookeeper:\n image: docker.io/bitnami/zookeeper:3.8\n ports:\n - \"2181:2181\"\n environment:\n - ALLOW_ANONYMOUS_LOGIN=yes\n kafka:\n image: docker.io/bitnami/kafka:3.2\n ports:\n - \"9092:9092\"\n environment:\n - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181\n - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,CLIENT:SASL_PLAINTEXT\n - ALLOW_PLAINTEXT_LISTENER=yes\n - KAFKA_CFG_LISTENERS=INTERNAL://:9093,CLIENT://:9092\n - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9093,CLIENT://kafka:9092\n - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL\n - KAFKA_CLIENT_USERS=user\n - KAFKA_CLIENT_PASSWORDS=password\n\n depends_on:\n - zookeeper\n````\n\nLaunch the command to create the zookeeper and kafka containers\n\n````bash\ndocker-compose up -d\n````\n\nCreate a new exporter on your Otoroshi instance with the following values\n\n@@@ div { .centered-img }\n\n@@@\n\n### PLAINTEXT mechanism\n\nCreate a `docker-compose.yml` with the following content\n\n````yml\nversion: \"2\"\n\nservices:\n zookeeper:\n image: docker.io/bitnami/zookeeper:3.8\n ports:\n - \"2181:2181\"\n environment:\n - ALLOW_ANONYMOUS_LOGIN=yes\n kafka:\n image: docker.io/bitnami/kafka:3.2\n ports:\n - \"9092:9092\"\n environment:\n - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181\n - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT\n - ALLOW_PLAINTEXT_LISTENER=yes\n - KAFKA_CFG_LISTENERS=INTERNAL://:9093,CLIENT://:9092\n - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9093,CLIENT://kafka:9092\n - 
KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL\n\n depends_on:\n - zookeeper\n````\n\nLaunch the command to create the zookeeper and kafka containers\n\n````bash\ndocker-compose up -d\n````\n\nCreate a new exporter on your Otoroshi instance with the following values\n\n@@@ div { .centered-img }\n\n@@@\n\n### SSL mechanism\n\n````bash\nwget https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl.sh\n````\n\n````bash\nchmod +x kafka-generate-ssl.sh\n````\n\nCreate a `docker-compose.yml` with the following content\n\n````yml\nversion: '3.5'\n\nservices:\n\n zookeeper:\n image: \"wurstmeister/zookeeper:latest\"\n ports:\n - \"2181:2181\"\n\n kafka:\n image: wurstmeister/kafka:2.12-2.2.0\n depends_on:\n - zookeeper\n ports:\n - \"9092:9092\"\n environment:\n KAFKA_ADVERTISED_LISTENERS: 'SSL://kafka:9092'\n KAFKA_LISTENERS: 'SSL://0.0.0.0:9092'\n KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'\n KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'\n KAFKA_SSL_KEYSTORE_LOCATION: '/keystore/kafka.keystore.jks'\n KAFKA_SSL_KEYSTORE_PASSWORD: 'otoroshi'\n KAFKA_SSL_KEY_PASSWORD: 'otoroshi'\n KAFKA_SSL_TRUSTSTORE_LOCATION: '/truststore/kafka.truststore.jks'\n KAFKA_SSL_TRUSTSTORE_PASSWORD: 'otoroshi'\n KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ''\n KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ''\n KAFKA_SECURITY_INTER_BROKER_PROTOCOL: 'SSL'\n volumes:\n - ./truststore:/truststore\n - ./keystore:/keystore\n````\n\nLaunch the command to create the zookeeper and kafka containers\n\n````bash\ndocker-compose up -d\n````\n\nCreate a new exporter on your Otoroshi instance with the following values\n\n@@@ div { .centered-img }\n\n@@@\n\n### SASL_SSL mechanism\n\nGenerate the TLS certificates for the Kafka broker.\n\nCreate a file `generate.sh` with the following content and run the command\n\n````bash\nchmod +x generate.sh && ./generate.sh\n````\n\n````bash\n#!/usr/bin/env bash\n# Content of the generate.sh file\n\nset -e\n\nKEYSTORE_FILENAME=\"kafka.keystore.jks\"\nVALIDITY_IN_DAYS=3650\nDEFAULT_TRUSTSTORE_FILENAME=\"kafka.truststore.jks\"\nTRUSTSTORE_WORKING_DIRECTORY=\"truststore\"\nKEYSTORE_WORKING_DIRECTORY=\"keystore\"\nCA_CERT_FILE=\"ca-cert\"\nKEYSTORE_SIGN_REQUEST=\"cert-file\"\nKEYSTORE_SIGN_REQUEST_SRL=\"ca-cert.srl\"\nKEYSTORE_SIGNED_CERT=\"cert-signed\"\n\nfunction file_exists_and_exit() {\n echo \"'$1' cannot exist. 
Move or delete it before\"\n echo \"re-running this script.\"\n exit 1\n}\n\nif [ -e \"$KEYSTORE_WORKING_DIRECTORY\" ]; then\n file_exists_and_exit $KEYSTORE_WORKING_DIRECTORY\nfi\n\nif [ -e \"$CA_CERT_FILE\" ]; then\n file_exists_and_exit $CA_CERT_FILE\nfi\n\nif [ -e \"$KEYSTORE_SIGN_REQUEST\" ]; then\n file_exists_and_exit $KEYSTORE_SIGN_REQUEST\nfi\n\nif [ -e \"$KEYSTORE_SIGN_REQUEST_SRL\" ]; then\n file_exists_and_exit $KEYSTORE_SIGN_REQUEST_SRL\nfi\n\nif [ -e \"$KEYSTORE_SIGNED_CERT\" ]; then\n file_exists_and_exit $KEYSTORE_SIGNED_CERT\nfi\n\necho\necho \"Welcome to the Kafka SSL keystore and truststore generator script.\"\n\necho\necho \"First, do you need to generate a trust store and associated private key,\"\necho \"or do you already have a trust store file and private key?\"\necho\necho -n \"Do you need to generate a trust store and associated private key? [yn] \"\nread generate_trust_store\n\ntrust_store_file=\"\"\ntrust_store_private_key_file=\"\"\n\nif [ \"$generate_trust_store\" == \"y\" ]; then\n if [ -e \"$TRUSTSTORE_WORKING_DIRECTORY\" ]; then\n file_exists_and_exit $TRUSTSTORE_WORKING_DIRECTORY\n fi\n\n mkdir $TRUSTSTORE_WORKING_DIRECTORY\n echo\n echo \"OK, we'll generate a trust store and associated private key.\"\n echo\n echo \"First, the private key.\"\n echo\n echo \"You will be prompted for:\"\n echo \" - A password for the private key. 
Remember this.\"\n echo \" - Information about you and your company.\"\n echo \" - NOTE that the Common Name (CN) is currently not important.\"\n\n openssl req -new -x509 -keyout $TRUSTSTORE_WORKING_DIRECTORY/ca-key \\\n -out $TRUSTSTORE_WORKING_DIRECTORY/$CA_CERT_FILE -days $VALIDITY_IN_DAYS\n\n trust_store_private_key_file=\"$TRUSTSTORE_WORKING_DIRECTORY/ca-key\"\n\n echo\n echo \"Two files were created:\"\n echo \" - $TRUSTSTORE_WORKING_DIRECTORY/ca-key -- the private key used later to\"\n echo \" sign certificates\"\n echo \" - $TRUSTSTORE_WORKING_DIRECTORY/$CA_CERT_FILE -- the certificate that will be\"\n echo \" stored in the trust store in a moment and serve as the certificate\"\n echo \" authority (CA). Once this certificate has been stored in the trust\"\n echo \" store, it will be deleted. It can be retrieved from the trust store via:\"\n echo \" $ keytool -keystore -export -alias CARoot -rfc\"\n\n echo\n echo \"Now the trust store will be generated from the certificate.\"\n echo\n echo \"You will be prompted for:\"\n echo \" - the trust store's password (labeled 'keystore'). Remember this\"\n echo \" - a confirmation that you want to import the certificate\"\n\n keytool -keystore $TRUSTSTORE_WORKING_DIRECTORY/$DEFAULT_TRUSTSTORE_FILENAME \\\n -alias CARoot -import -file $TRUSTSTORE_WORKING_DIRECTORY/$CA_CERT_FILE\n\n trust_store_file=\"$TRUSTSTORE_WORKING_DIRECTORY/$DEFAULT_TRUSTSTORE_FILENAME\"\n\n echo\n echo \"$TRUSTSTORE_WORKING_DIRECTORY/$DEFAULT_TRUSTSTORE_FILENAME was created.\"\n\n # don't need the cert because it's in the trust store.\n rm $TRUSTSTORE_WORKING_DIRECTORY/$CA_CERT_FILE\nelse\n echo\n echo -n \"Enter the path of the trust store file. \"\n read -e trust_store_file\n\n if ! [ -f $trust_store_file ]; then\n echo \"$trust_store_file isn't a file. Exiting.\"\n exit 1\n fi\n\n echo -n \"Enter the path of the trust store's private key. \"\n read -e trust_store_private_key_file\n\n if ! 
[ -f $trust_store_private_key_file ]; then\n echo \"$trust_store_private_key_file isn't a file. Exiting.\"\n exit 1\n fi\nfi\n\necho\necho \"Continuing with:\"\necho \" - trust store file: $trust_store_file\"\necho \" - trust store private key: $trust_store_private_key_file\"\n\nmkdir $KEYSTORE_WORKING_DIRECTORY\n\necho\necho \"Now, a keystore will be generated. Each broker and logical client needs its own\"\necho \"keystore. This script will create only one keystore. Run this script multiple\"\necho \"times for multiple keystores.\"\necho\necho \"You will be prompted for the following:\"\necho \" - A keystore password. Remember it.\"\necho \" - Personal information, such as your name.\"\necho \" NOTE: currently in Kafka, the Common Name (CN) does not need to be the FQDN of\"\necho \" this host. However, at some point, this may change. As such, make the CN\"\necho \" the FQDN. Some operating systems call the CN prompt 'first / last name'\"\necho \" - A key password, for the key being generated within the keystore. Remember this.\"\n\n# To learn more about CNs and FQDNs, read:\n# https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/X509ExtendedTrustManager.html\n\nkeytool -keystore $KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME \\\n -alias localhost -validity $VALIDITY_IN_DAYS -genkey -keyalg RSA\n\necho\necho \"'$KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME' now contains a key pair and a\"\necho \"self-signed certificate. Again, this keystore can only be used for one broker or\"\necho \"one logical client. 
Other brokers or clients need to generate their own keystores.\"\n\necho\necho \"Fetching the certificate from the trust store and storing in $CA_CERT_FILE.\"\necho\necho \"You will be prompted for the trust store's password (labeled 'keystore')\"\n\nkeytool -keystore $trust_store_file -export -alias CARoot -rfc -file $CA_CERT_FILE\n\necho\necho \"Now a certificate signing request will be made to the keystore.\"\necho\necho \"You will be prompted for the keystore's password.\"\nkeytool -keystore $KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME -alias localhost \\\n -certreq -file $KEYSTORE_SIGN_REQUEST\n\necho\necho \"Now the trust store's private key (CA) will sign the keystore's certificate.\"\necho\necho \"You will be prompted for the trust store's private key password.\"\nopenssl x509 -req -CA $CA_CERT_FILE -CAkey $trust_store_private_key_file \\\n -in $KEYSTORE_SIGN_REQUEST -out $KEYSTORE_SIGNED_CERT \\\n -days $VALIDITY_IN_DAYS -CAcreateserial\n# creates $KEYSTORE_SIGN_REQUEST_SRL which is never used or needed.\n\necho\necho \"Now the CA will be imported into the keystore.\"\necho\necho \"You will be prompted for the keystore's password and a confirmation that you want to\"\necho \"import the certificate.\"\nkeytool -keystore $KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME -alias CARoot \\\n -import -file $CA_CERT_FILE\nrm $CA_CERT_FILE # delete the trust store cert because it's stored in the trust store.\n\necho\necho \"Now the keystore's signed certificate will be imported back into the keystore.\"\necho\necho \"You will be prompted for the keystore's password.\"\nkeytool -keystore $KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME -alias localhost -import \\\n -file $KEYSTORE_SIGNED_CERT\n\necho\necho \"All done!\"\necho\necho \"Delete intermediate files? 
They are:\"\necho \" - '$KEYSTORE_SIGN_REQUEST_SRL': CA serial number\"\necho \" - '$KEYSTORE_SIGN_REQUEST': the keystore's certificate signing request\"\necho \" (that was fulfilled)\"\necho \" - '$KEYSTORE_SIGNED_CERT': the keystore's certificate, signed by the CA, and stored back\"\necho \" into the keystore\"\necho -n \"Delete? [yn] \"\nread delete_intermediate_files\n\nif [ \"$delete_intermediate_files\" == \"y\" ]; then\n rm $KEYSTORE_SIGN_REQUEST_SRL\n rm $KEYSTORE_SIGN_REQUEST\n rm $KEYSTORE_SIGNED_CERT\nfi\n````\n\nCreate, in the same repository, a repository named `secrets` with the following configuration.\n\n````bash \n# Content of ~/tmp/kafka/secrets/kafka_server_jaas.conf\n\nClient {\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username=\"user\"\n password=\"password\";\n};\n````\n\nCreate a `docker-compose.yml` file with the following content.\n\n````bash\nversion: '3.5'\n\nservices:\n\n zookeeper:\n image: \"bitnami/zookeeper:latest\"\n ports:\n - \"2181:2181\"\n environment:\n - ALLOW_ANONYMOUS_LOGIN=yes\n\n kafka:\n image: bitnami/kafka:latest\n depends_on:\n - zookeeper\n ports:\n - '9092:9092'\n environment:\n ALLOW_PLAINTEXT_LISTENER: 'yes'\n KAFKA_ZOOKEEPER_PROTOCOL: 'PLAINTEXT'\n KAFKA_CFG_ZOOKEEPER_CONNECT: 'zookeeper:2181'\n KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: 'INTERNAL:PLAINTEXT,CLIENT:SASL_SSL'\n KAFKA_CFG_LISTENERS: 'INTERNAL://:9093,CLIENT://:9092'\n KAFKA_INTER_BROKER_LISTENER_NAME: 'INTERNAL'\n KAFKA_CFG_ADVERTISED_LISTENERS: 'INTERNAL://kafka:9093,CLIENT://kafka:9092'\n KAFKA_CLIENT_USERS: 'user'\n KAFKA_CLIENT_PASSWORDS: 'password'\n KAFKA_CERTIFICATE_PASSWORD: 'otoroshi'\n KAFKA_TLS_TYPE: 'JKS'\n KAFKA_OPTS: \"-Djava.security.auth.login.config=/opt/kafka/kafka_server_jaas.conf\"\n volumes:\n - ./secrets/kafka_server_jaas.conf:/opt/kafka/kafka_server_jaas.conf\n - ./truststore/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro\n - 
./keystore/kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro\n````\n\nAt this point, your directory should look like this\n````\n/tmp/kafka\n | generate.sh\n | docker-compose.yml\n | truststore\n | kafka.truststore.jks\n | keystore \n | kafka.keystore.jks\n | secrets \n | kafka_server_jaas.conf\n````\n\nLaunch the command to create the zookeeper and kafka containers\n\n````bash\ndocker-compose up -d\n````\n\nCreate a new exporter on your Otoroshi instance with the following values\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"create-custom-auth-module.md","id":"/how-to-s/create-custom-auth-module.md","url":"/how-to-s/create-custom-auth-module.html","title":"Create your Authentication module","content":"# Create your Authentication module\n\nAuthentication modules can be used to protect routes. In some cases, you need to write your own custom authentication module, either from scratch or by inheriting from and extending an existing module.\n\nYou can write your own authentication module using your favorite IDE. Just create an SBT project with the following dependencies. It can be quite handy to manage the source code like any other piece of code, and it avoids the compilation time for the script at Otoroshi startup.\n\n```scala\nlazy val root = (project in file(\".\")).\n settings(\n inThisBuild(List(\n organization := \"com.example\",\n scalaVersion := \"2.12.7\",\n version := \"0.1.0-SNAPSHOT\"\n )),\n name := \"my-custom-auth-module\",\n libraryDependencies += \"fr.maif\" %% \"otoroshi\" % \"1x.x.x\"\n )\n```\n\nJust below, you can find an example of a custom Auth. module. 
\n\n```scala\npackage auth.custom\n\nimport akka.http.scaladsl.util.FastFuture\nimport otoroshi.auth.{AuthModule, AuthModuleConfig, Form, SessionCookieValues}\nimport otoroshi.controllers.routes\nimport otoroshi.env.Env\nimport otoroshi.models._\nimport otoroshi.security.IdGenerator\nimport otoroshi.utils.JsonPathValidator\nimport otoroshi.utils.syntax.implicits.BetterSyntax\nimport play.api.http.MimeTypes\nimport play.api.libs.json._\nimport play.api.mvc._\n\nimport scala.concurrent.{ExecutionContext, Future}\nimport scala.util.{Failure, Success, Try}\n\ncase class CustomModuleConfig(\n id: String,\n name: String,\n desc: String,\n clientSideSessionEnabled: Boolean,\n sessionMaxAge: Int = 86400,\n userValidators: Seq[JsonPathValidator] = Seq.empty,\n tags: Seq[String],\n metadata: Map[String, String],\n sessionCookieValues: SessionCookieValues,\n location: otoroshi.models.EntityLocation = otoroshi.models.EntityLocation(),\n form: Option[Form] = None,\n foo: String = \"bar\"\n ) extends AuthModuleConfig {\n def `type`: String = \"custom\"\n def humanName: String = \"Custom Authentication\"\n\n override def authModule(config: GlobalConfig): AuthModule = CustomAuthModule(this)\n override def withLocation(location: EntityLocation): AuthModuleConfig = copy(location = location)\n\n lazy val format = new Format[CustomModuleConfig] {\n override def writes(o: CustomModuleConfig): JsValue = o.asJson\n\n override def reads(json: JsValue): JsResult[CustomModuleConfig] = Try {\n CustomModuleConfig(\n location = otoroshi.models.EntityLocation.readFromKey(json),\n id = (json \\ \"id\").as[String],\n name = (json \\ \"name\").as[String],\n desc = (json \\ \"desc\").asOpt[String].getOrElse(\"--\"),\n clientSideSessionEnabled = (json \\ \"clientSideSessionEnabled\").asOpt[Boolean].getOrElse(true),\n sessionMaxAge = (json \\ \"sessionMaxAge\").asOpt[Int].getOrElse(86400),\n metadata = (json \\ \"metadata\").asOpt[Map[String, String]].getOrElse(Map.empty),\n tags = (json \\ 
\"tags\").asOpt[Seq[String]].getOrElse(Seq.empty[String]),\n sessionCookieValues =\n (json \\ \"sessionCookieValues\").asOpt(SessionCookieValues.fmt).getOrElse(SessionCookieValues()),\n userValidators = (json \\ \"userValidators\")\n .asOpt[Seq[JsValue]]\n .map(_.flatMap(v => JsonPathValidator.format.reads(v).asOpt))\n .getOrElse(Seq.empty),\n form = (json \\ \"form\").asOpt[JsValue].flatMap(json => Form._fmt.reads(json) match {\n case JsSuccess(value, _) => Some(value)\n case JsError(_) => None\n }),\n foo = (json \\ \"foo\").asOpt[String].getOrElse(\"bar\")\n )\n } match {\n case Failure(exception) => JsError(exception.getMessage)\n case Success(value) => JsSuccess(value)\n }\n }.asInstanceOf[Format[AuthModuleConfig]]\n\n override def _fmt()(implicit env: Env): Format[AuthModuleConfig] = format\n\n override def asJson =\n location.jsonWithKey ++ Json.obj(\n \"type\" -> \"custom\",\n \"id\" -> this.id,\n \"name\" -> this.name,\n \"desc\" -> this.desc,\n \"clientSideSessionEnabled\" -> this.clientSideSessionEnabled,\n \"sessionMaxAge\" -> this.sessionMaxAge,\n \"metadata\" -> this.metadata,\n \"tags\" -> JsArray(tags.map(JsString.apply)),\n \"sessionCookieValues\" -> SessionCookieValues.fmt.writes(this.sessionCookieValues),\n \"userValidators\" -> JsArray(userValidators.map(_.json)),\n \"form\" -> this.form.map(Form._fmt.writes),\n \"foo\" -> foo\n )\n\n def save()(implicit ec: ExecutionContext, env: Env): Future[Boolean] = env.datastores.authConfigsDataStore.set(this)\n\n override def cookieSuffix(desc: ServiceDescriptor) = s\"custom-auth-$id\"\n def theDescription: String = desc\n def theMetadata: Map[String, String] = metadata\n def theName: String = name\n def theTags: Seq[String] = tags\n}\n\nobject CustomAuthModule {\n def defaultConfig = CustomModuleConfig(\n id = IdGenerator.namedId(\"auth_mod\", IdGenerator.uuid),\n name = \"My custom auth. module\",\n desc = \"My custom auth. 
module\",\n tags = Seq.empty,\n metadata = Map.empty,\n sessionCookieValues = SessionCookieValues(),\n clientSideSessionEnabled = true,\n form = None)\n}\n\ncase class CustomAuthModule(authConfig: CustomModuleConfig) extends AuthModule {\n def this() = this(CustomAuthModule.defaultConfig)\n\n override def paLoginPage(request: RequestHeader, config: GlobalConfig, descriptor: ServiceDescriptor, isRoute: Boolean)\n (implicit ec: ExecutionContext, env: Env): Future[Result] = {\n val redirect = request.getQueryString(\"redirect\")\n val hash = env.sign(s\"${authConfig.id}:::${descriptor.id}\")\n env.datastores.authConfigsDataStore.generateLoginToken().flatMap { token =>\n Results\n .Ok(auth.custom.views.html.login(s\"/privateapps/generic/callback?desc=${descriptor.id}&hash=$hash&route=${isRoute}\", token))\n .as(MimeTypes.HTML)\n .addingToSession(\n \"ref\" -> authConfig.id,\n s\"pa-redirect-after-login-${authConfig.cookieSuffix(descriptor)}\" -> redirect.getOrElse(\n routes.PrivateAppsController.home.absoluteURL(env.exposedRootSchemeIsHttps)(request)\n )\n )(request)\n .future\n }\n }\n\n override def paLogout(request: RequestHeader, user: Option[PrivateAppsUser], config: GlobalConfig, descriptor: ServiceDescriptor)\n (implicit ec: ExecutionContext, env: Env): Future[Either[Result, Option[String]]] = FastFuture.successful(Right(None))\n\n override def paCallback(request: Request[AnyContent], config: GlobalConfig, descriptor: ServiceDescriptor)\n (implicit ec: ExecutionContext, env: Env): Future[Either[String, PrivateAppsUser]] = {\n PrivateAppsUser(\n randomId = IdGenerator.token(64),\n name = \"foo\",\n email = s\"foo@oto.tools\",\n profile = Json.obj(\n \"name\" -> \"foo\",\n \"email\" -> s\"foo@oto.tools\"\n ),\n realm = authConfig.cookieSuffix(descriptor),\n otoroshiData = None,\n authConfigId = authConfig.id,\n tags = Seq.empty,\n metadata = Map.empty,\n location = authConfig.location\n )\n .validate(authConfig.userValidators)\n .vfuture\n }\n\n override def 
boLoginPage(request: RequestHeader, config: GlobalConfig)(implicit ec: ExecutionContext, env: Env): Future[Result] = ???\n\n override def boLogout(request: RequestHeader, user: BackOfficeUser, config: GlobalConfig)(implicit ec: ExecutionContext, env: Env): Future[Either[Result, Option[String]]] = ???\n\n override def boCallback(request: Request[AnyContent], config: GlobalConfig)(implicit ec: ExecutionContext, env: Env): Future[Either[String, BackOfficeUser]] = ???\n}\n```\n\nThis custom Auth. module inherits from `AuthModule` (an auth. module has to inherit from the `AuthModule` trait to be found by Otoroshi). It exposes a simple login UI and creates a user for each callback request without any verification. Methods starting with `bo` are called when the auth. module is used on the back office; otherwise, the `pa` methods (`pa` for Private App) are called to protect a route.\n\nThis custom Auth. module uses a [Play template](https://www.playframework.com/documentation/2.8.x/ScalaTemplates) to display the login page. It's not required by Otoroshi but it's an easy way to create a login form.\n\n```html \n@import otoroshi.env.Env\n\n@(action: String, token: String)\n\n
<html>\n <head>\n <title>Login page</title>\n </head>\n <body>\n <form method=\"POST\" action=\"@action\">\n <input type=\"hidden\" name=\"token\" value=\"@token\" />\n <button type=\"submit\">Login</button>\n </form>\n </body>\n</html>
\n```\n\nYour file hierarchy should look something like:\n\n```\nauth\n| custom\n | customModule.scala\n | views\n | login.scala.html\n```\n\nWhen your code is ready, create a jar file\n\n```\nsbt package\n```\n\nand add the jar file to the Otoroshi classpath\n\n```sh\njava -cp \"/path/to/customModule.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nThen, in the authentication modules list, you can choose your custom module."},{"name":"custom-initial-state.md","id":"/how-to-s/custom-initial-state.md","url":"/how-to-s/custom-initial-state.html","title":"Initial state customization","content":"# Initial state customization\n\nWhen you start Otoroshi for the first time, some basic entities will be created and stored in the datastore in order to make your instance work properly. However, it might not be enough for your use case, and you may not want to bother with restoring a complete otoroshi export.\n\nIn order to make state customization easy, otoroshi provides the config. key `otoroshi.initialCustomization`, overridden by the env. variable `OTOROSHI_INITIAL_CUSTOMIZATION`.\n\nThe expected structure is the following:\n\n```javascript\n{\n \"config\": { ... },\n \"admins\": [],\n \"simpleAdmins\": [],\n \"serviceGroups\": [],\n \"apiKeys\": [],\n \"serviceDescriptors\": [],\n \"errorTemplates\": [],\n \"jwtVerifiers\": [],\n \"authConfigs\": [],\n \"certificates\": [],\n \"clientValidators\": [],\n \"scripts\": [],\n \"tcpServices\": [],\n \"dataExporters\": [],\n \"tenants\": [],\n \"teams\": []\n}\n```\n\nIn this structure, everything is optional. For every array property, items will be added to the datastore. For the global config. object, you can just add the parts that you need, and they will be merged with the existing config. 
object of the datastore.\n\n## Customize the global config.\n\nFor instance, if you want to customize the behavior of the TLS termination, you can use the following:\n\n```sh\nexport OTOROSHI_INITIAL_CUSTOMIZATION='{\"config\":{\"tlsSettings\":{\"defaultDomain\":\"www.foo.bar\",\"randomIfNotFound\":false}}}'\n```\n\n## Customize entities\n\nIf you want to add apikeys at first boot\n\n```sh\nexport OTOROSHI_INITIAL_CUSTOMIZATION='{\"apikeys\":[{\"_loc\":{\"tenant\":\"default\",\"teams\":[\"default\"]},\"clientId\":\"ksVlQ2KlZm0CnDfP\",\"clientSecret\":\"usZYbE1iwSsbpKY45W8kdbZySj1M5CWvFXe0sPbZ0glw6JalMsgorDvSBdr2ZVBk\",\"clientName\":\"awesome-apikey\",\"description\":\"the awesome apikey\",\"authorizedGroup\":\"default\",\"authorizedEntities\":[\"group_default\"],\"enabled\":true,\"readOnly\":false,\"allowClientIdOnly\":false,\"throttlingQuota\":10000000,\"dailyQuota\":10000000,\"monthlyQuota\":10000000,\"constrainedServicesOnly\":false,\"restrictions\":{\"enabled\":false,\"allowLast\":true,\"allowed\":[],\"forbidden\":[],\"notFound\":[]},\"rotation\":{\"enabled\":false,\"rotationEvery\":744,\"gracePeriod\":168,\"nextSecret\":null},\"validUntil\":null,\"tags\":[],\"metadata\":{}}]}'\n```\n"},{"name":"custom-log-levels.md","id":"/how-to-s/custom-log-levels.md","url":"/how-to-s/custom-log-levels.html","title":"Log levels customization","content":"# Log levels customization\n\nIf you want to customize the log level of your otoroshi instances, it's pretty easy to do using environment variables or the configuration file.\n\n## Customize log level for one logger with configuration file\n\nLet's say you want to see `DEBUG` messages from the logger `otoroshi-http-handler`.\n\nThen you just have to declare in your otoroshi configuration file\n\n```\notoroshi.loggers {\n ...\n otoroshi-http-handler = \"DEBUG\"\n ...\n}\n```\n\nPossible levels are `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`. 
The default is `WARN`.\n\n## Customize log level for one logger with environment variable\n\nLet's say you want to see `DEBUG` messages from the logger `otoroshi-http-handler`.\n\nThen you just have to declare an environment variable named `OTOROSHI_LOGGERS_OTOROSHI_HTTP_HANDLER` with value `DEBUG`. The rule is\n\n```scala\n\"OTOROSHI_LOGGERS_\" + loggerName.toUpperCase().replace(\"-\", \"_\")\n```\n\nPossible levels are `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`. The default is `WARN`.\n\n## List of loggers\n\n* [`otoroshi-error-handler`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-error-handler%22%29)\n* [`otoroshi-http-handler`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-http-handler%22%29)\n* [`otoroshi-http-handler-debug`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-http-handler-debug%22%29)\n* [`otoroshi-websocket-handler`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-websocket-handler%22%29)\n* [`otoroshi-websocket`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-websocket%22%29)\n* [`otoroshi-websocket-handler-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-websocket-handler-actor%22%29)\n* [`otoroshi-snowmonkey`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-snowmonkey%22%29)\n* [`otoroshi-circuit-breaker`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-circuit-breaker%22%29)\n* [`otoroshi-worker`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-worker%22%29)\n* [`otoroshi-auth-controller`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-auth-controller%22%29)\n* 
[`otoroshi-swagger-controller`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-swagger-controller%22%29)\n* [`otoroshi-u2f-controller`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-u2f-controller%22%29)\n* [`otoroshi-backoffice-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-backoffice-api%22%29)\n* [`otoroshi-health-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-health-api%22%29)\n* [`otoroshi-stats-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-stats-api%22%29)\n* [`otoroshi-admin-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-admin-api%22%29)\n* [`otoroshi-auth-modules-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-auth-modules-api%22%29)\n* [`otoroshi-certificates-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-certificates-api%22%29)\n* [`otoroshi-pki`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-pki%22%29)\n* [`otoroshi-scripts-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-scripts-api%22%29)\n* [`otoroshi-analytics-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-api%22%29)\n* [`otoroshi-import-export-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-import-export-api%22%29)\n* [`otoroshi-templates-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-templates-api%22%29)\n* [`otoroshi-teams-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-teams-api%22%29)\n* [`otoroshi-events-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-events-api%22%29)\n* [`otoroshi-canary-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-canary-api%22%29)\n* [`otoroshi-data-exporter-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter-api%22%29)\n* 
[`otoroshi-services-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-services-api%22%29)\n* [`otoroshi-tcp-service-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-tcp-service-api%22%29)\n* [`otoroshi-tenants-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-tenants-api%22%29)\n* [`otoroshi-global-config-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-config-api%22%29)\n* [`otoroshi-apikeys-fs-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-fs-api%22%29)\n* [`otoroshi-apikeys-fg-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-fg-api%22%29)\n* [`otoroshi-apikeys-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-api%22%29)\n* [`otoroshi-statsd-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-statsd-actor%22%29)\n* [`otoroshi-snow-monkey-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-snow-monkey-api%22%29)\n* [`otoroshi-jobs-eventstore-checker`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-jobs-eventstore-checker%22%29)\n* [`otoroshi-initials-certs-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-initials-certs-job%22%29)\n* [`otoroshi-alert-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-alert-actor%22%29)\n* [`otoroshi-alert-actor-supervizer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-alert-actor-supervizer%22%29)\n* [`otoroshi-alerts`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-alerts%22%29)\n* [`otoroshi-apikeys-secrets-rotation-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-secrets-rotation-job%22%29)\n* [`otoroshi-loader`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-loader%22%29)\n* [`otoroshi-api-action`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-api-action%22%29)\n* 
[`otoroshi-analytics-writes-elastic`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-writes-elastic%22%29)\n* [`otoroshi-analytics-reads-elastic`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-reads-elastic%22%29)\n* [`otoroshi-events-actor-supervizer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-events-actor-supervizer%22%29)\n* [`otoroshi-data-exporter`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter%22%29)\n* [`otoroshi-data-exporter-update-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter-update-job%22%29)\n* [`otoroshi-kafka-wrapper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-kafka-wrapper%22%29)\n* [`otoroshi-kafka-connector`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-kafka-connector%22%29)\n* [`otoroshi-analytics-webhook`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-webhook%22%29)\n* [`otoroshi-jobs-software-updates`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-jobs-software-updates%22%29)\n* [`otoroshi-analytics-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-actor%22%29)\n* [`otoroshi-analytics-actor-supervizer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-actor-supervizer%22%29)\n* [`otoroshi-analytics-event`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-event%22%29)\n* [`otoroshi-env`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-env%22%29)\n* [`otoroshi-script-compiler`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-script-compiler%22%29)\n* [`otoroshi-script-manager`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-script-manager%22%29)\n* 
[`otoroshi-script`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-script%22%29)\n* [`otoroshi-tcp-proxy`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-tcp-proxy%22%29)\n* [`otoroshi-custom-timeouts`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-custom-timeouts%22%29)\n* [`otoroshi-client-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-client-config%22%29)\n* [`otoroshi-canary`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-canary%22%29)\n* [`otoroshi-redirection-settings`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-redirection-settings%22%29)\n* [`otoroshi-service-descriptor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-service-descriptor%22%29)\n* [`otoroshi-service-descriptor-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-service-descriptor-datastore%22%29)\n* [`otoroshi-console-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-console-mailer%22%29)\n* [`otoroshi-mailgun-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-mailgun-mailer%22%29)\n* [`otoroshi-mailjet-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-mailjet-mailer%22%29)\n* [`otoroshi-sendgrid-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-sendgrid-mailer%22%29)\n* [`otoroshi-generic-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-generic-mailer%22%29)\n* [`otoroshi-clevercloud-client`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-clevercloud-client%22%29)\n* [`otoroshi-metrics`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-metrics%22%29)\n* 
[`otoroshi-gzip-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-gzip-config%22%29)\n* [`otoroshi-regex-pool`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-regex-pool%22%29)\n* [`otoroshi-ws-client-chooser`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-ws-client-chooser%22%29)\n* [`otoroshi-akka-ws-client`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-akka-ws-client%22%29)\n* [`otoroshi-http-implicits`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-http-implicits%22%29)\n* [`otoroshi-service-group`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-service-group%22%29)\n* [`otoroshi-data-exporter-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter-config%22%29)\n* [`otoroshi-data-exporter-config-migration-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter-config-migration-job%22%29)\n* [`otoroshi-lets-encrypt-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-lets-encrypt-helper%22%29)\n* [`otoroshi-apkikey`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apkikey%22%29)\n* [`otoroshi-error-template`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-error-template%22%29)\n* [`otoroshi-job-manager`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-job-manager%22%29)\n* [`otoroshi-plugins-internal-eventlistener-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-internal-eventlistener-actor%22%29)\n* [`otoroshi-global-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-config%22%29)\n* [`otoroshi-jwks`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-jwks%22%29)\n* [`otoroshi-jwt-verifier`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-jwt-verifier%22%29)\n* 
[`otoroshi-global-jwt-verifier`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-jwt-verifier%22%29)\n* [`otoroshi-snowmonkey-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-snowmonkey-config%22%29)\n* [`otoroshi-webauthn-admin-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-webauthn-admin-datastore%22%29)\n* [`otoroshi-service-datatstore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-service-datatstore%22%29)\n* [`otoroshi-cassandra-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cassandra-datastores%22%29)\n* [`otoroshi-redis-like-store`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-redis-like-store%22%29)\n* [`otoroshi-globalconfig-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-globalconfig-datastore%22%29)\n* [`otoroshi-reactive-pg-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-reactive-pg-datastores%22%29)\n* [`otoroshi-reactive-pg-kv`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-reactive-pg-kv%22%29)\n* [`otoroshi-apikey-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikey-datastore%22%29)\n* [`otoroshi-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-datastore%22%29)\n* [`otoroshi-certificate-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-certificate-datastore%22%29)\n* [`otoroshi-simple-admin-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-simple-admin-datastore%22%29)\n* 
[`otoroshi-atomic-in-memory-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-atomic-in-memory-datastore%22%29)\n* [`otoroshi-lettuce-redis`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-lettuce-redis%22%29)\n* [`otoroshi-lettuce-redis-cluster`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-lettuce-redis-cluster%22%29)\n* [`otoroshi-redis-lettuce-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-redis-lettuce-datastores%22%29)\n* [`otoroshi-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-datastores%22%29)\n* [`otoroshi-file-db-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-file-db-datastores%22%29)\n* [`otoroshi-http-db-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-http-db-datastores%22%29)\n* [`otoroshi-s3-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-s3-datastores%22%29)\n* [`PluginDocumentationGenerator`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22PluginDocumentationGenerator%22%29)\n* [`otoroshi-health-checker`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-health-checker%22%29)\n* [`otoroshi-healthcheck-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-healthcheck-job%22%29)\n* [`otoroshi-healthcheck-local-cache-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-healthcheck-local-cache-job%22%29)\n* [`otoroshi-plugins-response-cache`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-response-cache%22%29)\n* [`otoroshi-oidc-apikey-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-oidc-apikey-config%22%29)\n* [`otoroshi-plugins-maxmind-geolocation-info`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-maxmind-geolocation-info%22%29)\n* 
[`otoroshi-plugins-ipstack-geolocation-info`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-ipstack-geolocation-info%22%29)\n* [`otoroshi-plugins-maxmind-geolocation-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-maxmind-geolocation-helper%22%29)\n* [`otoroshi-plugins-user-agent-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-user-agent-helper%22%29)\n* [`otoroshi-plugins-user-agent-extractor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-user-agent-extractor%22%29)\n* [`otoroshi-global-el`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-el%22%29)\n* [`otoroshi-plugins-oauth1-caller-plugin`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-oauth1-caller-plugin%22%29)\n* [`otoroshi-dynamic-sslcontext`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-dynamic-sslcontext%22%29)\n* [`otoroshi-plugins-access-log-clf`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-access-log-clf%22%29)\n* [`otoroshi-plugins-access-log-json`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-access-log-json%22%29)\n* [`otoroshi-plugins-kafka-access-log`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kafka-access-log%22%29)\n* [`otoroshi-plugins-kubernetes-client`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-client%22%29)\n* [`otoroshi-plugins-kubernetes-ingress-controller-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-ingress-controller-job%22%29)\n* [`otoroshi-plugins-kubernetes-ingress-sync`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-ingress-sync%22%29)\n* [`otoroshi-plugins-kubernetes-crds-controller-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-crds-controller-job%22%29)\n* 
[`otoroshi-plugins-kubernetes-crds-sync`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-crds-sync%22%29)\n* [`otoroshi-cluster`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cluster%22%29)\n* [`otoroshi-crd-validator`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-crd-validator%22%29)\n* [`otoroshi-sidecar-injector`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-sidecar-injector%22%29)\n* [`otoroshi-plugins-kubernetes-cert-sync`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-cert-sync%22%29)\n* [`otoroshi-plugins-kubernetes-to-otoroshi-certs-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-to-otoroshi-certs-job%22%29)\n* [`otoroshi-plugins-otoroshi-certs-to-kubernetes-secrets-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-otoroshi-certs-to-kubernetes-secrets-job%22%29)\n* [`otoroshi-apikeys-workflow-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-workflow-job%22%29)\n* [`otoroshi-cert-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cert-helper%22%29)\n* [`otoroshi-certificates-ocsp`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-certificates-ocsp%22%29)\n* [`otoroshi-claim`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-claim%22%29)\n* [`otoroshi-cert`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cert%22%29)\n* [`otoroshi-ssl-provider`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-ssl-provider%22%29)\n* [`otoroshi-cert-data`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cert-data%22%29)\n* [`otoroshi-client-cert-validator`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-client-cert-validator%22%29)\n* [`otoroshi-ssl-implicits`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-ssl-implicits%22%29)\n* 
[`otoroshi-saml-validator-utils`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-saml-validator-utils%22%29)\n* [`otoroshi-global-saml-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-saml-config%22%29)\n* [`otoroshi-plugins-hmac-caller-plugin`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-hmac-caller-plugin%22%29)\n* [`otoroshi-plugins-hmac-access-validator-plugin`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-hmac-access-validator-plugin%22%29)\n* [`otoroshi-plugins-hasallowedusersvalidator`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-hasallowedusersvalidator%22%29)\n* [`otoroshi-auth-module-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-auth-module-config%22%29)\n* [`otoroshi-basic-auth-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-basic-auth-config%22%29)\n* [`otoroshi-ldap-auth-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-ldap-auth-config%22%29)\n* [`otoroshi-plugins-jsonpath-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-jsonpath-helper%22%29)\n* [`otoroshi-global-oauth2-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-oauth2-config%22%29)\n* [`otoroshi-global-oauth2-module`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-oauth2-module%22%29)\n"},{"name":"end-to-end-mtls.md","id":"/how-to-s/end-to-end-mtls.md","url":"/how-to-s/end-to-end-mtls.html","title":"End-to-end mTLS","content":"# End-to-end mTLS\n\nIf you want to use mTLS on otoroshi, you first need to enable it. It is not enabled by default as it makes the TLS handshake heavier. 
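\n\nOnce client authentication is enabled, callers have to present a certificate trusted by one of the configured CAs during the TLS handshake. As a quick preview (a sketch; the host and file names are illustrative, following the naming used in this article), calling a service through otoroshi with a client certificate using curl looks like:\n\n```sh\n# present a client certificate signed by the frontend CA (illustrative paths)\ncurl --cacert ./ca/ca-frontend.cer --cert ./client/_.frontend.oto.tools.cer --key ./client/_.frontend.oto.tools.key https://api.frontend.oto.tools:8443\n```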
\nTo enable it, just change the following config:\n\n```sh\notoroshi.ssl.fromOutside.clientAuth=None|Want|Need\n```\n\nor using environment variables\n\n```sh\nSSL_OUTSIDE_CLIENT_AUTH=None|Want|Need\n```\n\nYou can use the `Want` setup if you want to have mTLS on some services and no mTLS on others.\n\nYou can also change the trusted CA list sent in the handshake certificate request from the `Danger Zone` in `Tls Settings`.\n\nOtoroshi supports mutual TLS out of the box. mTLS from client to Otoroshi and from Otoroshi to targets are supported. In this article we will see how to configure Otoroshi to use end-to-end mTLS. All code and files used in this article can be found on the [Otoroshi github](https://github.com/MAIF/otoroshi/tree/master/demos/mtls)\n\n### Create certificates\n\nFirst we need to generate some certificates to make the demo work\n\n```sh\nmkdir mtls-demo\ncd mtls-demo\nmkdir ca\nmkdir server\nmkdir client\n\n# create a certificate authority key, use password as pass phrase\nopenssl genrsa -out ./ca/ca-backend.key 4096\n# remove pass phrase\nopenssl rsa -in ./ca/ca-backend.key -out ./ca/ca-backend.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key ./ca/ca-backend.key -out ./ca/ca-backend.cer -subj \"/CN=MTLSB\"\n\n\n# create a certificate authority key, use password as pass phrase\nopenssl genrsa -out ./ca/ca-frontend.key 2048\n# remove pass phrase\nopenssl rsa -in ./ca/ca-frontend.key -out ./ca/ca-frontend.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key ./ca/ca-frontend.key -out ./ca/ca-frontend.cer -subj \"/CN=MTLSF\"\n\n\n# now create the backend cert key, use password as pass phrase\nopenssl genrsa -out ./server/_.backend.oto.tools.key 2048\n# remove pass phrase\nopenssl rsa -in ./server/_.backend.oto.tools.key -out ./server/_.backend.oto.tools.key\n# generate the csr for the certificate\nopenssl req -new -key ./server/_.backend.oto.tools.key 
-sha256 -out ./server/_.backend.oto.tools.csr -subj \"/CN=*.backend.oto.tools\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./server/_.backend.oto.tools.csr -CA ./ca/ca-backend.cer -CAkey ./ca/ca-backend.key -set_serial 1 -out ./server/_.backend.oto.tools.cer\n# verify the certificate, should output './server/_.backend.oto.tools.cer: OK'\nopenssl verify -CAfile ./ca/ca-backend.cer ./server/_.backend.oto.tools.cer\n\n\n# now create the frontend cert key, use password as pass phrase\nopenssl genrsa -out ./server/_.frontend.oto.tools.key 2048\n# remove pass phrase\nopenssl rsa -in ./server/_.frontend.oto.tools.key -out ./server/_.frontend.oto.tools.key\n# generate the csr for the certificate\nopenssl req -new -key ./server/_.frontend.oto.tools.key -sha256 -out ./server/_.frontend.oto.tools.csr -subj \"/CN=*.frontend.oto.tools\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./server/_.frontend.oto.tools.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 1 -out ./server/_.frontend.oto.tools.cer\n# verify the certificate, should output './server/_.frontend.oto.tools.cer: OK'\nopenssl verify -CAfile ./ca/ca-frontend.cer ./server/_.frontend.oto.tools.cer\n\n\n# now create the client cert key for backend, use password as pass phrase\nopenssl genrsa -out ./client/_.backend.oto.tools.key 2048\n# remove pass phrase\nopenssl rsa -in ./client/_.backend.oto.tools.key -out ./client/_.backend.oto.tools.key\n# generate the csr for the certificate\nopenssl req -new -key ./client/_.backend.oto.tools.key -out ./client/_.backend.oto.tools.csr -subj \"/CN=*.backend.oto.tools\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./client/_.backend.oto.tools.csr -CA ./ca/ca-backend.cer -CAkey ./ca/ca-backend.key -set_serial 2 -out ./client/_.backend.oto.tools.cer\n# generate a pem version of the cert and key, use password as password\nopenssl x509 -in client/_.backend.oto.tools.cer -out 
client/_.backend.oto.tools.pem -outform PEM\n\n\n# now create the client cert key for frontend, use password as pass phrase\nopenssl genrsa -out ./client/_.frontend.oto.tools.key 2048\n# remove pass phrase\nopenssl rsa -in ./client/_.frontend.oto.tools.key -out ./client/_.frontend.oto.tools.key\n# generate the csr for the certificate\nopenssl req -new -key ./client/_.frontend.oto.tools.key -out ./client/_.frontend.oto.tools.csr -subj \"/CN=*.frontend.oto.tools\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./client/_.frontend.oto.tools.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 2 -out ./client/_.frontend.oto.tools.cer\n# generate a pkcs12 version of the cert and key, use password as password\n# openssl pkcs12 -export -clcerts -in client/_.frontend.oto.tools.cer -inkey client/_.frontend.oto.tools.key -out client/_.frontend.oto.tools.p12\nopenssl x509 -in client/_.frontend.oto.tools.cer -out client/_.frontend.oto.tools.pem -outform PEM\n```\n\nOnce it's done, you should have something like\n\n```sh\n$ tree\n.\n├── backend.js\n├── ca\n│   ├── ca-backend.cer\n│   ├── ca-backend.key\n│   ├── ca-frontend.cer\n│   └── ca-frontend.key\n├── client\n│   ├── _.backend.oto.tools.cer\n│   ├── _.backend.oto.tools.csr\n│   ├── _.backend.oto.tools.key\n│   ├── _.backend.oto.tools.pem\n│   ├── _.frontend.oto.tools.cer\n│   ├── _.frontend.oto.tools.csr\n│   ├── _.frontend.oto.tools.key\n│   └── _.frontend.oto.tools.pem\n└── server\n ├── _.backend.oto.tools.cer\n ├── _.backend.oto.tools.csr\n ├── _.backend.oto.tools.key\n ├── _.frontend.oto.tools.cer\n ├── _.frontend.oto.tools.csr\n └── _.frontend.oto.tools.key\n\n3 directories, 18 files\n```\n\n### The backend service \n\nnow, let's create a backend service using nodejs. 
Create a file named `backend.js`\n\n```sh\ntouch backend.js\n```\n\nand put the following content in it\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nconst options = { \n key: fs.readFileSync('./server/_.backend.oto.tools.key'), \n cert: fs.readFileSync('./server/_.backend.oto.tools.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n}; \n\nconst server = https.createServer(options, (req, res) => { \n res.writeHead(200, {\n 'Content-Type': 'application/json'\n }); \n res.end(JSON.stringify({ message: 'Hello World!' }) + \"\\n\"); \n}).listen(8444, () => {\n console.log('Server listening:', `https://localhost:${server.address().port}`);\n});\n```\n\nTo run the server, just do \n\n```sh\nnode ./backend.js\n```\n\nNow you can try your server with\n\n```sh\ncurl --cacert ./ca/ca-backend.cer 'https://api.backend.oto.tools:8444/'\n```\n\nThis should output:\n```json\n{ \"message\": \"Hello World!\" }\n```\n\nNow modify your backend server to require that the client provides a client certificate:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nconst options = { \n key: fs.readFileSync('./server/_.backend.oto.tools.key'), \n cert: fs.readFileSync('./server/_.backend.oto.tools.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n requestCert: true, \n rejectUnauthorized: true\n}; \n\nconst server = https.createServer(options, (req, res) => { \n console.log('Client certificate CN: ', req.socket.getPeerCertificate().subject.CN);\n res.writeHead(200, {\n 'Content-Type': 'application/json'\n }); \n res.end(JSON.stringify({ message: 'Hello World!' 
}) + \"\\n\"); \n}).listen(8444, () => {\n console.log('Server listening:', `https://localhost:${server.address().port}`);\n});\n```\n\nYou can test your new server with\n\n```sh\ncurl \\\n --cacert ./ca/ca-backend.cer \\\n --cert ./client/_.backend.oto.tools.pem \\\n --key ./client/_.backend.oto.tools.key 'https://api.backend.oto.tools:8444/'\n```\n\nThe output should be:\n\n```json\n{ \"message\": \"Hello World!\" }\n```\n\n### Otoroshi setup\n\nDownload the latest version of the Otoroshi jar and run it like\n\n```sh\n java \\\n -Dotoroshi.adminPassword=password \\\n -Dotoroshi.ssl.fromOutside.clientAuth=Want \\\n -jar -Dotoroshi.storage=file otoroshi.jar\n\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / password\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n[info] otoroshi-env - Generating a self signed SSL certificate for https://*.oto.tools ...\n```\n\nand log into Otoroshi with the credentials `admin@otoroshi.io / password` displayed in the logs. \n\nOnce logged in, navigate to the routes page and create a new route.\n\n* Set a name then validate the creation\n* On the frontend node, add `api.frontend.oto.tools` to the list of domains\n* On the backend node, replace the target with `api.backend.oto.tools` as hostname and `8444` as port. \n\nSave the route and try to call it.\n\n```sh\ncurl 'http://api.frontend.oto.tools:8080/'\n```\n\nThis should output:\n```json\n{\"Otoroshi-Error\": \"Something went wrong, you should try later. 
Thanks for your understanding.\"}\n```\n\nYou get this error because Otoroshi knows neither the server certificate nor the client certificate expected by the backend server.\n\nWe must declare the client and server certificates for `https://api.backend.oto.tools` to Otoroshi. \n\nGo to the [certificates page](http://otoroshi.oto.tools:8080/bo/dashboard/certificates) and create a new item. Drag and drop the content of the `./client/_.backend.oto.tools.cer` and `./client/_.backend.oto.tools.key` files, respectively in `Certificate full chain` and `Certificate private key`.\n\nIf you prefer to use the API, you can create an Otoroshi certificate automatically from a PEM bundle.\n\n```sh\ncat ./server/_.backend.oto.tools.cer ./ca/ca-backend.cer ./server/_.backend.oto.tools.key | curl \\\n -H 'Content-Type: text/plain' -X POST \\\n --data-binary @- \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n http://otoroshi-api.oto.tools:8080/api/certificates/_bundle \n```\n\nNow we have to expose `https://api.frontend.oto.tools:8443` using Otoroshi. \n\nCreate a second item. Copy and paste the content of `./server/_.frontend.oto.tools.cer` and `./server/_.frontend.oto.tools.key` respectively in `Certificate full chain` and `Certificate private key`.\n\nIf you don't want to bother with UI copy/paste, you can use the import bundle API endpoint to create an Otoroshi certificate automatically from a PEM bundle.\n\n```sh\ncat ./server/_.frontend.oto.tools.cer ./ca/ca-frontend.cer ./server/_.frontend.oto.tools.key | curl \\\n -H 'Content-Type: text/plain' -X POST \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n --data-binary @- \\\n http://otoroshi-api.oto.tools:8080/api/certificates/_bundle\n```\n\nOnce created, go back to your route. 
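As a side note, the `_bundle` endpoint used above just receives the raw concatenation of PEM blocks produced by `cat`. A minimal sketch of how such a bundle can be split back into certificates and private keys (illustrative Python, not Otoroshi's actual parser):

```python
import re

# Match one PEM block; the backreference \1 pairs each BEGIN with its END label.
PEM_BLOCK = re.compile(
    r"-----BEGIN ([A-Z ]+)-----\n.*?-----END \1-----",
    re.DOTALL,
)

def split_pem_bundle(bundle: str):
    """Split a concatenated PEM bundle into (certificates, private keys)."""
    blocks = [(m.group(1), m.group(0)) for m in PEM_BLOCK.finditer(bundle)]
    certs = [body for label, body in blocks if "CERTIFICATE" in label]
    keys = [body for label, body in blocks if "PRIVATE KEY" in label]
    return certs, keys

# dummy bundle shaped like `cat cert.cer ca.cer key.key` (fake base64 content)
bundle = (
    "-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n"
    "-----BEGIN RSA PRIVATE KEY-----\nCCCC\n-----END RSA PRIVATE KEY-----\n"
)
certs, keys = split_pem_bundle(bundle)
assert len(certs) == 2 and len(keys) == 1
```

This is why the order of the `cat` arguments doesn't need any separator: each PEM block is self-delimiting.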
On the target of the backend node, we have to enable Otoroshi's custom TLS setup.\n\n* Click on the backend node\n* Click on your target\n* Click on the Show advanced settings button\n* Click on Custom TLS setup\n* Enable the section\n* In the list of certificates, select the backend certificate\n* In the list of trusted certificates, select the frontend certificate\n* Save your route\n\nTry the following command\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer 'https://api.frontend.oto.tools:8443/'\n```\nThe output should be\n\n```json\n{\"message\":\"Hello World!\"}\n```\n\nNow we want to require a client certificate for `api.frontend.oto.tools`. \n\nSearch the list of plugins and add the `Client Certificate Only` plugin to your route.\n\nNow if you retry \n\n```sh\ncurl --cacert ./ca/ca-frontend.cer 'https://api.frontend.oto.tools:8443/'\n```\nThe output should be\n\n```json\n{\"Otoroshi-Error\":\"bad request\"}\n```\n\nYou get an error because no client certificate is passed with the request. But if you pass the `./client/_.frontend.oto.tools.pem` client cert and its key in your curl call\n\n```sh\ncurl 'https://api.frontend.oto.tools:8443' \\\n --cacert ./ca/ca-frontend.cer \\\n --cert ./client/_.frontend.oto.tools.pem \\\n --key ./client/_.frontend.oto.tools.key\n```\nThe output should be\n\n```json\n{\"message\":\"Hello World!\"}\n```\n\n### Client certificate matching plugin\n\nOtoroshi can restrict and check all incoming client certificates on a route.\n\nSearch the list of plugins for the `Client certificate matching` plugin and add it to the flow.\n\nSave the route and retry your call.\n\n```sh\ncurl 'https://api.frontend.oto.tools:8443' \\\n --cacert ./ca/ca-frontend.cer \\\n --cert ./client/_.frontend.oto.tools.pem \\\n --key ./client/_.frontend.oto.tools.key\n```\nThe output should be\n\n```json\n{\"Otoroshi-Error\":\"bad request\"}\n```\n\nOur client certificate is not matched by Otoroshi. 
We have to add the subject DN in the configuration of the `Client certificate matching` plugin to authorize it.\n\n```json\n{\n \"HasClientCertMatchingValidator\": {\n \"serialNumbers\": [],\n \"subjectDNs\": [\n \"CN=*.frontend.oto.tools\"\n ],\n \"issuerDNs\": [],\n \"regexSubjectDNs\": [],\n \"regexIssuerDNs\": []\n }\n}\n```\n\nSave the route and retry your call.\n\n```sh\ncurl 'https://api.frontend.oto.tools:8443' \\\n --cacert ./ca/ca-frontend.cer \\\n --cert ./client/_.frontend.oto.tools.pem \\\n --key ./client/_.frontend.oto.tools.key\n```\nThe output should be\n\n```json\n{\"message\":\"Hello World!\"}\n```\n\n\n"},{"name":"export-alerts-using-mailgun.md","id":"/how-to-s/export-alerts-using-mailgun.md","url":"/how-to-s/export-alerts-using-mailgun.html","title":"Send alerts using mailgun","content":"# Send alerts using Mailgun\n\nAll Otoroshi alerts can be sent through different channels.\nOne of them is to send a group of specific alerts via email.\n\nTo enable this behaviour, let's start by creating an event exporter.\n\nIn this tutorial, we will assume that you already have a Mailgun account with an API key and a domain.\n\n## Create a Mailgun exporter\n\nLet's create an exporter. By default, the exporter will export all events generated by Otoroshi.\n\n1. Go ahead, and navigate to http://otoroshi.oto.tools:8080\n2. Click on the cog icon on the top right\n3. Then the `Exporters` button\n4. Add a new configuration by clicking on the `Add item` button\n5. Select `mailer` in the `type` selector field\n6. Jump to `Exporter config` and select the `Mailgun` option\n7. 
Set the following values:\n* `EU` : false/true depending on your Mailgun configuration\n* `Mailgun api key` : your-mailgun-api-key\n* `Mailgun domain` : your-mailgun-domain\n* `Email addresses` : list of the recipient addresses\n\nWith this configuration, all Otoroshi events will be sent to your listed addresses (which we don't recommend).\n\nTo filter events on the `Alerts` type, we need to add the following configuration inside the `Filtering and projection` section (to learn more about this section, read this @ref:[part](../entities/data-exporters.md#matching-and-projections)).\n\n```json\n{\n \"include\": [\n { \"@type\": \"AlertEvent\" }\n ],\n \"exclude\": []\n}\n``` \n\nSave at the bottom of the page and enable the exporter (at the top of the page or in the list of exporters). You may need to wait a few seconds to receive the first alerts.\n\nThe **projection** field is useful when you want to filter the fields contained in each alert sent.\n\nThe `Projection` field is a JSON object where you list the fields to keep for each alert.\n\n```json\n{\n \"@type\": true,\n \"@timestamp\": true,\n \"@id\": true\n}\n```\n\nWith this example, only `@type`, `@timestamp` and `@id` will be sent to the addresses of your recipients."},{"name":"export-events-to-elastic.md","id":"/how-to-s/export-events-to-elastic.md","url":"/how-to-s/export-events-to-elastic.html","title":"Export events to Elasticsearch","content":"# Export events to Elasticsearch\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Deploy an Elasticsearch and Kibana stack on Docker\n\nLet's start by creating an Elasticsearch and Kibana stack on our machine (if it's already done for you, you can skip this section).\n\nTo start an Elasticsearch container for development or testing, run:\n\n```sh\ndocker network create elastic\ndocker pull docker.elastic.co/elasticsearch/elasticsearch:7.15.1\ndocker run --name es01-test --net elastic -p 
9200:9200 -p 9300:9300 -e \"discovery.type=single-node\" docker.elastic.co/elasticsearch/elasticsearch:7.15.1\n```\n\n```sh\ndocker pull docker.elastic.co/kibana/kibana:7.15.1\ndocker run --name kib01-test --net elastic -p 5601:5601 -e \"ELASTICSEARCH_HOSTS=http://es01-test:9200\" docker.elastic.co/kibana/kibana:7.15.1\n```\n\nTo access Kibana, go to @link:[http://localhost:5601](http://localhost:5601) { open=new }.\n\n### Create an Elasticsearch exporter\n\nLet's create an exporter. By default, the exporter will export all events generated by Otoroshi.\n\n1. Go ahead, and navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new }\n2. Click on the cog icon on the top right\n3. Then the `Exporters` button\n4. Add a new configuration by clicking on the `Add item` button\n5. Select `elastic` in the `type` selector field\n6. Jump to `Exporter config`\n7. Set the following values: `Cluster URI` -> `http://localhost:9200`\n\nThen test your configuration by clicking on the `Check connection` button. This should output a modal with the Elasticsearch version and the number of loaded docs.\n\nSave at the bottom of the page and enable the exporter (at the top of the page or in the list of exporters).\n\n### Testing your configuration\n\nOne simple way to test is to configure Otoroshi to read from our Elasticsearch instance.\n\nNavigate to the danger zone (click on the cog on the top right and scroll to `danger zone`). Jump to the `Analytics: Elastic dashboard datasource (read)` section.\n\nSet the following values: `Cluster URI` -> `http://localhost:9200`\n\nThen click on the `Check connection` button. This should output the same result as the previous part. Save the global configuration and navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/stats](http://otoroshi.oto.tools:8080/bo/dashboard/stats) { open=new }.\n\nThis should output a list of graphs.\n\n### Advanced usage\n\nBy default, an exporter handles all events from Otoroshi. 
In some cases, you need to filter the events sent to Elasticsearch.\n\nTo filter the events, jump to the `Filtering and projection` field in the exporter view. Otoroshi can include only certain kinds of events or exclude a list of events (to learn more about this section, read this @ref:[part](../entities/data-exporters.md#matching-and-projections)). \n\nAn example which keeps only events with a field `@type` of value `AlertEvent`:\n```json\n{\n \"include\": [\n { \"@type\": \"AlertEvent\" }\n ],\n \"exclude\": []\n}\n```\nAn example which excludes only events with a field `@type` of value `GatewayEvent`:\n```json\n{\n \"exclude\": [\n { \"@type\": \"GatewayEvent\" }\n ],\n \"include\": []\n}\n```\n\nThe next field is the **Projection**. This field is a JSON object where you list the fields to keep for each event.\n\n```json\n{\n \"@type\": true,\n \"@timestamp\": true,\n \"@id\": true\n}\n```\n\nWith this example, only `@type`, `@timestamp` and `@id` will be sent to Elasticsearch.\n\n### Debug your configuration\n\n#### Missing user rights on Elasticsearch\n\nWhen creating an exporter, Otoroshi tries to reach the index route of the Elasticsearch instance. If you have specific access-rights management on Elasticsearch, you have two possibilities:\n\n- grant full access to the user Otoroshi uses to write to Elasticsearch\n- set the version of Elasticsearch inside the `Version` field of your exporter.\n\n#### No events appear in your Elasticsearch\n\nWhen creating an exporter, Otoroshi tries to push the index template to Elasticsearch. If this push fails, every subsequent push of events will fail too and your database will remain empty. 
\n\nTo fix this problem, you can try to send the index template with the `Manually apply index template` button in your exporter."},{"name":"import-export-otoroshi-datastore.md","id":"/how-to-s/import-export-otoroshi-datastore.md","url":"/how-to-s/import-export-otoroshi-datastore.html","title":"Import and export Otoroshi datastore","content":"# Import and export Otoroshi datastore\n\n### Start Otoroshi with an initial datastore\n\nLet's start by downloading the latest Otoroshi\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nBy default, Otoroshi starts with the domain `oto.tools`, which targets `127.0.0.1`. Now you are almost ready to run Otoroshi for the first time; we want to run it with some initial data.\n\nTo do that, you need to add the **otoroshi.importFrom** setting to the Otoroshi configuration (or the `$APP_IMPORT_FROM` env variable). It can be a file path or a URL. The content of the initial datastore can look something like the following.\n\n```json\n{\n \"label\": \"Otoroshi initial datastore\",\n \"admins\": [],\n \"simpleAdmins\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"username\": \"admin@otoroshi.io\",\n \"password\": \"$2a$10$iQRkqjKTW.5XH8ugQrnMDeUstx4KqmIeQ58dHHdW2Dv1FkyyAs4C.\",\n \"label\": \"Otoroshi Admin\",\n \"createdAt\": 1634651307724,\n \"type\": \"SIMPLE\",\n \"metadata\": {},\n \"tags\": [],\n \"rights\": [\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\"\n ]\n }\n ]\n }\n ],\n \"serviceGroups\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"admin-api-group\",\n \"name\": \"Otoroshi Admin Api group\",\n \"description\": \"No description\",\n \"tags\": [],\n \"metadata\": {}\n },\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"default\",\n \"name\": \"default-group\",\n \"description\": \"The default service group\",\n \"tags\": [],\n 
\"metadata\": {}\n }\n ],\n \"apiKeys\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"clientId\": \"admin-api-apikey-id\",\n \"clientSecret\": \"admin-api-apikey-secret\",\n \"clientName\": \"Otoroshi Backoffice ApiKey\",\n \"description\": \"The apikey use by the Otoroshi UI\",\n \"authorizedGroup\": \"admin-api-group\",\n \"authorizedEntities\": [\n \"group_admin-api-group\"\n ],\n \"enabled\": true,\n \"readOnly\": false,\n \"allowClientIdOnly\": false,\n \"throttlingQuota\": 10000,\n \"dailyQuota\": 10000000,\n \"monthlyQuota\": 10000000,\n \"constrainedServicesOnly\": false,\n \"restrictions\": {\n \"enabled\": false,\n \"allowLast\": true,\n \"allowed\": [],\n \"forbidden\": [],\n \"notFound\": []\n },\n \"rotation\": {\n \"enabled\": false,\n \"rotationEvery\": 744,\n \"gracePeriod\": 168,\n \"nextSecret\": null\n },\n \"validUntil\": null,\n \"tags\": [],\n \"metadata\": {}\n }\n ],\n \"serviceDescriptors\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"admin-api-service\",\n \"groupId\": \"admin-api-group\",\n \"groups\": [\n \"admin-api-group\"\n ],\n \"name\": \"otoroshi-admin-api\",\n \"description\": \"\",\n \"env\": \"prod\",\n \"domain\": \"oto.tools\",\n \"subdomain\": \"otoroshi-api\",\n \"targetsLoadBalancing\": {\n \"type\": \"RoundRobin\"\n },\n \"targets\": [\n {\n \"host\": \"127.0.0.1:8080\",\n \"scheme\": \"http\",\n \"weight\": 1,\n \"mtlsConfig\": {\n \"certs\": [],\n \"trustedCerts\": [],\n \"mtls\": false,\n \"loose\": false,\n \"trustAll\": false\n },\n \"tags\": [],\n \"metadata\": {},\n \"protocol\": \"HTTP/1.1\",\n \"predicate\": {\n \"type\": \"AlwaysMatch\"\n },\n \"ipAddress\": null\n }\n ],\n \"root\": \"/\",\n \"matchingRoot\": null,\n \"stripPath\": true,\n \"localHost\": \"127.0.0.1:8080\",\n \"localScheme\": \"http\",\n \"redirectToLocal\": false,\n \"enabled\": true,\n \"userFacing\": false,\n \"privateApp\": false,\n 
\"forceHttps\": false,\n \"logAnalyticsOnServer\": false,\n \"useAkkaHttpClient\": true,\n \"useNewWSClient\": false,\n \"tcpUdpTunneling\": false,\n \"detectApiKeySooner\": false,\n \"maintenanceMode\": false,\n \"buildMode\": false,\n \"strictlyPrivate\": false,\n \"enforceSecureCommunication\": true,\n \"sendInfoToken\": true,\n \"sendStateChallenge\": true,\n \"sendOtoroshiHeadersBack\": true,\n \"readOnly\": false,\n \"xForwardedHeaders\": false,\n \"overrideHost\": true,\n \"allowHttp10\": true,\n \"letsEncrypt\": false,\n \"secComHeaders\": {\n \"claimRequestName\": null,\n \"stateRequestName\": null,\n \"stateResponseName\": null\n },\n \"secComTtl\": 30000,\n \"secComVersion\": 1,\n \"secComInfoTokenVersion\": \"Legacy\",\n \"secComExcludedPatterns\": [],\n \"securityExcludedPatterns\": [],\n \"publicPatterns\": [\n \"/health\",\n \"/metrics\"\n ],\n \"privatePatterns\": [],\n \"additionalHeaders\": {\n \"Host\": \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"additionalHeadersOut\": {},\n \"missingOnlyHeadersIn\": {},\n \"missingOnlyHeadersOut\": {},\n \"removeHeadersIn\": [],\n \"removeHeadersOut\": [],\n \"headersVerification\": {},\n \"matchingHeaders\": {},\n \"ipFiltering\": {\n \"whitelist\": [],\n \"blacklist\": []\n },\n \"api\": {\n \"exposeApi\": false\n },\n \"healthCheck\": {\n \"enabled\": false,\n \"url\": \"/\"\n },\n \"clientConfig\": {\n \"useCircuitBreaker\": true,\n \"retries\": 1,\n \"maxErrors\": 20,\n \"retryInitialDelay\": 50,\n \"backoffFactor\": 2,\n \"callTimeout\": 30000,\n \"callAndStreamTimeout\": 120000,\n \"connectionTimeout\": 10000,\n \"idleTimeout\": 60000,\n \"globalTimeout\": 30000,\n \"sampleInterval\": 2000,\n \"proxy\": {},\n \"customTimeouts\": [],\n \"cacheConnectionSettings\": {\n \"enabled\": false,\n \"queueSize\": 2048\n }\n },\n \"canary\": {\n \"enabled\": false,\n \"traffic\": 0.2,\n \"targets\": [],\n \"root\": \"/\"\n },\n \"gzip\": {\n \"enabled\": false,\n \"excludedPatterns\": [],\n \"whiteList\": 
[\n \"text/*\",\n \"application/javascript\",\n \"application/json\"\n ],\n \"blackList\": [],\n \"bufferSize\": 8192,\n \"chunkedThreshold\": 102400,\n \"compressionLevel\": 5\n },\n \"metadata\": {},\n \"tags\": [],\n \"chaosConfig\": {\n \"enabled\": false,\n \"largeRequestFaultConfig\": null,\n \"largeResponseFaultConfig\": null,\n \"latencyInjectionFaultConfig\": null,\n \"badResponsesFaultConfig\": null\n },\n \"jwtVerifier\": {\n \"type\": \"ref\",\n \"ids\": [],\n \"id\": null,\n \"enabled\": false,\n \"excludedPatterns\": []\n },\n \"secComSettings\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComUseSameAlgo\": true,\n \"secComAlgoChallengeOtoToBack\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComAlgoChallengeBackToOto\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComAlgoInfoToken\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"cors\": {\n \"enabled\": false,\n \"allowOrigin\": \"*\",\n \"exposeHeaders\": [],\n \"allowHeaders\": [],\n \"allowMethods\": [],\n \"excludedPatterns\": [],\n \"maxAge\": null,\n \"allowCredentials\": true\n },\n \"redirection\": {\n \"enabled\": false,\n \"code\": 303,\n \"to\": \"https://www.otoroshi.io\"\n },\n \"authConfigRef\": null,\n \"clientValidatorRef\": null,\n \"transformerRef\": null,\n \"transformerRefs\": [],\n \"transformerConfig\": {},\n \"apiKeyConstraints\": {\n \"basicAuth\": {\n \"enabled\": true,\n \"headerName\": null,\n \"queryName\": null\n },\n \"customHeadersAuth\": {\n \"enabled\": true,\n \"clientIdHeaderName\": null,\n \"clientSecretHeaderName\": null\n },\n \"clientIdAuth\": {\n \"enabled\": true,\n \"headerName\": null,\n \"queryName\": null\n },\n \"jwtAuth\": {\n \"enabled\": true,\n \"secretSigned\": true,\n \"keyPairSigned\": true,\n 
\"includeRequestAttributes\": false,\n \"maxJwtLifespanSecs\": null,\n \"headerName\": null,\n \"queryName\": null,\n \"cookieName\": null\n },\n \"routing\": {\n \"noneTagIn\": [],\n \"oneTagIn\": [],\n \"allTagsIn\": [],\n \"noneMetaIn\": {},\n \"oneMetaIn\": {},\n \"allMetaIn\": {},\n \"noneMetaKeysIn\": [],\n \"oneMetaKeyIn\": [],\n \"allMetaKeysIn\": []\n }\n },\n \"restrictions\": {\n \"enabled\": false,\n \"allowLast\": true,\n \"allowed\": [],\n \"forbidden\": [],\n \"notFound\": []\n },\n \"accessValidator\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excludedPatterns\": []\n },\n \"preRouting\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excludedPatterns\": []\n },\n \"plugins\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excluded\": []\n },\n \"hosts\": [\n \"otoroshi-api.oto.tools\"\n ],\n \"paths\": [],\n \"handleLegacyDomain\": true,\n \"issueCert\": false,\n \"issueCertCA\": null\n }\n ],\n \"errorTemplates\": [],\n \"jwtVerifiers\": [],\n \"authConfigs\": [],\n \"certificates\": [],\n \"clientValidators\": [],\n \"scripts\": [],\n \"tcpServices\": [],\n \"dataExporters\": [],\n \"tenants\": [\n {\n \"id\": \"default\",\n \"name\": \"Default organization\",\n \"description\": \"The default organization\",\n \"metadata\": {},\n \"tags\": []\n }\n ],\n \"teams\": [\n {\n \"id\": \"default\",\n \"tenant\": \"default\",\n \"name\": \"Default Team\",\n \"description\": \"The default Team of the default organization\",\n \"metadata\": {},\n \"tags\": []\n }\n ]\n}\n```\n\nRun an Otoroshi with the previous file as parameter.\n\n```sh\njava \\\n -Dotoroshi.adminPassword=password \\\n -Dotoroshi.importFrom=./initial-state.json \\\n -jar otoroshi.jar \n```\n\nThis should show\n\n```sh\n...\n[info] otoroshi-env - Importing from: ./initial-state.json\n[info] otoroshi-env - Successful import !\n...\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n...\n```\n\n> Warning : when you 
are using Otoroshi with a datastore other than file or in-memory, Otoroshi will not reload the initialization file. If you expect it to be re-imported, you have to manually clean your store first.\n\n### Export the current datastore via the danger zone\n\nWhen Otoroshi is running, you can back up the global configuration store from the UI. Navigate to your instance (in our case @link:[http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone) { open=new }) and scroll to the bottom of the page. \n\nClick on the `Full export` button to download the full global configuration.\n\n### Import a datastore from file via the danger zone\n\nWhen Otoroshi is running, you can restore a global configuration from the UI. Navigate to your instance (in our case @link:[http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone) { open=new }) and scroll to the bottom of the page. \n\nClick on the `Recover from a full export file` button to apply all configurations from a file.\n\n### Export the current datastore with the Admin API\n\nOtoroshi exposes its own Admin API to manage Otoroshi resources. To call this API, you need an API key with rights on the `Otoroshi Admin Api group`. This group includes the `Otoroshi-admin-api` service that you can find on the services page. \n\nBy default, and with our initial configuration, Otoroshi has already created an API key named `Otoroshi Backoffice ApiKey`. 
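As a reminder, curl's `-u` flag used in the commands below simply sends an HTTP Basic `Authorization` header built from `clientId:clientSecret`. A quick sketch of what goes over the wire (plain Python; the helper name is ours):

```python
import base64

def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Build the Authorization header value that `curl -u id:secret` sends."""
    token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return f"Basic {token}"

# the default admin API credentials used throughout this guide
header = basic_auth_header("admin-api-apikey-id", "admin-api-apikey-secret")
# decoding the token gives back the original id:secret pair
assert base64.b64decode(header.split(" ", 1)[1]) == b"admin-api-apikey-id:admin-api-apikey-secret"
```

Any HTTP client able to set this header can therefore call the Admin API without curl.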
You can verify the rights of an API key on its page by checking the `Authorized On` field (you should find the `Otoroshi Admin Api group` inside).\n\nThe default API key id and secret are `admin-api-apikey-id` and `admin-api-apikey-secret`.\n\nRun the next command with these values.\n\n```sh\ncurl \\\n -H 'Content-Type: application/json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json'\n```\n\nWhen calling `/api/otoroshi.json`, the response should be the current datastore including the service descriptors, the API keys, all other resources like certificates and authentication modules, and the global config (the content of the danger zone form).\n\n### Import the current datastore with the Admin API\n\nIn the same way as in the previous section, you can erase the current datastore with a POST request. The route is the same: `/api/otoroshi.json`.\n\n```sh\ncurl \\\n -X POST \\\n -H 'Content-Type: application/json' \\\n -d '{\n \"label\" : \"Otoroshi export\",\n \"dateRaw\" : 1634714811217,\n \"date\" : \"2021-10-20 09:26:51\",\n \"stats\" : {\n \"calls\" : 4,\n \"dataIn\" : 0,\n \"dataOut\" : 97991\n },\n \"config\" : {\n \"tags\" : [ ],\n \"letsEncryptSettings\" : {\n \"enabled\" : false,\n \"server\" : \"acme://letsencrypt.org/staging\",\n \"emails\" : [ ],\n \"contacts\" : [ ],\n \"publicKey\" : \"\",\n \"privateKey\" : \"\"\n },\n \"lines\" : [ \"prod\" ],\n \"maintenanceMode\" : false,\n \"enableEmbeddedMetrics\" : true,\n \"streamEntityOnly\" : true,\n \"autoLinkToDefaultGroup\" : true,\n \"limitConcurrentRequests\" : false,\n \"maxConcurrentRequests\" : 1000,\n \"maxHttp10ResponseSize\" : 4194304,\n \"useCircuitBreakers\" : true,\n \"apiReadOnly\" : false,\n \"u2fLoginOnly\" : false,\n \"trustXForwarded\" : true,\n \"ipFiltering\" : {\n \"whitelist\" : [ ],\n \"blacklist\" : [ ]\n },\n \"throttlingQuota\" : 10000000,\n \"perIpThrottlingQuota\" : 10000000,\n \"analyticsWebhooks\" : [ ],\n 
\"alertsWebhooks\" : [ ],\n \"elasticWritesConfigs\" : [ ],\n \"elasticReadsConfig\" : null,\n \"alertsEmails\" : [ ],\n \"logAnalyticsOnServer\" : false,\n \"useAkkaHttpClient\" : false,\n \"endlessIpAddresses\" : [ ],\n \"statsdConfig\" : null,\n \"kafkaConfig\" : {\n \"servers\" : [ ],\n \"keyPass\" : null,\n \"keystore\" : null,\n \"truststore\" : null,\n \"topic\" : \"otoroshi-events\",\n \"mtlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n },\n \"backOfficeAuthRef\" : null,\n \"mailerSettings\" : {\n \"type\" : \"none\"\n },\n \"cleverSettings\" : null,\n \"maxWebhookSize\" : 100,\n \"middleFingers\" : false,\n \"maxLogsSize\" : 10000,\n \"otoroshiId\" : \"83539cbca-76ee-4abc-ad31-a4794e873848\",\n \"snowMonkeyConfig\" : {\n \"enabled\" : false,\n \"outageStrategy\" : \"OneServicePerGroup\",\n \"includeUserFacingDescriptors\" : false,\n \"dryRun\" : false,\n \"timesPerDay\" : 1,\n \"startTime\" : \"09:00:00.000\",\n \"stopTime\" : \"23:59:59.000\",\n \"outageDurationFrom\" : 600000,\n \"outageDurationTo\" : 3600000,\n \"targetGroups\" : [ ],\n \"chaosConfig\" : {\n \"enabled\" : true,\n \"largeRequestFaultConfig\" : null,\n \"largeResponseFaultConfig\" : null,\n \"latencyInjectionFaultConfig\" : {\n \"ratio\" : 0.2,\n \"from\" : 500,\n \"to\" : 5000\n },\n \"badResponsesFaultConfig\" : {\n \"ratio\" : 0.2,\n \"responses\" : [ {\n \"status\" : 502,\n \"body\" : \"{\\\"error\\\":\\\"Nihonzaru everywhere ...\\\"}\",\n \"headers\" : {\n \"Content-Type\" : \"application/json\"\n }\n } ]\n }\n }\n },\n \"scripts\" : {\n \"enabled\" : false,\n \"transformersRefs\" : [ ],\n \"transformersConfig\" : { },\n \"validatorRefs\" : [ ],\n \"validatorConfig\" : { },\n \"preRouteRefs\" : [ ],\n \"preRouteConfig\" : { },\n \"sinkRefs\" : [ ],\n \"sinkConfig\" : { },\n \"jobRefs\" : [ ],\n \"jobConfig\" : { }\n },\n \"geolocationSettings\" : {\n \"type\" : \"none\"\n },\n \"userAgentSettings\" : 
{\n \"enabled\" : false\n },\n \"autoCert\" : {\n \"enabled\" : false,\n \"replyNicely\" : false,\n \"caRef\" : null,\n \"allowed\" : [ ],\n \"notAllowed\" : [ ]\n },\n \"tlsSettings\" : {\n \"defaultDomain\" : null,\n \"randomIfNotFound\" : false,\n \"includeJdkCaServer\" : true,\n \"includeJdkCaClient\" : true,\n \"trustedCAsServer\" : [ ]\n },\n \"plugins\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excluded\" : [ ]\n },\n \"metadata\" : { }\n },\n \"admins\" : [ ],\n \"simpleAdmins\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"username\" : \"admin@otoroshi.io\",\n \"password\" : \"$2a$10$iQRkqjKTW.5XH8ugQrnMDeUstx4KqmIeQ58dHHdW2Dv1FkyyAs4C.\",\n \"label\" : \"Otoroshi Admin\",\n \"createdAt\" : 1634651307724,\n \"type\" : \"SIMPLE\",\n \"metadata\" : { },\n \"tags\" : [ ],\n \"rights\" : [ {\n \"tenant\" : \"*:rw\",\n \"teams\" : [ \"*:rw\" ]\n } ]\n } ],\n \"serviceGroups\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"admin-api-group\",\n \"name\" : \"Otoroshi Admin Api group\",\n \"description\" : \"No description\",\n \"tags\" : [ ],\n \"metadata\" : { }\n }, {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"default\",\n \"name\" : \"default-group\",\n \"description\" : \"The default service group\",\n \"tags\" : [ ],\n \"metadata\" : { }\n } ],\n \"apiKeys\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"clientId\" : \"admin-api-apikey-id\",\n \"clientSecret\" : \"admin-api-apikey-secret\",\n \"clientName\" : \"Otoroshi Backoffice ApiKey\",\n \"description\" : \"The apikey use by the Otoroshi UI\",\n \"authorizedGroup\" : \"admin-api-group\",\n \"authorizedEntities\" : [ \"group_admin-api-group\" ],\n \"enabled\" : true,\n \"readOnly\" : false,\n \"allowClientIdOnly\" : false,\n \"throttlingQuota\" : 10000,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" 
: 10000000,\n \"constrainedServicesOnly\" : false,\n \"restrictions\" : {\n \"enabled\" : false,\n \"allowLast\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"notFound\" : [ ]\n },\n \"rotation\" : {\n \"enabled\" : false,\n \"rotationEvery\" : 744,\n \"gracePeriod\" : 168,\n \"nextSecret\" : null\n },\n \"validUntil\" : null,\n \"tags\" : [ ],\n \"metadata\" : { }\n } ],\n \"serviceDescriptors\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"admin-api-service\",\n \"groupId\" : \"admin-api-group\",\n \"groups\" : [ \"admin-api-group\" ],\n \"name\" : \"otoroshi-admin-api\",\n \"description\" : \"\",\n \"env\" : \"prod\",\n \"domain\" : \"oto.tools\",\n \"subdomain\" : \"otoroshi-api\",\n \"targetsLoadBalancing\" : {\n \"type\" : \"RoundRobin\"\n },\n \"targets\" : [ {\n \"host\" : \"127.0.0.1:8080\",\n \"scheme\" : \"http\",\n \"weight\" : 1,\n \"mtlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n },\n \"tags\" : [ ],\n \"metadata\" : { },\n \"protocol\" : \"HTTP/1.1\",\n \"predicate\" : {\n \"type\" : \"AlwaysMatch\"\n },\n \"ipAddress\" : null\n } ],\n \"root\" : \"/\",\n \"matchingRoot\" : null,\n \"stripPath\" : true,\n \"localHost\" : \"127.0.0.1:8080\",\n \"localScheme\" : \"http\",\n \"redirectToLocal\" : false,\n \"enabled\" : true,\n \"userFacing\" : false,\n \"privateApp\" : false,\n \"forceHttps\" : false,\n \"logAnalyticsOnServer\" : false,\n \"useAkkaHttpClient\" : true,\n \"useNewWSClient\" : false,\n \"tcpUdpTunneling\" : false,\n \"detectApiKeySooner\" : false,\n \"maintenanceMode\" : false,\n \"buildMode\" : false,\n \"strictlyPrivate\" : false,\n \"enforceSecureCommunication\" : true,\n \"sendInfoToken\" : true,\n \"sendStateChallenge\" : true,\n \"sendOtoroshiHeadersBack\" : true,\n \"readOnly\" : false,\n \"xForwardedHeaders\" : false,\n \"overrideHost\" : true,\n \"allowHttp10\" : true,\n \"letsEncrypt\" : 
false,\n \"secComHeaders\" : {\n \"claimRequestName\" : null,\n \"stateRequestName\" : null,\n \"stateResponseName\" : null\n },\n \"secComTtl\" : 30000,\n \"secComVersion\" : 1,\n \"secComInfoTokenVersion\" : \"Legacy\",\n \"secComExcludedPatterns\" : [ ],\n \"securityExcludedPatterns\" : [ ],\n \"publicPatterns\" : [ \"/health\", \"/metrics\" ],\n \"privatePatterns\" : [ ],\n \"additionalHeaders\" : {\n \"Host\" : \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"additionalHeadersOut\" : { },\n \"missingOnlyHeadersIn\" : { },\n \"missingOnlyHeadersOut\" : { },\n \"removeHeadersIn\" : [ ],\n \"removeHeadersOut\" : [ ],\n \"headersVerification\" : { },\n \"matchingHeaders\" : { },\n \"ipFiltering\" : {\n \"whitelist\" : [ ],\n \"blacklist\" : [ ]\n },\n \"api\" : {\n \"exposeApi\" : false\n },\n \"healthCheck\" : {\n \"enabled\" : false,\n \"url\" : \"/\"\n },\n \"clientConfig\" : {\n \"useCircuitBreaker\" : true,\n \"retries\" : 1,\n \"maxErrors\" : 20,\n \"retryInitialDelay\" : 50,\n \"backoffFactor\" : 2,\n \"callTimeout\" : 30000,\n \"callAndStreamTimeout\" : 120000,\n \"connectionTimeout\" : 10000,\n \"idleTimeout\" : 60000,\n \"globalTimeout\" : 30000,\n \"sampleInterval\" : 2000,\n \"proxy\" : { },\n \"customTimeouts\" : [ ],\n \"cacheConnectionSettings\" : {\n \"enabled\" : false,\n \"queueSize\" : 2048\n }\n },\n \"canary\" : {\n \"enabled\" : false,\n \"traffic\" : 0.2,\n \"targets\" : [ ],\n \"root\" : \"/\"\n },\n \"gzip\" : {\n \"enabled\" : false,\n \"excludedPatterns\" : [ ],\n \"whiteList\" : [ \"text/*\", \"application/javascript\", \"application/json\" ],\n \"blackList\" : [ ],\n \"bufferSize\" : 8192,\n \"chunkedThreshold\" : 102400,\n \"compressionLevel\" : 5\n },\n \"metadata\" : { },\n \"tags\" : [ ],\n \"chaosConfig\" : {\n \"enabled\" : false,\n \"largeRequestFaultConfig\" : null,\n \"largeResponseFaultConfig\" : null,\n \"latencyInjectionFaultConfig\" : null,\n \"badResponsesFaultConfig\" : null\n },\n \"jwtVerifier\" : {\n \"type\" : 
\"ref\",\n \"ids\" : [ ],\n \"id\" : null,\n \"enabled\" : false,\n \"excludedPatterns\" : [ ]\n },\n \"secComSettings\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComUseSameAlgo\" : true,\n \"secComAlgoChallengeOtoToBack\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComAlgoChallengeBackToOto\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComAlgoInfoToken\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"cors\" : {\n \"enabled\" : false,\n \"allowOrigin\" : \"*\",\n \"exposeHeaders\" : [ ],\n \"allowHeaders\" : [ ],\n \"allowMethods\" : [ ],\n \"excludedPatterns\" : [ ],\n \"maxAge\" : null,\n \"allowCredentials\" : true\n },\n \"redirection\" : {\n \"enabled\" : false,\n \"code\" : 303,\n \"to\" : \"https://www.otoroshi.io\"\n },\n \"authConfigRef\" : null,\n \"clientValidatorRef\" : null,\n \"transformerRef\" : null,\n \"transformerRefs\" : [ ],\n \"transformerConfig\" : { },\n \"apiKeyConstraints\" : {\n \"basicAuth\" : {\n \"enabled\" : true,\n \"headerName\" : null,\n \"queryName\" : null\n },\n \"customHeadersAuth\" : {\n \"enabled\" : true,\n \"clientIdHeaderName\" : null,\n \"clientSecretHeaderName\" : null\n },\n \"clientIdAuth\" : {\n \"enabled\" : true,\n \"headerName\" : null,\n \"queryName\" : null\n },\n \"jwtAuth\" : {\n \"enabled\" : true,\n \"secretSigned\" : true,\n \"keyPairSigned\" : true,\n \"includeRequestAttributes\" : false,\n \"maxJwtLifespanSecs\" : null,\n \"headerName\" : null,\n \"queryName\" : null,\n \"cookieName\" : null\n },\n \"routing\" : {\n \"noneTagIn\" : [ ],\n \"oneTagIn\" : [ ],\n \"allTagsIn\" : [ ],\n \"noneMetaIn\" : { },\n \"oneMetaIn\" : { },\n \"allMetaIn\" : { },\n \"noneMetaKeysIn\" : [ ],\n \"oneMetaKeyIn\" : [ ],\n \"allMetaKeysIn\" : [ ]\n 
}\n },\n \"restrictions\" : {\n \"enabled\" : false,\n \"allowLast\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"notFound\" : [ ]\n },\n \"accessValidator\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excludedPatterns\" : [ ]\n },\n \"preRouting\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excludedPatterns\" : [ ]\n },\n \"plugins\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excluded\" : [ ]\n },\n \"hosts\" : [ \"otoroshi-api.oto.tools\" ],\n \"paths\" : [ ],\n \"handleLegacyDomain\" : true,\n \"issueCert\" : false,\n \"issueCertCA\" : null\n } ],\n \"errorTemplates\" : [ ],\n \"jwtVerifiers\" : [ ],\n \"authConfigs\" : [ ],\n \"certificates\" : [],\n \"clientValidators\" : [ ],\n \"scripts\" : [ ],\n \"tcpServices\" : [ ],\n \"dataExporters\" : [ ],\n \"tenants\" : [ {\n \"id\" : \"default\",\n \"name\" : \"Default organization\",\n \"description\" : \"The default organization\",\n \"metadata\" : { },\n \"tags\" : [ ]\n } ],\n \"teams\" : [ {\n \"id\" : \"default\",\n \"tenant\" : \"default\",\n \"name\" : \"Default Team\",\n \"description\" : \"The default Team of the default organization\",\n \"metadata\" : { },\n \"tags\" : [ ]\n } ]\n }' \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \n```\n\nThis should output :\n\n```json\n{ \"done\":true }\n```\n\n> Note : be very carefully with this POST command. If you send a wrong JSON, you risk breaking your instance.\n\nThe second way is to send the same configuration but from a file. You can pass two kind of file : a `json` file or a `ndjson` file. 
Both files are available as export methods on the danger zone.\n\n```sh\n# the curl is run from a folder containing the initial-state.json file \ncurl -X POST \\\n -H \"Content-Type: application/json\" \\\n -d @./initial-state.json \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret\n```\n\nThis should output :\n\n```json\n{ \"done\":true }\n```\n\n> Note: To send an `ndjson` file, you have to set the Content-Type header to `application/x-ndjson`"},{"name":"index.md","id":"/how-to-s/index.md","url":"/how-to-s/index.html","title":"How to's","content":"# How to's\n\nIn this section, we will explain some mainstream Otoroshi usage scenarios.\n\n* @ref:[Otoroshi and WASM](./wasm-usage.md)\n* @ref:[WASM Manager](./wasm-manager-installation.md)\n* @ref:[Tailscale integration](./tailscale-integration.md)\n* @ref:[End-to-end mTLS](./end-to-end-mtls.md)\n* @ref:[Send alerts by emails](./export-alerts-using-mailgun.md)\n* @ref:[Export events to Elasticsearch](./export-events-to-elastic.md)\n* @ref:[Import/export Otoroshi datastore](./import-export-otoroshi-datastore.md)\n* @ref:[Secure an app with Auth0](./secure-app-with-auth0.md)\n* @ref:[Secure an app with Keycloak](./secure-app-with-keycloak.md)\n* @ref:[Secure an app with LDAP](./secure-app-with-ldap.md)\n* @ref:[Secure an api with apikeys](./secure-with-apikey.md)\n* @ref:[Secure an app with OAuth1](./secure-with-oauth1-client.md)\n* @ref:[Secure an api with OAuth2 client_credentials flow](./secure-with-oauth2-client-credentials.md)\n* @ref:[Setup an Otoroshi cluster](./setup-otoroshi-cluster.md)\n* @ref:[TLS termination using Let's Encrypt](./tls-using-lets-encrypt.md)\n* @ref:[Secure an app with jwt verifiers](./secure-an-app-with-jwt-verifiers.md)\n* @ref:[Secure the communication between a backend app and Otoroshi](./secure-the-communication-between-a-backend-app-and-otoroshi.md)\n* @ref:[TLS termination using your own 
certificates](./tls-termination-using-own-certificates.md)\n* @ref:[The resources loader](./resources-loader.md)\n* @ref:[Log levels customization](./custom-log-levels.md)\n* @ref:[Initial state customization](./custom-initial-state.md)\n* @ref:[Communicate with Kafka](./communicate-with-kafka.md)\n* @ref:[Create your custom Authentication module](./create-custom-auth-module.md)\n* @ref:[Working with Eureka](./working-with-eureka.md)\n* @ref:[Instantiate a WAF with Coraza](./instantiate-waf-coraza.md)\n\n@@@ index\n\n\n* [WASM usage](./wasm-usage.md)\n* [WASM Manager](./wasm-manager-installation.md)\n* [Tailscale integration](./tailscale-integration.md)\n* [End-to-end mTLS](./end-to-end-mtls.md)\n* [Send alerts by emails](./export-alerts-using-mailgun.md)\n* [Export events to Elasticsearch](./export-events-to-elastic.md)\n* [Import/export Otoroshi datastore](./import-export-otoroshi-datastore.md)\n* [Secure an app with Auth0](./secure-app-with-auth0.md)\n* [Secure an app with Keycloak](./secure-app-with-keycloak.md)\n* [Secure an app with LDAP](./secure-app-with-ldap.md)\n* [Secure an api with apikeys](./secure-with-apikey.md)\n* [Secure an app with OAuth1](./secure-with-oauth1-client.md)\n* [Secure an api with OAuth2 client_credentials flow](./secure-with-oauth2-client-credentials.md)\n* [Setup an Otoroshi cluster](./setup-otoroshi-cluster.md)\n* [TLS termination using Let's Encrypt](./tls-using-lets-encrypt.md)\n* [Secure an app with jwt verifiers](./secure-an-app-with-jwt-verifiers.md)\n* [Secure the communication between a backend app and Otoroshi](./secure-the-communication-between-a-backend-app-and-otoroshi.md)\n* [TLS termination using your own certificates](./tls-termination-using-own-certificates.md)\n* [The resources loader](./resources-loader.md)\n* [Log levels customization](./custom-log-levels.md)\n* [Initial state customization](./custom-initial-state.md)\n* [Communicate with Kafka](./communicate-with-kafka.md)\n* [Create your custom Authentication 
module](./create-custom-auth-module.md)\n* [Working with Eureka](./working-with-eureka.md)\n* [Instantiate a WAF with Coraza](./instantiate-waf-coraza.md)\n@@@\n"},{"name":"instantiate-waf-coraza.md","id":"/how-to-s/instantiate-waf-coraza.md","url":"/how-to-s/instantiate-waf-coraza.html","title":"Instantiate a WAF with Coraza","content":"# Instantiate a WAF with Coraza\n\nSometimes you may want to secure an app with a [Web Application Firewall (WAF)](https://en.wikipedia.org/wiki/Web_application_firewall) and apply the security rules from the [OWASP Core Rule Set](https://owasp.org/www-project-modsecurity-core-rule-set/). To allow that, we integrated [the Coraza WAF](https://coraza.io/) in Otoroshi through a plugin that uses the WASM version of Coraza.\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Create a WAF configuration\n\nFirst, go to [the features page of Otoroshi](http://otoroshi.oto.tools:8080/bo/dashboard/features) and then click on the [Coraza WAF configs item](http://otoroshi.oto.tools:8080/bo/dashboard/extensions/coraza-waf/coraza-configs). 
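\n\nWhen you build the directives configuration by hand, it can be handy to sanity-check the JSON locally before pasting it in the UI or sending it to the admin api. A minimal sketch (the file path is illustrative, and the payload is the default configuration used in this tutorial):\n\n```sh\n# write the default Coraza directives configuration to a temp file\ncat > /tmp/coraza-config.json <<'EOF'\n{\n \"directives_map\": {\n \"default\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRuleEngine DetectionOnly\"\n ]\n },\n \"default_directives\": \"default\",\n \"per_authority_directives\": {}\n}\nEOF\n# json.tool exits non-zero (with a parse error) if the JSON is malformed\npython3 -m json.tool /tmp/coraza-config.json > /dev/null && echo \"valid json\"\n```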
\n\nNow create a new configuration, give it a name and a description, ensure that you enable the `Inspect req/res body` flag, and save your configuration.\n\nThe corresponding admin api call is the following:\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/apis/coraza-waf.extensions.otoroshi.io/v1/coraza-configs' \\\n -u admin-api-apikey-id:admin-api-apikey-secret -H 'Content-Type: application/json' -d '\n{\n \"id\": \"coraza-waf-demo\",\n \"name\": \"My blocking WAF\",\n \"description\": \"An awesome WAF\",\n \"inspect_body\": true,\n \"config\": {\n \"directives_map\": {\n \"default\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRuleEngine DetectionOnly\"\n ]\n },\n \"default_directives\": \"default\",\n \"per_authority_directives\": {}\n }\n}'\n```\n\n### Configure Coraza and the OWASP Core Rule Set\n\nNow you can easily configure the Coraza WAF in the `json` config section. By default it should look something like this:\n\n```json\n{\n \"directives_map\": {\n \"default\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRuleEngine DetectionOnly\"\n ]\n },\n \"default_directives\": \"default\",\n \"per_authority_directives\": {}\n}\n```\n\nYou can find everything about it in [the documentation of Coraza](https://coraza.io/docs/tutorials/introduction/).\n\nHere we have the basic setup to apply the OWASP Core Rule Set in detection mode only. 
\nSo each time Coraza finds something suspicious in a request, it will only log it but let the request pass.\nWe can enable blocking by setting `\"SecRuleEngine On\"`.\n\nWe can also deny access to the `/admin` URI by adding the following directive:\n\n```json\n\"SecRule REQUEST_URI \\\"@streq /admin\\\" \\\"id:101,phase:1,t:lowercase,deny\\\"\"\n```\n\nYou can also provide multiple profiles of rules in the `directives_map` with different names and use the `per_authority_directives` object to map hostnames to a specific profile.\n\nThe corresponding admin api call is the following:\n\n```sh\ncurl -X PUT 'http://otoroshi-api.oto.tools:8080/apis/coraza-waf.extensions.otoroshi.io/v1/coraza-configs/coraza-waf-demo' \\\n -u admin-api-apikey-id:admin-api-apikey-secret -H 'Content-Type: application/json' -d '\n{\n \"id\": \"coraza-waf-demo\",\n \"name\": \"My blocking WAF\",\n \"description\": \"An awesome WAF\",\n \"inspect_body\": true,\n \"config\": {\n \"directives_map\": {\n \"default\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRule REQUEST_URI \\\"@streq /admin\\\" \\\"id:101,phase:1,t:lowercase,deny\\\"\",\n \"SecRuleEngine On\"\n ]\n },\n \"default_directives\": \"default\",\n \"per_authority_directives\": {}\n }\n}'\n```\n\n### Add the WAF plugin on your route\n\nNow you can create a new route that will use your WAF configuration. Let's say we want a route on `http://wouf.oto.tools:8080` that goes to `https://www.otoroshi.io`. 
Now add the `Coraza WAF` plugin to your route and, in its configuration, select the configuration you created previously.\n\nThe corresponding admin api call is the following:\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n -H 'Content-Type: application/json' -d '\n{\n \"id\": \"route_demo\",\n \"name\": \"WAF route\",\n \"description\": \"A new route with a WAF enabled\",\n \"frontend\": {\n \"domains\": [\n \"wouf.oto.tools\"\n ]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"www.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.wasm.proxywasm.NgCorazaWAF\",\n \"config\": {\n \"ref\": \"coraza-waf-demo\"\n },\n \"plugin_index\": {\n \"validate_access\": 0,\n \"transform_request\": 0,\n \"transform_response\": 0\n }\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\",\n \"plugin_index\": {\n \"transform_request\": 1\n }\n }\n ]\n}'\n```\n\n### Try to use an exploit ;)\n\nLet's try to trigger Coraza with a crafted Log4Shell request:\n\n```sh\ncurl 'http://wouf.oto.tools:8080' -H 'foo: ${jndi:rmi://foo/bar}' --include\n\nHTTP/1.1 403 Forbidden\nDate: Thu, 25 May 2023 09:47:04 GMT\nContent-Type: text/plain\nContent-Length: 0\n\n```\n\nor try to access `/admin`:\n\n```sh\ncurl 'http://wouf.oto.tools:8080/admin' --include\n\nHTTP/1.1 403 Forbidden\nDate: Thu, 25 May 2023 09:47:04 GMT\nContent-Type: text/plain\nContent-Length: 0\n\n```\n\nIf you look at the Otoroshi logs you will find something like:\n\n```log\n[error] otoroshi-proxy-wasm - [client \"127.0.0.1\"] Coraza: Warning. 
Potential Remote Command Execution: Log4j / Log4shell \n [file \"@owasp_crs/REQUEST-944-APPLICATION-ATTACK-JAVA.conf\"] [line \"10608\"] [id \"944150\"] [rev \"\"] \n [msg \"Potential Remote Command Execution: Log4j / Log4shell\"] [data \"\"] [severity \"critical\"] \n [ver \"OWASP_CRS/4.0.0-rc1\"] [maturity \"0\"] [accuracy \"0\"] [tag \"application-multi\"] \n [tag \"language-java\"] [tag \"platform-multi\"] [tag \"attack-rce\"] [tag \"OWASP_CRS\"] \n [tag \"capec/1000/152/137/6\"] [tag \"PCI/6.5.2\"] [tag \"paranoia-level/1\"] [hostname \"wwwwouf.oto.tools\"] \n [uri \"/\"] [unique_id \"uTYakrlgMBydVGLodbz\"]\n[error] otoroshi-proxy-wasm - [client \"127.0.0.1\"] Coraza: Warning. Inbound Anomaly Score Exceeded (Total Score: 5) \n [file \"@owasp_crs/REQUEST-949-BLOCKING-EVALUATION.conf\"] [line \"11029\"] [id \"949110\"] [rev \"\"] \n [msg \"Inbound Anomaly Score Exceeded (Total Score: 5)\"] \n [data \"\"] [severity \"emergency\"] [ver \"OWASP_CRS/4.0.0-rc1\"] [maturity \"0\"] [accuracy \"0\"] \n [tag \"anomaly-evaluation\"] [hostname \"wwwwouf.oto.tools\"] [uri \"/\"] [unique_id \"uTYakrlgMBydVGLodbz\"]\n[info] otoroshi-proxy-wasm - Transaction interrupted tx_id=\"uTYakrlgMBydVGLodbz\" context_id=3 action=\"deny\" phase=\"http_response_headers\"\n...\n[error] otoroshi-proxy-wasm - [client \"127.0.0.1\"] Coraza: Warning. [file \"\"] [line \"12914\"] \n [id \"101\"] [rev \"\"] [msg \"\"] [data \"\"] [severity \"emergency\"] [ver \"\"] [maturity \"0\"] [accuracy \"0\"] \n [hostname \"wwwwouf.oto.tools\"] [uri \"/admin\"] [unique_id \"mqXZeMdzRaVAqIiqvHf\"]\n[info] otoroshi-proxy-wasm - Transaction interrupted tx_id=\"mqXZeMdzRaVAqIiqvHf\" context_id=2 action=\"deny\" phase=\"http_request_headers\"\n```\n\n### Generated events\n\nEach time Coraza generates a log about a vulnerability detection, an event is generated in Otoroshi and exported through the usual data exporter mechanism. 
The event will look like :\n\n```json\n{\n \"@id\" : \"86b647450-3cc7-42a9-aaec-828d261a8c74\",\n \"@timestamp\" : 1684938211157,\n \"@type\" : \"CorazaTrailEvent\",\n \"@product\" : \"otoroshi\",\n \"@serviceId\" : \"--\",\n \"@service\" : \"--\",\n \"@env\" : \"prod\",\n \"level\" : \"ERROR\",\n \"msg\" : \"Coraza: Warning. Potential Remote Command Execution: Log4j / Log4shell\",\n \"fields\" : {\n \"hostname\" : \"wouf.oto.tools\",\n \"maturity\" : \"0\",\n \"line\" : \"10608\",\n \"unique_id\" : \"oNbisKlXWaCdXntaUpq\",\n \"tag\" : \"paranoia-level/1\",\n \"data\" : \"\",\n \"accuracy\" : \"0\",\n \"uri\" : \"/\",\n \"rev\" : \"\",\n \"id\" : \"944150\",\n \"client\" : \"127.0.0.1\",\n \"ver\" : \"OWASP_CRS/4.0.0-rc1\",\n \"file\" : \"@owasp_crs/REQUEST-944-APPLICATION-ATTACK-JAVA.conf\",\n \"msg\" : \"Potential Remote Command Execution: Log4j / Log4shell\",\n \"severity\" : \"critical\"\n },\n \"raw\" : \"[client \\\"127.0.0.1\\\"] Coraza: Warning. Potential Remote Command Execution: Log4j / Log4shell [file \\\"@owasp_crs/REQUEST-944-APPLICATION-ATTACK-JAVA.conf\\\"] [line \\\"10608\\\"] [id \\\"944150\\\"] [rev \\\"\\\"] [msg \\\"Potential Remote Command Execution: Log4j / Log4shell\\\"] [data \\\"\\\"] [severity \\\"critical\\\"] [ver \\\"OWASP_CRS/4.0.0-rc1\\\"] [maturity \\\"0\\\"] [accuracy \\\"0\\\"] [tag \\\"application-multi\\\"] [tag \\\"language-java\\\"] [tag \\\"platform-multi\\\"] [tag \\\"attack-rce\\\"] [tag \\\"OWASP_CRS\\\"] [tag \\\"capec/1000/152/137/6\\\"] [tag \\\"PCI/6.5.2\\\"] [tag \\\"paranoia-level/1\\\"] [hostname \\\"wouf.oto.tools\\\"] [uri \\\"/\\\"] [unique_id \\\"oNbisKlXWaCdXntaUpq\\\"]\\n\",\n}\n```"},{"name":"resources-loader.md","id":"/how-to-s/resources-loader.md","url":"/how-to-s/resources-loader.html","title":"The resources loader","content":"# The resources loader\n\nThe resources loader is a tool to create an Otoroshi resource from a raw content. 
This content can be found on each Otoroshi resource page (service descriptors, apikeys, certificates, etc.). To get the content of a resource as a file, you can use the two export buttons, one to export in JSON format and the other in YAML format.\n\nOnce exported, the content of the resource can be imported with the resource loader. You can import single or multiple resources at once, in JSON or YAML format.\n\nThe resource loader is available on this route [`bo/dashboard/resources-loader`](http://otoroshi.oto.tools:8080/bo/dashboard/resources-loader).\n\nOn this page, you can paste the content of your resources and click on **Load resources**.\n\nFor each detected resource, the loader will display:\n\n* a resource name corresponding to the field `name` \n* a resource type corresponding to the type of created resource (ServiceDescriptor, ApiKey, Certificate, etc)\n* a toggle to choose if you want to include the element in the creation step\n* the status updated by the creation process\n\nOnce you have selected the resources to create, you can **Import selected resources**.\n\nOnce generated, all statuses will be updated. If everything worked, the status will be equal to `done`.\n\nIf you want to get back to the initial page, you can use the **restart** button."},{"name":"secure-an-app-with-jwt-verifiers.md","id":"/how-to-s/secure-an-app-with-jwt-verifiers.md","url":"/how-to-s/secure-an-app-with-jwt-verifiers.html","title":"Secure an api with jwt verifiers","content":"# Secure an api with jwt verifiers\n\nA Jwt verifier is the guard that verifies the signature of tokens in requests. \n\nA verifier can verify incoming tokens or generate new ones.\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Your first jwt verifier\n\nLet's start by validating all incoming request tokens on our simple route created in the @ref:[Before you start](#before-you-start) section.\n\n1. Navigate to the simple route\n2. 
Search the list of plugins and add the `Jwt verification only` plugin to the flow\n3. Click on `Start by select or create a JWT Verifier`\n4. Create a new JWT verifier\n5. Set `simple-jwt-verifier` as `Name`\n6. Select `Hmac + SHA` as `Algo` (for this example, we expect tokens with a symmetric signature), `512` as `SHA size` and `otoroshi` as `HMAC secret`\n7. Confirm the creation \n\nSave your route and try to call it\n\n```sh\ncurl -X GET 'http://myservice.oto.tools:8080/' --include\n```\n\nThis should output : \n```json\n{\n \"Otoroshi-Error\": \"error.expected.token.not.found\"\n}\n```\n\nA simple way to generate a token is to use @link:[jwt.io](http://jwt.io) { open=new }. Once there, set `HS512` as `alg` in the header section and insert `otoroshi` as the verify-signature secret. \n\nOnce created, copy-paste the token from jwt.io into the `X-JWT-Token` header and call our service.\n\n```sh\n# replace xxxx by the generated token\ncurl -X GET \\\n -H \"X-JWT-Token: xxxx\" \\\n 'http://myservice.oto.tools:8080'\n```\n\nThis should output a json with `X-JWT-Token` in the headers field. Its value is exactly the same as the passed token.\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.otoroshi.io\",\n \"X-JWT-Token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.ipDFgkww51mSaSg_199BMRj4gK20LGz_czozu3u8rCFFO1X20MwcabSqEzUc0q4qQ4rjTxjoR4HeUDVcw8BxoQ\",\n ...\n }\n}\n```\n\n### Verify and generate a new token\n\nAnother feature is to verify incoming tokens and generate new ones, with a different signature and claims. \n\nLet's start by extending the @link:[previous verifier](http://otoroshi.oto.tools:8080/bo/dashboard/jwt-verifiers) { open=new }.\n\n1. Jump to the `Verif Strategy` field and select `Verify and re-sign JWT token`. \n2. Edit the name to `jwt-verify-and-resign`\n3. Remove the default field in the `Verify token fields` array\n4. 
Change the second `Hmac secret` in the `Re-sign settings` section to `otoroshi-internal-secret`\n5. Save your verifier.\n\n> Note: the name of the verifier doesn't impact its identifier, so you can save changes to your verifier without modifying the identifier used in your call. \n\n```sh\n# replace xxxx by the generated token\ncurl -X GET \\\n -H \"Authorization: xxxx\" \\\n 'http://myservice.oto.tools:8080'\n```\n\nThis should output a json with `authorization` in the headers field. This time, the value is different and you can check its signature on @link:[jwt.io](https://jwt.io) { open=new } (the expected secret of the generated token is **otoroshi-internal-secret**)\n\n\n\n### Verify, transform and generate a new token\n\nThe most advanced verifier is able to do the same as the previous ones, with the ability to configure the token generation (claims, output header name).\n\nLet's start by extending the @link:[previous verifier](http://otoroshi.oto.tools:8080/bo/dashboard/jwt-verifiers) { open=new }.\n\n1. Jump to the `Verif Strategy` field and select `Verify, transform and re-sign JWT token`. \n\n2. Edit the name to `jwt-verify-transform-and-resign`\n3. Remove the default field in the `Verify token fields` array\n4. Change the second `Hmac secret` in the `Re-sign settings` section to `otoroshi-internal-secret`\n5. Set `Internal-Authorization` as `Header name`\n6. Set `key` in the first field of `Rename token fields` and `from-otoroshi-verifier` in the second field\n7. Set `generated-key` and `generated-value` as `Set token fields`\n8. Add `generated_at` and `${date}` as the second entry of `Set token fields` (Otoroshi supports an @ref:[expression language](../topics/expression-language.md))\n9. 
Save your verifier and try to call your service again.\n\nThis should output a json with `authorization` in the headers field and our generated token in `Internal-Authorization`.\nOnce pasted in @link:[jwt.io](https://jwt.io) { open=new }, you should see:\n\n\n\nYou can see, in the payload of your token, the two claims **from-otoroshi-verifier** and **generated-key** added during the generation of the token by the JWT verifier.\n"},{"name":"secure-app-with-auth0.md","id":"/how-to-s/secure-app-with-auth0.md","url":"/how-to-s/secure-app-with-auth0.html","title":"Secure an app with Auth0","content":"# Secure an app with Auth0\n\n### Download Otoroshi\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Configure an Auth0 client\n\nThe first step of this tutorial is to set up an Auth0 application with the information of our Otoroshi instance.\n\nNavigate to @link:[https://manage.auth0.com](https://manage.auth0.com) { open=new } (create an account if it's not already done). \n\nLet's create an application by clicking on the **Applications** button on the sidebar. Then click on the **Create application** button on the top right.\n\n1. Choose `Regular Web Applications` as `Application type`\n2. Then set for example `otoroshi-client` as `Name`, and confirm the creation\n3. Jump to the `Settings` tab\n4. Scroll to the `Application URLs` section and add the following url as `Allowed Callback URLs`: `http://otoroshi.oto.tools:8080/backoffice/auth0/callback`\n5. Set `https://otoroshi.oto.tools:8080/` as `Allowed Logout URLs`\n6. Set `https://otoroshi.oto.tools:8080` as `Allowed Web Origins` \n7. 
Save changes at the bottom of the page.\n\nOnce done, we have a full setup, with a client ID and secret at the top of the page, which authorizes our Otoroshi and redirects the user to the callback url when they log into Auth0.\n\n### Create an Auth0 provider module\n\nLet's go back to Otoroshi to create an authentication module with `OAuth2 / OIDC provider` as `type`.\n\n1. Go ahead, and navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new }\n1. Click on the cog icon on the top right\n1. Then `Authentication configs` button\n1. And add a new configuration by clicking on the `Add item` button\n2. Select the `OAuth provider` in the type selector field\n3. Then click on `Get from OIDC config` and paste `https://..auth0.com/.well-known/openid-configuration`. Replace the tenant name by the name of your tenant (displayed on the top left of the auth0 page), and the region of the tenant (`eu` in my case).\n\nOnce done, set the `Client ID` and the `Client secret` from your Auth0 application. End the configuration with `http://otoroshi.oto.tools:8080/backoffice/auth0/callback` as `Callback URL`.\n\nAt the bottom of the page, disable the `secure` button (we're using http here, and the secure flag would prevent the cookie from being sent over a non-HTTPS channel).\n\n### Connect to Otoroshi with Auth0 authentication\n\nTo secure Otoroshi with your Auth0 configuration, we have to register an **Authentication configuration** as a BackOffice Auth. configuration.\n\n1. Navigate to the **danger zone** (by clicking on the cog on the top right and selecting Danger zone)\n2. Scroll to the **BackOffice auth. settings**\n3. Select your last Authentication configuration (created in the previous section)\n4. Save the global configuration with the button on the top right\n\n#### Testing your configuration\n\n1. Disconnect from your instance\n1. 
Then click on the *Login using third-party* button (or navigate to http://otoroshi.oto.tools:8080)\n2. Click on the **Login using Third-party** button\n3. If everything is configured, Otoroshi will redirect you to the auth0 server login page\n4. Enter your account credentials\n5. Good work! You're connected to Otoroshi with an Auth0 module.\n\n### Secure an app with Auth0 authentication\n\nWith the previous configuration, you can secure any Otoroshi service. \n\nThe first step is to apply a little change to the previous configuration. \n\n1. Navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/auth-configs](http://otoroshi.oto.tools:8080/bo/dashboard/auth-configs) { open=new }.\n2. Create a new **Authentication module** configuration with the same values.\n3. Replace the `Callback URL` field with `http://privateapps.oto.tools:8080/privateapps/generic/callback` (this value changes because the redirection of a logged-in user from a third-party server is handled by another Otoroshi route).\n4. Disable the `secure` button (we're using http here, and the secure flag would prevent the cookie from being sent over a non-HTTPS channel)\n\n> Note : an Otoroshi service is called **a private app** when it is protected by an Authentication module.\n\nNow set the Authentication module on your route.\n\n1. Navigate to any created route\n2. Search the list of plugins for the plugin named `Authentication`\n3. Select your Authentication config inside the list\n4. Don't forget to save your configuration.\n5. 
Now you can try to call your route and see the Auth0 login page appear.\n\n\n"},{"name":"secure-app-with-keycloak.md","id":"/how-to-s/secure-app-with-keycloak.md","url":"/how-to-s/secure-app-with-keycloak.html","title":"Secure an app with Keycloak","content":"# Secure an app with Keycloak\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Running a Keycloak instance with Docker\n\n```sh\ndocker run \\\n -p 8080:8080 \\\n -e KEYCLOAK_USER=admin \\\n -e KEYCLOAK_PASSWORD=admin \\\n --name keycloak-server \\\n --detach jboss/keycloak:15.0.1\n```\n\nThis should download the Keycloak image (if you don't already have it) and display the digest of the created container. This command maps TCP port 8080 in the container to port 8080 on your machine and creates a server with `admin/admin` as admin credentials.\n\nOnce started, you can open a browser on @link:[http://localhost:8080](http://localhost:8080) { open=new } and click on `Administration Console`. Log in to your instance with `admin/admin` as credentials.\n\nThe first step is to create a Keycloak client, an entity that can request Keycloak to authenticate a user. Click on the **clients** button on the sidebar, and then on the **Create** button at the top right of the view.\n\nFill in the client form with the following values.\n\n* `Client ID`: `keycloak-otoroshi-backoffice`\n* `Client Protocol`: `openid-connect`\n* `Root URL`: `http://otoroshi.oto.tools:8080/`\n\nValidate the creation of the client by clicking on the **Save** button.\n\nThe next step is to change the `Access Type` used by default. Jump to the `Access Type` field and select `confidential`. The confidential access type forces the client application to send Keycloak a client ID and a client secret. Scroll to the bottom of the page and save the configuration.\n\nNow scroll to the top of your page. Just to the right of the `Settings` tab, a new tab has appeared : the `Credentials` page. 
Click on this tab, make sure that `Client Id and Secret` is selected as `Client Authenticator`, and copy the generated `Secret` for the next part.\n\n### Create a Keycloak provider module\n\n1. Go ahead, and navigate to http://otoroshi.oto.tools:8080\n1. Click on the cog icon on the top right\n1. Then `Authentication configs` button\n1. And add a new configuration by clicking on the `Add item` button\n2. Select the `OAuth2 / OIDC provider` in the type selector field\n3. Set a basic name and description\n\nA simple way to import a Keycloak client is to give Otoroshi the URL of the OpenID Connect configuration. By default, Keycloak uses the following URL : `http://localhost:8080/auth/realms/master/.well-known/openid-configuration`. \n\nClick on the `Get from OIDC config` button and paste the previous link. Once it's done, scroll to the `URLs` section. All URLs have been filled with the values picked from the JSON object returned by the previous URL.\n\nThe only fields to change are : \n\n* `Client ID`: `keycloak-otoroshi-backoffice`\n* `Client Secret`: Paste the secret from the Credentials Keycloak page. In my case, it's something like `90c9bf0b-2c0c-4eb0-aa02-72195beb9da7`\n* `Callback URL`: `http://otoroshi.oto.tools:8080/backoffice/auth0/callback`\n\nAt the bottom of the page, disable the `secure` button (we're using http here, and the secure flag would prevent the cookie from being sent over a non-HTTPS channel). Nothing else to change, just save the configuration.\n\n### Connect to Otoroshi with Keycloak authentication\n\nTo secure Otoroshi with your Keycloak configuration, we have to register an Authentication configuration as a BackOffice Auth. configuration.\n\n1. Navigate to the **danger zone** (by clicking on the cog on the top right and selecting Danger zone)\n1. Scroll to the **BackOffice auth. settings**\n1. Select your last Authentication configuration (created in the previous section)\n1. 
Save the global configuration with the button on the top right\n\n### Testing your configuration\n\n1. Disconnect from your instance\n1. Then click on the **Login using third-party** button (or navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new })\n2. Click on the **Login using Third-party** button\n3. If everything is configured, Otoroshi will redirect you to the keycloak login page\n4. Log in with `admin/admin` and trust the user by clicking on the `yes` button.\n5. Good work! You're connected to Otoroshi with a Keycloak module.\n\n> A fallback solution is always available in the event of a bad authentication configuration. By going to http://otoroshi.oto.tools:8080/bo/simple/login, the administrators will be able to redefine the configuration.\n\n### Visualize an admin user session or a private user session\n\nEach user, whether connected to the Otoroshi UI or to a private Otoroshi app, has their own session. As an administrator of Otoroshi, you can view the list of connected users and their profiles.\n\nLet's start by navigating to the `Admin users sessions` page (just @link:[here](http://otoroshi.oto.tools:8080/bo/dashboard/sessions/admin) or by clicking on the cog, and on the `Admins sessions` button at the bottom of the list).\n\nThis page gives a complete view of the connected admins. For each admin, you can see the connection date and the session expiration date. 
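The profiles shown in these session views come from OpenID Connect ID tokens, which are plain JWTs. If you ever want to inspect one outside the UI instead of pasting it into jwt.io, the payload segment is just base64url-encoded JSON. A minimal decoding sketch (no signature verification, inspection only; the token below is a made-up sample, not one issued by Keycloak):

```sh
# Decode the payload (second dot-separated segment) of a JWT.
# This does NOT verify the signature, it only inspects the claims.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # restore the base64 padding stripped by the JWT encoding
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# made-up sample token whose payload is {"upn":"admin"}
jwt_payload 'eyJhbGciOiJIUzI1NiJ9.eyJ1cG4iOiJhZG1pbiJ9.fake-signature'
# → {"upn":"admin"}
```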
You can also check the `Profile` and the `Rights` of the connected users.\n\nIf we check the profile and the rights of the previously logged user (from Keycloak in the previous part) we can retrieve the following information :\n\n```json\n{\n "sub": "4c8cd101-ca28-4611-80b9-efa504ac51fd",\n "upn": "admin",\n "email_verified": false,\n "address": {},\n "groups": [\n "create-realm",\n "default-roles-master",\n "offline_access",\n "admin",\n "uma_authorization"\n ],\n "preferred_username": "admin"\n}\n```\n\nand their default rights \n\n```sh\n[\n {\n "tenant": "default:rw",\n "teams": [\n "default:rw"\n ]\n }\n]\n```\n\nWe haven't created any specific groups in Keycloak or specified rights in Otoroshi for this user. In this case, the user receives the default Otoroshi rights on connection. The user can navigate the default Organization and Team (two resources created by Otoroshi at boot) and has full access to them (`r`: read, `w`: write, `rw`: read/write).\n\nIn the same way, you'll find all users connected to a private Otoroshi app when navigating to the @link:[`Private App View`](http://otoroshi.oto.tools:8080/bo/dashboard/sessions/private) or using the cog at the top of the page. \n\n### Configure the Keycloak module to force logged in users to be an Otoroshi admin with full access\n\nGo back to the Keycloak module in the `Authentication configs` view. Turn on the `Supers admin only` button and save your configuration. Try connecting to Otoroshi again using the Keycloak third-party server.\n\nOnce connected, click on the cog button, and check that you have access to the full features of Otoroshi (like Admin user sessions). Now, your rights should be : \n```json\n[\n {\n "tenant": "*:rw",\n "teams": [\n "*:rw"\n ]\n }\n]\n```\n\n### Merge Id token content on user profile\n\nGo back to the Keycloak module in the `Authentication configs` view. Turn on the `Read profile from token` button and save your configuration. 
Try connecting to Otoroshi again using the Keycloak third-party server.\n\nOnce connected, your profile should contain the whole Keycloak id token : \n```json\n{\n "exp": 1634286674,\n "iat": 1634286614,\n "auth_time": 1634286614,\n "jti": "eb368578-e886-4caa-a51b-c1d04973c80e",\n "iss": "http://localhost:8080/auth/realms/master",\n "aud": [\n "master-realm",\n "account"\n ],\n "sub": "4c8cd101-ca28-4611-80b9-efa504ac51fd",\n "typ": "Bearer",\n "azp": "keycloak-otoroshi-backoffice",\n "session_state": "e44fe471-aa3b-477d-b792-4f7b4caea220",\n "acr": "1",\n "allowed-origins": [\n "http://otoroshi.oto.tools:8080"\n ],\n "realm_access": {\n "roles": [\n "create-realm",\n "default-roles-master",\n "offline_access",\n "admin",\n "uma_authorization"\n ]\n },\n "resource_access": {\n "master-realm": {\n "roles": [\n "view-identity-providers",\n "view-realm",\n "manage-identity-providers",\n "impersonation",\n "create-client",\n "manage-users",\n "query-realms",\n "view-authorization",\n "query-clients",\n "query-users",\n "manage-events",\n "manage-realm",\n "view-events",\n "view-users",\n "view-clients",\n "manage-authorization",\n "manage-clients",\n "query-groups"\n ]\n },\n "account": {\n "roles": [\n "manage-account",\n "manage-account-links",\n "view-profile"\n ]\n }\n }\n ...\n}\n```\n\n### Manage the Otoroshi user rights from keycloak\n\nOne powerful feature supported by Otoroshi is using the Keycloak group attributes to set a list of rights for an Otoroshi user.\n\nIn the Keycloak module, there is a field named `Otoroshi rights field name` with `otoroshi_rights` as default value. This field is used by Otoroshi to retrieve information from the Id token groups.\n\nLet's create a group in Keycloak, and put our default Admin user inside.\nIn the Keycloak admin console :\n\n1. Navigate to the groups view, using the keycloak sidebar\n2. 
Create a new group with `my-group` as `Name`\n3. Then, on the `Attributes` tab, create an attribute with `otoroshi_rights` as `Key` and the following json array as `Value`\n```json\n[\n {\n "tenant": "*:rw",\n "teams": [\n "*:rw",\n "my-future-team:rw"\n ]\n }\n]\n```\n\nWith this configuration, the user has full access to all Otoroshi resources (my-future-team is not created in Otoroshi yet, but that's not a problem: Otoroshi can handle it and will apply these rights once the team is present)\n\nClick on the **Add** button and **save** the group. The last step is to assign our user to this group. Jump to the `Users` view using the sidebar, click on **View all users**, and edit the user's group membership using the `Groups` tab (use the **join** button to assign the user to `my-group`).\n\nThe next step is to add a mapper in the Keycloak client. By default, Keycloak doesn't expose any user information (like group membership or user attributes). We need to ask Keycloak to expose the user attribute `otoroshi_rights` set previously on the group.\n\nNavigate to the `Keycloak-otoroshi-backoffice` client, and jump to the `Mappers` tab. Create a new mapper with the following values: \n\n* Name: `otoroshi_rights`\n* Mapper Type: `User Attribute`\n* User Attribute: `otoroshi_rights`\n* Token Claim Name: `otoroshi_rights`\n* Claim JSON Type: `JSON`\n* Multivalued: `√`\n* Aggregate attribute values: `√`\n\nGo back to the Keycloak authentication module inside the Otoroshi UI, and turn off **Super admins only**. **Save** the configuration.\n\nOnce done, try connecting to Otoroshi again using the Keycloak third-party server.\nNow, your rights should be : \n```json\n[\n {\n "tenant": "*:rw",\n "teams": [\n "*:rw",\n "my-future-team:rw"\n ]\n }\n]\n```\n\n### Secure an app with Keycloak authentication\n\nThe only change to apply to the previous authentication module is on the callback URL. 
When you want to secure an Otoroshi service and turn it into a `Private App`, you need to set the `Callback URL` to `http://privateapps.oto.tools:8080/privateapps/generic/callback`. This configuration will redirect users to the backend service after they have successfully logged in.\n\n1. Go back to the authentication module\n2. Jump to the `Callback URL` field\n3. Paste this value `http://privateapps.oto.tools:8080/privateapps/generic/callback`\n4. Save your configuration\n5. Navigate to `http://myservice.oto.tools:8080`.\n6. You should be redirected to the keycloak login page.\n7. Once logged in, you can check the content of the private app session created.\n\nThe rights should be : \n\n```json\n[\n {\n "tenant": "*:rw",\n "teams": [\n "*:rw",\n "my-future-team:rw"\n ]\n }\n]\n```"},{"name":"secure-app-with-ldap.md","id":"/how-to-s/secure-app-with-ldap.md","url":"/how-to-s/secure-app-with-ldap.html","title":"Secure an app and/or your Otoroshi UI with LDAP","content":"# Secure an app and/or your Otoroshi UI with LDAP\n\n### Before you start\n\n@@include[fetch-and-start.md](../includes/fetch-and-start.md) { #init }\n\n#### Running a simple OpenLDAP server \n\nRun the OpenLDAP docker image : \n```sh\ndocker run \\\n -p 389:389 \\\n -p 636:636 \\\n --env LDAP_ORGANISATION="Otoroshi company" \\\n --env LDAP_DOMAIN="otoroshi.tools" \\\n --env LDAP_ADMIN_PASSWORD="otoroshi" \\\n --env LDAP_READONLY_USER="false" \\\n --env LDAP_TLS="false" \\\n --env LDAP_TLS_ENFORCE="false" \\\n --name my-openldap-container \\\n --detach osixia/openldap:1.5.0\n```\n\nLet's make the first search in our LDAP container :\n\n```sh\ndocker exec my-openldap-container ldapsearch -x -H ldap://localhost -b dc=otoroshi,dc=tools -D "cn=admin,dc=otoroshi,dc=tools" -w otoroshi\n```\n\nThis should output :\n```sh\n# extended LDIF\n ...\n# otoroshi.tools\ndn: dc=otoroshi,dc=tools\nobjectClass: top\nobjectClass: dcObject\nobjectClass: organization\no: Otoroshi company\ndc: otoroshi\n\n# 
search result\nsearch: 2\nresult: 0 Success\n...\n```\n\nNow you can seed the OpenLDAP server with a few users. \n\nOpen a shell inside your LDAP container.\n\n```sh\ndocker exec -it my-openldap-container "/bin/bash"\n```\n\nThe `ldapadd` command needs a file to run.\n\nRun this command to create a `bootstrap.ldif` file with one organization, a singers group containing the user John, and a scientists group containing Baz.\n\n```sh\necho -e "\ndn: ou=People,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: organizationalUnit\nou: People\n\ndn: ou=Role,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: organizationalUnit\nou: Role\n\ndn: uid=john,ou=People,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nuid: john\ncn: John\nsn: Brown\nmail: john@otoroshi.tools\npostalCode: 88442\nuserPassword: password\n\ndn: uid=baz,ou=People,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nuid: baz\ncn: Baz\nsn: Wilson\nmail: baz@otoroshi.tools\npostalCode: 88443\nuserPassword: password\n\ndn: cn=singers,ou=Role,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: groupOfNames\ncn: singers\nmember: uid=john,ou=People,dc=otoroshi,dc=tools\n\ndn: cn=scientists,ou=Role,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: groupOfNames\ncn: scientists\nmember: uid=baz,ou=People,dc=otoroshi,dc=tools\n" > bootstrap.ldif\n\nldapadd -x -w otoroshi -D "cn=admin,dc=otoroshi,dc=tools" -f bootstrap.ldif -v\n```\n\n### Create an Authentication configuration\n\n- Go ahead, and navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new }\n- Click on the cog icon on the top right\n- Then `Authentication configs` button\n- And add a new configuration by clicking on the `Add item` button\n- Select the `Ldap auth. 
provider` in the type selector field\n- Set a basic name and description\n- Then set `ldap://localhost:389` as `LDAP Server URL` and `dc=otoroshi,dc=tools` as `Search Base`\n- Create a group filter (in the next part, we'll change this filter to spread users across different groups with given rights) with \n - `objectClass=groupOfNames` as `Group filter` \n - All as `Tenant`\n - All as `Team`\n - Read/Write as `Rights`\n- Set the search filter as `(uid=${username})`\n- Set `cn=admin,dc=otoroshi,dc=tools` as `Admin username`\n- Set `otoroshi` as `Admin password`\n- At the bottom of the page, disable the `secure` button (we're using http here, and the secure flag would prevent the cookie from being sent over a non-HTTPS channel)\n\n\n At this point, your configuration should be similar to :\n \n\n\n\n> Don't forget to save your configuration at the bottom of the page before leaving it.\n\n- Test the connection by clicking on the `Test admin connection` button. This should show an `It works!` message\n\n- Finally, test a user connection with `john/password` or `baz/password` as credentials. This should show an `It works!` message\n\n> Don't forget to save your configuration at the bottom of the page before leaving it.\n\n\n### Connect to Otoroshi with LDAP authentication\n\nTo secure Otoroshi with your LDAP configuration, we have to register an **Authentication configuration** as a BackOffice Auth. configuration.\n\n- Navigate to the **danger zone** (by clicking on the cog on the top right and selecting Danger zone)\n- Scroll to the **BackOffice auth. 
settings**\n- Select your last Authentication configuration (created in the previous section)\n- Save the global configuration with the button on the top right\n\n### Testing your configuration\n\n- Disconnect from your instance\n- Then click on the **Login using third-party** button (or navigate to @link:[http://otoroshi.oto.tools:8080/backoffice/auth0/login](http://otoroshi.oto.tools:8080/backoffice/auth0/login) { open=new })\n- Set `john/password` or `baz/password` as credentials\n\n> A fallback solution is always available in the event of a bad authentication configuration. By going to http://otoroshi.oto.tools:8080/bo/simple/login, the administrators will be able to redefine the configuration.\n\n\n#### Secure an app with LDAP authentication\n\nOnce the configuration is done, you can secure any Otoroshi route. \n\n- Navigate to any created route\n- Add the `Authentication` plugin to your route\n- Select your Authentication config inside the list\n- Save your configuration\n\nNow try to call your route. The login module should appear.\n\n#### Manage LDAP user rights on Otoroshi\n\nFor each group filter, you can assign a list of rights:\n\n- on an `Organization`\n- on a `Team`\n- and a level of rights : `Read`, `Write` or `Read/Write`\n\n\nStart by navigating to your authentication configuration (created in the @ref:[previous](#create-an-authentication-configuration) step).\n\nThen, replace the values of the `Mapping group filter` field to match LDAP groups with Otoroshi rights.\n\n\n\n\nWith this configuration, Baz is an administrator of Otoroshi with full rights (read / write) on all organizations.\n\nConversely, John can't see any configuration pages (like the danger zone) because he has only read rights on Otoroshi.\n\nYou can easily test this behaviour by @ref:[testing](#testing-your-configuration) with both credentials.\n\n\n#### Advanced usage of LDAP Authentication\n\nIn the previous section, we have defined rights for each LDAP group. 
But in some cases, we want finer granularity, like setting rights for a specific user. The last 4 fields of the authentication form cover this. \n\nLet's start by adding a few properties for each connected user with `Extra metadata`.\n\n```json\n// Add this configuration in the extra metadata part\n{\n "provider": "OpenLDAP"\n}\n```\n\nThe next field, `Data override`, is merged with the extra metadata when a user connects to a `private app` or to the UI (inside Otoroshi, a private app is a service secured by an authentication module). The `Email field name` is configured to match the `mail` field from the LDAP user data.\n\n```json \n{\n "john@otoroshi.tools": {\n "stage_name": "Will"\n }\n}\n```\n\nIf you try to connect to an app with this configuration, the resulting user profile should be :\n\n```json\n{\n ...,\n "metadata": {\n "lastname": "Willy",\n "stage_name": "Will"\n }\n}\n```\n\nLet's try to increase John's rights with the `Additional rights group`.\n\nThis field supports the creation of virtual groups. A virtual group is composed of a list of users and a list of rights for each team/organization.\n\n```json\n// increase_john_rights is a virtual group which adds full access rights for john \n{\n "increase_john_rights": {\n "rights": [\n {\n "tenant": "*:rw",\n "teams": [\n "*:rw"\n ]\n }\n ],\n "users": [\n "john@otoroshi.tools"\n ]\n }\n}\n```\n\nThe last field, `Rights override`, is useful when you want to erase a user's rights and replace them with specific ones. This field is the last to be applied on the user rights. 
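The rights entries used throughout these examples follow the pattern `<name>:<level>`, where the level is `r`, `w` or `rw` and `*` matches any organization or team. As a quick illustration of how such an entry can be read (a hypothetical helper for this tutorial, not Otoroshi code):

```sh
# Hypothetical helper: does a right entry such as "default:rw" or "*:r"
# grant write access? The level is everything after the last colon.
can_write() {
  case "${1##*:}" in
    *w*) return 0 ;;
    *)   return 1 ;;
  esac
}

can_write 'default:rw' && echo 'write allowed on default'
can_write '*:r'        || echo 'wildcard entry is read-only'
```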
\n\nTo summarize, when John connects to Otoroshi, he first receives read-only rights on the default Organization (from **Mapping group filter**), then he is promoted to the administrator role (from **Additional rights group**), and finally his rights are reset by the last field, **Rights override**, to read-only.\n\n```json \n{\n "john@otoroshi.tools": [\n {\n "tenant": "*:r",\n "teams": [\n "*:r"\n ]\n }\n ]\n}\n```\n"},{"name":"secure-the-communication-between-a-backend-app-and-otoroshi.md","id":"/how-to-s/secure-the-communication-between-a-backend-app-and-otoroshi.md","url":"/how-to-s/secure-the-communication-between-a-backend-app-and-otoroshi.html","title":"Secure the communication between a backend app and Otoroshi","content":"# Secure the communication between a backend app and Otoroshi\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\nLet's create a new route with the Otoroshi challenge plugin enabled.\n\n```sh\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-H "Content-type: application/json" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n "name": "myapi",\n "frontend": {\n "domains": ["myapi.oto.tools"]\n },\n "backend": {\n "targets": [\n {\n "hostname": "localhost",\n "port": 8081,\n "tls": true\n }\n ]\n },\n "plugins": [\n {\n "enabled": true,\n "plugin": "cp:otoroshi.next.plugins.OtoroshiChallenge",\n "config": {\n "version": 2,\n "ttl": 30,\n "request_header_name": "Otoroshi-State",\n "response_header_name": "Otoroshi-State-Resp",\n "algo_to_backend": {\n "type": "HSAlgoSettings",\n "size": 512,\n "secret": "secret",\n "base64": false\n },\n "algo_from_backend": {\n "type": "HSAlgoSettings",\n "size": 512,\n "secret": "secret",\n "base64": false\n },\n "state_resp_leeway": 10\n }\n }\n ]\n}\nEOF\n```\n\nLet's use the following application, developed in NodeJS, which supports both 
versions of the exchange protocol.\n\nClone this @link:[repository](https://github.com/MAIF/otoroshi/blob/master/demos/challenge) and install the dependencies.\n\n```sh\ngit clone 'git@github.com:MAIF/otoroshi.git' --depth=1\ncd ./otoroshi/demos/challenge\nnpm install\nPORT=8081 node server.js\n```\n\nThe last command should return : \n\n```sh\nchallenge-verifier listening on http://0.0.0.0:8081\n```\n\nThis project runs an Express server with one middleware. The middleware handles each request and checks whether the state token header is present. By default, the application expects the incoming token in the `Otoroshi-State` header and sets the `Otoroshi-State-Resp` header on the returned response. \n\nTry to call your service via http://myapi.oto.tools:8080/. This should return a successful response with all headers received by the backend app. \n\nNow try to disable the middleware in the nodejs file by commenting the following line. \n\n```js\n// app.use(OtoroshiMiddleware());\n```\n\nCall your service again. This time, Otoroshi rejects the response from your backend service and returns:\n\n```sh\nDownstream microservice does not seems to be secured. Cancelling request !\n```"},{"name":"secure-with-apikey.md","id":"/how-to-s/secure-with-apikey.md","url":"/how-to-s/secure-with-apikey.html","title":"Secure an api with api keys","content":"# Secure an api with api keys\n\n### Before you start\n\n@@include[fetch-and-start.md](../includes/fetch-and-start.md) { #init }\n\n### Create a simple route\n\n**From UI**\n\n1. Navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/routes](http://otoroshi.oto.tools:8080/bo/dashboard/routes) { open=new } and click on the `create new route` button\n2. Give a name to your route\n3. Save your route\n4. Set `myservice.oto.tools` as frontend domains\n5. 
Set `https://mirror.otoroshi.io` as backend target (hostname: `mirror.otoroshi.io`, port: `443`, Tls: `Enabled`)\n\n**From Admin API**\n\n```sh\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-H "Content-type: application/json" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n "id": "myservice",\n "name": "myapi",\n "frontend": {\n "domains": ["myservice.oto.tools"]\n },\n "backend": {\n "targets": [\n {\n "hostname": "mirror.otoroshi.io",\n "port": 443,\n "tls": true\n }\n ]\n }\n}\nEOF\n```\n\n### Secure routes with api key\n\nBy default, a route is public. In our case, we want to secure all paths starting with `/api` and leave all others unauthenticated.\n\nLet's add a new plugin, called `Apikeys`, to our route. Search for it in the list of plugins, then add it to the flow.\nOnce done, restrict its scope by setting `/api` in the `Informations > include` section.\n\n**From Admin API**\n\n```sh\ncurl -X PUT http://otoroshi-api.oto.tools:8080/api/routes/myservice \\\n-H "Content-type: application/json" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n "id": "myservice",\n "name": "myapi",\n "frontend": {\n "domains": ["myservice.oto.tools"]\n },\n "backend": {\n "targets": [\n {\n "hostname": "mirror.otoroshi.io",\n "port": 443,\n "tls": true\n }\n ]\n },\n "plugins": [\n {\n "enabled": true,\n "plugin": "cp:otoroshi.next.plugins.ApikeyCalls",\n "include": [\n "/api"\n ],\n "config": {\n "validate": true,\n "mandatory": true,\n "wipe_backend_request": true,\n "update_quotas": true\n }\n }\n ]\n}\nEOF\n```\n\nNavigate to @link:[http://myservice.oto.tools:8080/api/test](http://myservice.oto.tools:8080/api/test) { open=new } again. 
If the plugin is configured, you should get a `Service Not found` error.\n\nThis expected error on `/api/test` indicates that an api key is required to access this part of the backend service.\n\nNavigate to any other route not starting with `/api/`, like @link:[http://myservice.oto.tools:8080/test/bar](http://myservice.oto.tools:8080/test/bar) { open=new }\n\n\n### Generate an api key to request secure services\n\nNavigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/apikeys/add](http://otoroshi.oto.tools:8080/bo/dashboard/apikeys/add) { open=new } or click on the **Add apikey** button on the sidebar.\n\nThe only required fields of an Otoroshi api key are : \n\n* `ApiKey id`\n* `ApiKey Secret`\n* `ApiKey Name`\n\nThese fields are automatically generated by Otoroshi. However, you can override these values and add an additional description.\n\nTo simplify the rest of the tutorial, set the values:\n\n* `my-first-api-key-id` as `ApiKey Id`\n* `my-first-api-key-secret` as `ApiKey Secret`\n\nClick on the **Create and stay on this ApiKey** button at the bottom of the page.\n\nNow that you have created the key, it's time to call our previously created service with it.\n\nOtoroshi supports two methods to achieve that. 
\nThe first is passing the Otoroshi api key in two headers : `Otoroshi-Client-Id` and `Otoroshi-Client-Secret` (these header names can be overridden on each service).\nThe second is passing the Otoroshi api key in the authentication header (basically the `Authorization` header) as a Basic-encoded value.\n\nLet's go ahead and call our service :\n\n```sh\ncurl -X GET \\\n -H 'Otoroshi-Client-Id: my-first-api-key-id' \\\n -H 'Otoroshi-Client-Secret: my-first-api-key-secret' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\nAnd with the second method :\n\n```sh\ncurl -X GET \\\n -H 'Authorization: Basic bXktZmlyc3QtYXBpLWtleS1pZDpteS1maXJzdC1hcGkta2V5LXNlY3JldA==' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\n> Tip : to easily fill in your headers, you can jump to the `Call examples` section in each api key view. In this section the header names are the default values and the service url is not set. You have to adapt these lines to your case. \n\n### Override default header names for a route\n\nIn some cases, we want to change the default header names (and it's quite a good idea).\n\nLet's start by navigating to the `Apikeys` plugin in the Designer of our route.\n\nThe first values to change are the header names used to read the api key from the client. 
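Before changing the extractor, a quick aside on the second method shown earlier: the Basic `Authorization` value is nothing more than `clientId:clientSecret`, Base64-encoded, so you can rebuild it yourself (assuming the coreutils `base64` tool is available):

```sh
# The Basic Authorization value is just "clientId:clientSecret" in Base64
creds='my-first-api-key-id:my-first-api-key-secret'
printf 'Basic %s\n' "$(printf '%s' "$creds" | base64 | tr -d '\n')"
# → Basic bXktZmlyc3QtYXBpLWtleS1pZDpteS1maXJzdC1hcGkta2V5LXNlY3JldA==
```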
Start by clicking on `extractors > CustomHeaders` and set the following values :\n\n* `api-key-header-id` as `Custom client id header name`\n* `api-key-header-secret` as `Custom client secret header name`\n\nSave the route, and call the service again.\n\n```sh\ncurl -X GET \\\n -H 'Otoroshi-Client-Id: my-first-api-key-id' \\\n -H 'Otoroshi-Client-Secret: my-first-api-key-secret' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\nThis should output an error because Otoroshi is expecting the api key in other headers.\n\n```json\n{\n "Otoroshi-Error": "No ApiKey provided"\n}\n```\n\nCall the service once again, this time with the changed header names.\n\n```sh\ncurl -X GET \\\n -H 'api-key-header-id: my-first-api-key-id' \\\n -H 'api-key-header-secret: my-first-api-key-secret' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\nAll other services will continue to accept the api keys with the `Otoroshi-Client-Id` and `Otoroshi-Client-Secret` headers, whereas our service will accept the `api-key-header-id` and `api-key-header-secret` headers.\n\n### Accept only api keys with expected values\n\nBy default, a secure service only accepts requests with an api key. But any generated api key is eligible to call our service, and in some cases we want to authorize only a few of them.\n\nYou can restrict the list of accepted api keys by giving a list of `metadata` and/or `tags`. Each api key has a list of `tags` and `metadata`, which can be used by Otoroshi to validate a request with an api key. All api key metadata/tags can be forwarded to your service (see the `Otoroshi Challenge` section of a service to get more information about the `Otoroshi info. token`).\n\nLet's start by only accepting api keys with the `otoroshi` tag.\n\nClick on the `ApiKeys` plugin, and enable the `Routing` section. 
These constraints guarantee that a request will only be transmitted if all the constraints are validated.\n\nIn our first case, set `otoroshi` in the `One Tag in` array and save the route.\nThen call our service with:\n```sh\ncurl -X GET \\\n -H 'Otoroshi-Client-Id: my-first-api-key-id' \\\n -H 'Otoroshi-Client-Secret: my-first-api-key-secret' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\nThis should output:\n```json\n// Error reason: our api key doesn't contain the expected tag.\n{\n \"Otoroshi-Error\": \"Bad API key\"\n}\n```\n\nNavigate to the edit page of our api key, and jump to the `Metadata and tags` section.\nIn this section, add `otoroshi` to the `Tags` array, then save the api key. Make the call once again and you should get a successful response from our backend service.\n\nIn this example, we have limited our service to API keys that have `otoroshi` as a tag.\n\nOtoroshi provides a few other behaviours. For each behaviour, *the api key used should*:\n\n* `All Tags in` : have all of the following tags\n* `No Tags in` : not have one of the following tags\n* `One Tag in` : have at least one of the following tags\n\n---\n\n* `All Meta. in` : have all of the following metadata entries\n* `No Meta. in` : not have one of the following metadata entries\n* `One Meta. 
in` : have at least one of the following metadata entries\n\n---\n\n* `One Meta key in` : have at least one of the following keys in metadata\n* `All Meta key in` : have all of the following keys in metadata\n* `No Meta key in` : not have one of the following keys in metadata"},{"name":"secure-with-oauth1-client.md","id":"/how-to-s/secure-with-oauth1-client.md","url":"/how-to-s/secure-with-oauth1-client.html","title":"Secure an app with OAuth1 client flow","content":"# Secure an app with OAuth1 client flow\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Running a simple OAuth 1 server\n\nIn this tutorial, we'll instantiate an OAuth 1 server with docker. If you already have the necessary setup, skip ahead @ref:[to the next section](#create-an-oauth-1-provider-module).\n\nLet's start by running the server:\n\n```sh\ndocker run -d --name oauth1-server --rm \\\n -p 5000:5000 \\\n -e OAUTH1_CLIENT_ID=2NVVBip7I5kfl0TwVmGzTphhC98kmXScpZaoz7ET \\\n -e OAUTH1_CLIENT_SECRET=wXzb8tGqXNbBQ5juA0ZKuFAmSW7RwOw8uSbdE3MvbrI8wjcbGp \\\n -e OAUTH1_REDIRECT_URI=http://otoroshi.oto.tools:8080/backoffice/auth0/callback \\\n ghcr.io/beryju/oauth1-test-server\n```\n\nWe created an OAuth 1 server which accepts `http://otoroshi.oto.tools:8080/backoffice/auth0/callback` as `Redirect URI`. This URL is used by Otoroshi to retrieve a token and a profile at the end of an authentication process.\n\nAfter this command, the container logs should output:\n```sh \n127.0.0.1 - - [14/Oct/2021 12:10:49] \"HEAD /api/health HTTP/1.1\" 200 -\n```\n\n### Create an OAuth 1 provider module\n\n1. Go ahead, and navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new }\n1. Click on the cog icon on the top right\n1. Then **Authentication configs** button\n1. And add a new configuration when clicking on the **Add item** button\n2. Select the `Oauth1 provider` in the type selector field\n3. 
Set a basic name and description like `oauth1-provider`\n4. Set `2NVVBip7I5kfl0TwVmGzTphhC98kmXScpZaoz7ET` as `Consumer key`\n5. Set `wXzb8tGqXNbBQ5juA0ZKuFAmSW7RwOw8uSbdE3MvbrI8wjcbGp` as `Consumer secret`\n6. Set `http://localhost:5000/oauth/request_token` as `Request Token URL`\n7. Set `http://localhost:5000/oauth/authorize` as `Authorize URL`\n8. Set `http://localhost:5000/oauth/access_token` as `Access token URL`\n9. Set `http://localhost:5000/api/me` as `Profile URL`\n10. Set `http://otoroshi.oto.tools:8080/backoffice/auth0/callback` as `Callback URL`\n11. At the bottom of the page, disable the **secure** button (because we're using http; when enabled, this option prevents the cookie from being sent over an unsecured channel, i.e. anything other than HTTPS)\n\n At this point, your configuration should be similar to:\n\nWith this configuration, the connected user will receive default access to teams and organizations. If you want to change the access rights for a specific user, you can achieve it with the `Rights override` field and a configuration like:\n\n```json\n{\n \"foo@example.com\": [\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\"\n ]\n }\n ]\n}\n```\n\nSave your configuration at the bottom of the page, then navigate to the `danger zone` to use your module as a third-party connection to the Otoroshi UI.\n\n### Connect to Otoroshi with OAuth1 authentication\n\nTo secure Otoroshi with your OAuth1 configuration, we have to register the Authentication configuration as a BackOffice Auth. configuration.\n\n1. Navigate to the **danger zone** (when clicking on the cog on the top right and selecting Danger zone)\n1. Scroll to the **BackOffice auth. settings**\n1. Select your last Authentication configuration (created in the previous section)\n1. Save the global configuration with the button on the top right\n\n### Testing your configuration\n\n1. Disconnect from your instance\n1. 
Then click on the **Login using third-party** button (or navigate to http://otoroshi.oto.tools:8080)\n2. If all is configured, Otoroshi will redirect you to the OAuth 1 server login page\n3. Set `example-user` as user and trust the user by clicking on the `yes` button.\n4. Good work! You're connected to Otoroshi with an OAuth1 module.\n\n> A fallback solution is always available in the event of a bad authentication configuration. By going to http://otoroshi.oto.tools:8080/bo/simple/login, the administrators will be able to redefine the configuration.\n\n### Secure an app with OAuth 1 authentication\n\nWith the previous configuration, you can secure any of your Otoroshi services. \n\nThe first step is to apply a small change to the previous configuration. \n\n1. Navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/auth-configs](http://otoroshi.oto.tools:8080/bo/dashboard/auth-configs) { open=new }.\n2. Create a new auth module configuration with the same values.\n3. Replace the `Callback URL` field with `http://privateapps.oto.tools:8080/privateapps/generic/callback` (we change this value because the redirection of a logged-in user by a third-party server is covered by another Otoroshi route).\n4. Disable the `secure` button (because we're using http; when enabled, this option prevents the cookie from being sent over an unsecured channel, i.e. anything other than HTTPS)\n\n> Note: an Otoroshi service is called a private app when it is protected by an authentication module.\n\nOur example server supports only one redirect URI. 
We need to kill it and create a new container with `http://otoroshi.oto.tools:8080/privateapps/generic/callback` as `OAUTH1_REDIRECT_URI`\n\n```sh\ndocker rm -f oauth1-server\ndocker run -d --name oauth1-server --rm \\\n -p 5000:5000 \\\n -e OAUTH1_CLIENT_ID=2NVVBip7I5kfl0TwVmGzTphhC98kmXScpZaoz7ET \\\n -e OAUTH1_CLIENT_SECRET=wXzb8tGqXNbBQ5juA0ZKuFAmSW7RwOw8uSbdE3MvbrI8wjcbGp \\\n -e OAUTH1_REDIRECT_URI=http://privateapps.oto.tools:8080/privateapps/generic/callback \\\n ghcr.io/beryju/oauth1-test-server\n```\n\nOnce the authentication module and the new container are created, we can set the authentication module on the service.\n\n1. Navigate to any created route\n2. Search the list of plugins for the plugin named `Authentication`\n3. Select your Authentication config in the list\n4. Don't forget to save your configuration.\n\nNow you can try to call your route and see the login page appear.\n\nThen allow access to the user.\n\nIf you get any errors, check that:\n\n* you are on http or https, and whether the **secure cookie option** is enabled on the authentication module\n* your OAuth1 server has the REDIRECT_URI set on **privateapps/...**\n* your server supports the POST or GET OAuth1 flow set on the authentication module\n\nOnce the configuration is working, you can check, when connecting with an Otoroshi admin user, the `Private App session` created (use the cog at the top right of the page, and select `Priv. app sessions`, or navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/sessions/private](http://otoroshi.oto.tools:8080/bo/dashboard/sessions/private) { open=new }).\n\nOne interesting feature is to check the profile of the connected user. 
In our case, when clicking on the `Profile` button of the corresponding user, we should have: \n\n```json\n{\n \"email\": \"foo@example.com\",\n \"id\": 1,\n \"name\": \"test name\",\n \"screen_name\": \"example-user\"\n}\n```"},{"name":"secure-with-oauth2-client-credentials.md","id":"/how-to-s/secure-with-oauth2-client-credentials.md","url":"/how-to-s/secure-with-oauth2-client-credentials.html","title":"Secure an app with OAuth2 client_credential flow","content":"# Secure an app with OAuth2 client_credential flow\n\nOtoroshi makes it easy for your app to implement the [OAuth2 Client Credentials Flow](https://auth0.com/docs/authorization/flows/client-credentials-flow). \n\nWith machine-to-machine (M2M) applications, the system authenticates and authorizes the app rather than a user. With the client credentials flow, applications pass along their Client ID and Client Secret to authenticate themselves and get a token.\n\n## Deploy the Client Credential Service\n\nThe Client Credential Service must be enabled as a global plugin on your Otoroshi instance. Once enabled, it will expose three endpoints to issue and validate tokens for your routes.\n\nLet's navigate to the danger zone of your otoroshi instance (in our case http://otoroshi.oto.tools:8080), via `top right cog icon / Danger zone` or at [/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone).\n\nTo enable a plugin globally on Otoroshi, you must add it to the `Global Plugins` section.\n\n1. Open the `Global Plugin` section \n2. Click on `enabled` (if not already done)\n3. Search the plugin named `Client Credential Service` of type `Sink` (you need to enable it on the old or new Otoroshi engine, depending on your use case)\n4. 
Inject the default configuration by clicking on the button (if you are using the old Otoroshi engine)\n\nIf you click on the arrow near each plugin, you will see the documentation of the plugin and its default configuration.\n\nThe client credential plugin has 4 parameters by default: \n\n* `domain`: a regex used to expose the three endpoints (`default`: *)\n* `expiration`: duration until the token expires (in ms) (`default`: 3600000)\n* `defaultKeyPair`: a key pair used to sign the jwt token. By default, Otoroshi is deployed with an `otoroshi-jwt-signing` key pair that you can see in the certificates page (`default`: \"otoroshi-jwt-signing\")\n* `secure`: if enabled, Otoroshi will expose these routes only for https requests (`default`: true)\n\nIn this tutorial, we will set the configuration as follows: \n\n* `domain`: oauth.oto.tools\n* `expiration`: 3600000\n* `defaultKeyPair`: otoroshi-jwt-signing\n* `secure`: false\n\nNow that the plugin is running, three routes are exposed on each domain matching the regex.\n\n* `GET /.well-known/otoroshi/oauth/jwks.json` : retrieve all public keys present in Otoroshi\n* `POST /.well-known/otoroshi/oauth/token/introspect` : validate and decode the token \n* `POST /.well-known/otoroshi/oauth/token` : generate a token with the fields provided\n\nOnce the global configuration is saved, we can deploy a simple service to test it.\n\nLet's navigate to the routes page, and create a new route with: \n\n1. `foo.oto.tools` as `domain` in the frontend node\n2. `mirror.otoroshi.io` as hostname in the list of targets of the backend node, and `443` as `port`.\n3. Search the list of plugins and add the `Apikeys` plugin to the flow\n4. In the extractors section of the `Apikeys` plugin, disable the `Basic`, `Client id` and `Custom headers` options.\n5. 
Save your route\n\nLet's make a first call, to check that the jwks are exposed:\n\n```sh\ncurl 'http://oauth.oto.tools:8080/.well-known/otoroshi/oauth/jwks.json'\n```\n\nThe output should look like a list of public keys: \n```json\n{\n \"keys\": [\n {\n \"kty\": \"RSA\",\n \"e\": \"AQAB\",\n \"kid\": \"otoroshi-intermediate-ca\",\n ...\n }\n ...\n ]\n}\n```\n\nLet's make a call to your route. \n\n```sh\ncurl 'http://foo.oto.tools:8080/'\n```\n\nThis should output the expected error: \n```json\n{\n \"Otoroshi-Error\": \"No ApiKey provided\"\n}\n```\n\nThe first step is to generate an api key. Navigate to the api keys page, and create an item with the following values (it will be easier to use them in the next steps)\n\n* `my-id` as `ApiKey Id`\n* `my-secret` as `ApiKey Secret`\n\nThe next step is to get a token by calling the endpoint `http://oauth.oto.tools:8080/.well-known/otoroshi/oauth/token`. The required fields are the grant type, the client id and the client secret corresponding to our generated api key.\n\n```sh\ncurl -X POST http://oauth.oto.tools:8080/.well-known/otoroshi/oauth/token \\\n-H \"Content-Type: application/json\" \\\n-d @- <<'EOF'\n{\n \"grant_type\": \"client_credentials\",\n \"client_id\":\"my-id\",\n \"client_secret\":\"my-secret\"\n}\nEOF\n```\n\nThis request has one more optional field, named `scope`. 
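For instance (purely illustrative, the `read` scope is made up), the same request body with the optional field added would be:

```json
{
  "grant_type": "client_credentials",
  "client_id": "my-id",
  "client_secret": "my-secret",
  "scope": "read"
}
```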
The scope can be used to set a bunch of scopes on the generated access token.\n\nThe output of the last command should look like: \n\n```json\n{\n \"access_token\": \"generated-token-xxxxx\",\n \"token_type\": \"Bearer\",\n \"expires_in\": 3600\n}\n```\n\nNow we can call our api with the generated token\n\n```sh\ncurl 'http://foo.oto.tools:8080/' \\\n -H \"Authorization: Bearer generated-token-xxxxx\"\n```\n\nThis should output a successful response, echoing the request headers, including an `Authorization` field containing the access token.\n\n## Other possible configurations\n\nBy default, Otoroshi generates the access token with the key pair specified in the configuration. But in some cases, you may want a specific key pair per client_id/client_secret.\nThe `jwt-sign-keypair` metadata can be set on any api key with the id of the key pair as value. \n"},{"name":"setup-otoroshi-cluster.md","id":"/how-to-s/setup-otoroshi-cluster.md","url":"/how-to-s/setup-otoroshi-cluster.html","title":"Setup an Otoroshi cluster","content":"# Setup an Otoroshi cluster\n\nIn this tutorial, you will create an Otoroshi cluster.\n\n### Summary \n\n1. Deploy an Otoroshi cluster with one leader and two workers \n2. Add a load balancer in front of the workers \n3. 
Validate the installation by adding a header to the requests\n\nLet's start by downloading the latest jar of Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nThen create an instance of Otoroshi and indicate with the `otoroshi.cluster.mode` property that it will be the leader.\n\n```sh\njava -Dhttp.port=8091 -Dhttps.port=9091 -Dotoroshi.cluster.mode=leader -jar otoroshi.jar\n```\n\nLet's create two Otoroshi workers, exposed on the `:8092/:9092` and `:8093/:9093` ports, setting the leader URL in the `otoroshi.cluster.leader.urls` property.\n\nThe first worker will listen on the `:8092/:9092` ports\n```sh\njava \\\n -Dotoroshi.cluster.worker.name=worker-1 \\\n -Dhttp.port=8092 \\\n -Dhttps.port=9092 \\\n -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0='http://127.0.0.1:8091' -jar otoroshi.jar\n```\n\nThe second worker will listen on the `:8093/:9093` ports\n```sh\njava \\\n -Dotoroshi.cluster.worker.name=worker-2 \\\n -Dhttp.port=8093 \\\n -Dhttps.port=9093 \\\n -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0='http://127.0.0.1:8091' -jar otoroshi.jar\n```\n\nOnce launched, you can navigate to the @link:[cluster view](http://otoroshi.oto.tools:8091/bo/dashboard/cluster) { open=new }. The cluster is now configured; you can see the 3 instances and some health information about each instance.\n\nTo complete our installation, we want to spread the incoming requests across Otoroshi worker instances. \n\nIn this tutorial, we will use `haproxy` as a TCP loadbalancer. 
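The `balance roundrobin` strategy configured below simply hands each new connection to the next node in turn. A toy sketch of the idea (not haproxy code, just an illustration in shell):

```shell
# Toy illustration of round-robin balancing: successive requests
# alternate between the available worker nodes.
nodes="worker-1 worker-2"
i=0
for req in 1 2 3 4; do
  set -- $nodes          # load the node list into positional parameters
  shift $(( i % $# ))    # rotate to the next node
  echo "request $req -> $1"
  i=$(( i + 1 ))
done
# prints: request 1 -> worker-1, request 2 -> worker-2, then worker-1, worker-2
```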
If you don't have haproxy installed, you can use docker to run an haproxy instance as explained below.\n\nBut first, we need an haproxy configuration file named `haproxy.cfg` with the following content:\n\n```sh\nfrontend front_nodes_http\n bind *:8080\n mode tcp\n default_backend back_http_nodes\n timeout client 1m\n\nbackend back_http_nodes\n mode tcp\n balance roundrobin\n server node1 host.docker.internal:8092 # (1)\n server node2 host.docker.internal:8093 # (1)\n timeout connect 10s\n timeout server 1m\n```\n\nand run haproxy with this config file\n\nno docker\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #no_docker }\n\ndocker (on linux)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_linux }\n\ndocker (on macos)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_mac }\n\ndocker (on windows)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_windows }\n\nThe last step is to create a route with a rule that adds a specific header value identifying the worker that handled the request.\n\nCreate this route, exposed on `http://api.oto.tools:xxxx`, which will forward all requests to the mirror `https://mirror.otoroshi.io`.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8091/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"api.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AdditionalHeadersIn\",\n \"config\": {\n \"headers\": {\n \"worker-name\": \"${config.otoroshi.cluster.worker.name}\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nOnce created, call the service twice. 
If all is working, the header received by the backend service will alternate between `worker-1` and `worker-2` as value.\n\n```sh\ncurl 'http://api.oto.tools:8080'\n## Response headers\n{\n ...\n \"worker-name\": \"worker-2\"\n ...\n}\n```\n\nThis should output `worker-1`, then `worker-2`, etc. Well done, your loadbalancing is working and your cluster is set up correctly.\n\n\n"},{"name":"tailscale-integration.md","id":"/how-to-s/tailscale-integration.md","url":"/how-to-s/tailscale-integration.html","title":"Tailscale integration","content":"# Tailscale integration\n\n[Tailscale](https://tailscale.com/) is a VPN service that lets you create your own private network based on [Wireguard](https://www.wireguard.com/). Tailscale goes beyond a simple meshed wireguard-based VPN and offers out-of-the-box NAT traversal, third-party identity provider integration, access control, magic DNS and Let's Encrypt integration for the machines on your VPN.\n\nOtoroshi provides some plugins out of the box to work in a [Tailscale](https://tailscale.com/) environment.\n\nBy default, Otoroshi works out of the box when integrated in a `tailnet`, as you can contact other machines using their IP address. But we can go a little bit further.\n\n## tailnet configuration\n\nFirst, go to your tailnet settings on [tailscale.com](https://login.tailscale.com/admin/machines) and open the [DNS tab](https://login.tailscale.com/admin/dns). Here you can find \n\n* your tailnet name: the domain name of all your machines on your tailnet\n* MagicDNS: a way to address your machines by directly using their names\n* HTTPS Certificates: HTTPS certificates provision for all your machines\n\nTo use the Otoroshi Tailscale plugins, you must enable `MagicDNS` and `HTTPS Certificates`.\n\n## Tailscale certificates integration\n\nYou can use Tailscale-generated Let's Encrypt certificates in Otoroshi by using the `Tailscale certificate fetcher job` in the plugins section of the danger zone. 
Once enabled, this job will fetch certificates for domains in `xxxx.ts.net` that belong to your tailnet. \n\nAs usual, the fetched certificates will be available in the [certificates page](http://otoroshi.oto.tools:8080/bo/dashboard/certificates) of Otoroshi.\n\n## Tailscale targets integration\n\nThe following pair of plugins lets you contact Tailscale machines by name, even if there are multiple instances.\n\nWhen you register a machine on a tailnet, you have to provide a name for it, let's say `my-server`. This machine will be addressable in your tailnet as `my-server.tailxxx.ts.net`. But if you have multiple instances of the same server on several machines with the same `my-server` name, their DNS names on the tailnet will be `my-server.tailxxx.ts.net`, `my-server-1.tailxxx.ts.net`, `my-server-2.tailxxx.ts.net`, etc. Using those names in an Otoroshi backend can be tricky if the application has something like autoscaling enabled.\n\nIn that case, you can add the `Tailscale targets job` in the plugins section of the danger zone. Once enabled, this job will periodically fetch the available machines on the tailnet with their names and DNS names. Then, in a route, you can use the `Tailscale select target by name` plugin to tell Otoroshi to loadbalance traffic between all machines that have the name specified in the plugin config, 
instead of their DNS name."},{"name":"tls-termination-using-own-certificates.md","id":"/how-to-s/tls-termination-using-own-certificates.md","url":"/how-to-s/tls-termination-using-own-certificates.html","title":"TLS termination using your own certificates","content":"# TLS termination using your own certificates\n\nThe goal of this tutorial is to expose a service via https using a certificate generated by openssl.\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\nTry to call the service.\n\n```sh\ncurl 'http://myservice.oto.tools:8080'\n```\n\nThis should output something like\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.opunmaif.io\",\n \"accept\": \"*/*\",\n \"user-agent\": \"curl/7.64.1\",\n \"x-forwarded-port\": \"443\",\n \"opun-proxied-host\": \"mirror.otoroshi.io\",\n \"otoroshi-request-id\": \"1463145856319359618\",\n \"otoroshi-proxied-host\": \"myservice.oto.tools:8080\",\n \"opun-gateway-request-id\": \"1463145856554240100\",\n \"x-forwarded-proto\": \"https\"\n },\n \"body\": \"\"\n}\n```\n\nLet's try to call the service over https.\n\n```sh\ncurl 'https://myservice.oto.tools:8443'\n```\n\nThis should output\n\n```sh\ncurl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to myservice.oto.tools:8443\n```\n\nTo fix it, we have to generate a certificate matching the domain `myservice.oto.tools` and import it in Otoroshi.\n\n> If you already have a certificate, you can skip the next set of commands and directly import your certificate in Otoroshi\n\nWe will use openssl to generate a private key and a self-signed certificate.\n\n```sh\nopenssl genrsa -out myservice.key 4096\n# remove pass phrase\nopenssl rsa -in myservice.key -out myservice.key\n# generate the self-signed certificate\nopenssl req -new -x509 -sha256 -days 730 -key myservice.key -out myservice.cer -subj \"/CN=myservice.oto.tools\"\n```\n\nCheck the content of the certificate \n\n```sh\nopenssl x509 -in 
myservice.cer -text\n```\n\nThis should contain something like\n\n```sh\nCertificate:\n Data:\n Version: 1 (0x0)\n Serial Number: 9572962808320067790 (0x84d9fef455f188ce)\n Signature Algorithm: sha256WithRSAEncryption\n Issuer: CN=myservice.oto.tools\n Validity\n Not Before: Nov 23 14:25:55 2021 GMT\n Not After : Nov 23 14:25:55 2022 GMT\n Subject: CN=myservice.oto.tools\n Subject Public Key Info:\n Public Key Algorithm: rsaEncryption\n Public-Key: (4096 bit)\n Modulus:\n...\n```\n\nOnce generated, go back to Otoroshi and navigate to the certificates management page (`top right cog icon / SSL/TLS certificates` or at @link:[`/bo/dashboard/certificates`](http://otoroshi.oto.tools:8080/bo/dashboard/certificates)) and click on `Add item`.\n\nSet `myservice-certificate` as `name` and `description`.\n\nDrop the `myservice.cer` file or copy the content to the `Certificate full chain` field.\n\nDo the same action for the `myservice.key` file in the `Certificate private key` field.\n\nSet your passphrase in the `private key password` field if you added one.\n\nLet's try the same call to the service.\n\n```sh\ncurl 'https://myservice.oto.tools:8443'\n```\n\nAn error should occur because the received server certificate is not trusted\n\n```sh\ncurl: (60) SSL certificate problem: self signed certificate\nMore details here: https://curl.haxx.se/docs/sslcerts.html\n\ncurl failed to verify the legitimacy of the server and therefore could not\nestablish a secure connection to it. 
To learn more about this situation and\nhow to fix it, please visit the web page mentioned above.\n```\n\nEnd this tutorial by trusting the server certificate \n\n```sh\ncurl 'https://myservice.oto.tools:8443' --cacert myservice.cer\n```\n\nThis should finally output\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.opunmaif.io\",\n \"accept\": \"*/*\",\n \"user-agent\": \"curl/7.64.1\",\n \"x-forwarded-port\": \"443\",\n \"opun-proxied-host\": \"mirror.otoroshi.io\",\n \"otoroshi-request-id\": \"1463158439730479893\",\n \"otoroshi-proxied-host\": \"myservice.oto.tools:8443\",\n \"opun-gateway-request-id\": \"1463158439558515871\",\n \"x-forwarded-proto\": \"https\",\n \"sozu-id\": \"01FN6MGKSYZNJYHEMP4R5PJ4Q5\"\n },\n \"body\": \"\"\n}\n```\n\n"},{"name":"tls-using-lets-encrypt.md","id":"/how-to-s/tls-using-lets-encrypt.md","url":"/how-to-s/tls-using-lets-encrypt.html","title":"TLS termination using Let's Encrypt","content":"# TLS termination using Let's Encrypt\n\nAs you know, Otoroshi is capable of doing TLS termination for your services. You can import your own certificates, generate certificates from scratch and you can also use the @link:[ACME protocol](https://datatracker.ietf.org/doc/html/rfc8555) to generate certificates. One of the most popular services offering ACME certificate creation is @link:[Let's Encrypt](https://letsencrypt.org/).\n\n@@@ warning\nIn order to make this tutorial work, your otoroshi instance MUST be accessible from the internet in order to be reachable by the Let's Encrypt ACME process. Also, the domain name used for the certificates MUST be configured to reach your otoroshi instance at your DNS provider level.\n@@@\n\n@@@ note\nThis tutorial works with any ACME provider under the same rules. Your otoroshi instance MUST be accessible by the ACME process. 
Also, the domain name used for the certificates MUST be configured to reach your otoroshi instance at your DNS provider level.\n@@@\n\n## Setup let's encrypt on otoroshi\n\nGo on the danger zone page by clicking on the [`cog icon / Danger Zone`](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone). Scroll to the `Let's Encrypt settings` section. Enable it, and specify the address of the ACME server (for production Let's Encrypt it's `acme://letsencrypt.org`, for testing, it's `acme://letsencrypt.org/staging`. Any ACME server address should work). You can also add one or more email addresses or contact urls that will be included in your Let's Encrypt account. You don't have to fill the `public/private key` inputs as they will be automatically generated on first usage.\n\n## Creating let's encrypt certificates from FQDNs\n\nYou can go to the certificates page by clicking on the [`cog icon / SSL/TLS Certificates`](http://otoroshi.oto.tools:8080/bo/dashboard/certificates). Here, click on the `+ Let's Encrypt certificate` button. A popup will show up to ask you the FQDN that you want for your certificate. Once done, click on the `Create` button. A few moments later, you will be redirected to a brand new certificate generated by Let's Encrypt. You can now enjoy accessing your service behind the FQDN with TLS.\n\n## Creating let's encrypt certificates from a service\n\nYou can go to any service page and enable the flag `Issue Let's Encrypt cert.`. Do not forget to save your service. 
A few moments later, the certificate will be available in the certificates page and you will be able to enjoy accessing your service with TLS.\n"},{"name":"wasm-manager-installation.md","id":"/how-to-s/wasm-manager-installation.md","url":"/how-to-s/wasm-manager-installation.html","title":"Deploy your own WASM Manager","content":"# Deploy your own WASM Manager\n\n@@@ div { .centered-img }\n\n@@@\n\n## Manager's configuration\n\nIn the @ref:[WASM tutorial](./wasm-usage.md) we used existing WASM files. These files have been generated with the WASM Manager solution provided by the Otoroshi team. \n\nThe wasm manager is a code editor in the browser that will help you write and compile your plugin to WASM using Rust or Assembly Script. \nYou can install your own manager instance using a docker image.\n\n```sh\ndocker run -p 5001:5001 maif/otoroshi-wasm-manager\n```\n\nThis should download and run the latest version of the manager. Once launched, you can navigate to [http://localhost:5001](http://localhost:5001) (or any other bound port). \n\nThis should show an authentication error. The manager can run with or without authentication, and you can configure it using the `AUTH_MODE` environment variable (`AUTH` or `NO_AUTH` values).\n\nThe manager is configurable by environment variables. The manager uses an object storage (S3 compatible) as its storage solution. 
\nYou can configure your S3 with the four variables `S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY`, `S3_ENDPOINT` and `S3_BUCKET`.\n\nFeel free to change the following variables:\n\n\n| NAME | DEFAULT VALUE | DESCRIPTION |\n| ------------------------- | ------------------ | -------------------------------------------------------------------------- |\n| MANAGER_PORT | 5001 | The manager will be exposed on this port |\n| MANAGER_ALLOWED_DOMAINS | otoroshi.oto.tools | Array of origins, separated by commas, that are allowed to call the manager |\n| MANAGER_MAX_PARALLEL_JOBS | 2 | Number of parallel jobs to compile plugins |\n\nThe following variables are useful to bind the manager with Otoroshi and to run it behind Otoroshi (we will use them in the next section of this tutorial).\n\n| NAME | DEFAULT VALUE | DESCRIPTION |\n| ---------------------- | ----------------------- | ------------------------------------------------------ |\n| OTOROSHI_USER_HEADER | Otoroshi-User | Header used to extract the user from Otoroshi request |\n| OTOROSHI_TOKEN_SECRET | veryverysecret | the secret used to sign the user token |\n\n## Tutorial\n\n1. [Before you start](#before-you-start)\n2. [Deploy the manager using Docker](#deploy-the-manager-using-docker)\n3. [Create a route to expose and protect the manager with authentication](#create-a-route-to-expose-and-protect-the-manager-with-authentication)\n4. [Create a first validator plugin using the manager](#create-a-first-validator-plugin-using-the-manager)\n5. [Configure the danger zone of Otoroshi to bind Otoroshi and the manager](#configure-the-danger-zone-of-otoroshi-to-bind-otoroshi-and-the-manager)\n6. [Create a route using the generated wasm file](#create-a-route-using-the-generated-wasm-file)\n7. 
[Test your route](#test-your-route)\n\nAfter completing these steps, you will have a running Otoroshi instance and your own WASM manager linked together.\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Deploy the manager using Docker\n\nLet's start by deploying an S3 instance. If you already have one, you can skip the next section.\n\n```sh\ndocker network create manager-network\ndocker run --name s3Server -p 8000:8000 -e SCALITY_ACCESS_KEY_ID=access_key -e SCALITY_SECRET_ACCESS_KEY=secret --net manager-network scality/s3server \n```\n\nOnce launched, we can run a manager instance.\n\n```sh\ndocker run -d --net manager-network \\\n --name wasm-manager \\\n -p 5001:5001 \\\n -e \"MANAGER_PORT=5001\" \\\n -e \"AUTH_MODE=AUTH\" \\\n -e \"MANAGER_MAX_PARALLEL_JOBS=2\" \\\n -e \"MANAGER_ALLOWED_DOMAINS=otoroshi.oto.tools,wasm-manager.oto.tools,localhost:5001\" \\\n -e \"OTOROSHI_USER_HEADER=Otoroshi-User\" \\\n -e \"OTOROSHI_TOKEN_SECRET=veryverysecret\" \\\n -e \"S3_ACCESS_KEY_ID=access_key\" \\\n -e \"S3_SECRET_ACCESS_KEY=secret\" \\\n -e \"S3_FORCE_PATH_STYLE=true\" \\\n -e \"S3_ENDPOINT=http://host.docker.internal:8000\" \\\n -e \"S3_BUCKET=wasm-manager\" \\\n -e \"DOCKER_USAGE=true\" \\\n maif/otoroshi-wasm-manager\n```\n\nOnce launched, go to [http://localhost:5001](http://localhost:5001). If everything is working as intended, \nyou should see, at the bottom right of your screen, the following error\n\n```\nYou're not authorized to access to manager\n```\n\nThis error indicates that the manager could not authorize the request. \nActually, the manager expects to be reachable only through Otoroshi (this is what `AUTH_MODE=AUTH` means). \nSo we need to create a route in Otoroshi to properly expose our manager to the rest of the world.\n\n### Create a route to expose and protect the manager with authentication\n\nWe are going to use the admin API of Otoroshi to create the route. 
The configuration of the route is:\n\n* `wasm-manager` as name\n* `wasm-manager.oto.tools` as exposed domain\n* `localhost:5001` as target without the TLS option enabled\n\nWe need to add two more plugins: one to require authentication from users and one to pass the logged-in user to the manager. \nThese plugins are named `Authentication` and `Otoroshi Info. token`. \nThe Authentication plugin will use an in-memory authentication with one default user (wasm@otoroshi.io/password). \nThe second plugin will be configured with the value of the `OTOROSHI_USER_HEADER` environment variable. \n\nLet's create the authentication module (if you are interested in how authentication modules work, \nyou should read the other tutorials about How to secure an app). \nThe following command creates an in-memory authentication module with a user.\n\n```sh\ncurl -X POST \"http://otoroshi-api.oto.tools:8080/api/auths\" \\\n-u \"admin-api-apikey-id:admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"id\": \"wasm_manager_in_memory\",\n \"type\": \"basic\",\n \"name\": \"In memory authentication\",\n \"desc\": \"Group of static users\",\n \"users\": [\n {\n \"name\": \"User Otoroshi\",\n \"password\": \"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\n \"email\": \"wasm@otoroshi.io\"\n }\n ],\n \"sessionCookieValues\": {\n \"httpOnly\": true,\n \"secure\": false\n }\n}\nEOF\n```\n\nOnce created, we can create the route to expose the manager.\n\n```sh\ncurl -X POST \"http://otoroshi-api.oto.tools:8080/api/routes\" \\\n-H \"Content-type: application/json\" \\\n-u \"admin-api-apikey-id:admin-api-apikey-secret\" \\\n-d @- <<'EOF'\n{\n \"id\": \"wasm-manager\",\n \"name\": \"wasm-manager\",\n \"frontend\": {\n \"domains\": [\"wasm-manager.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"localhost\",\n \"port\": 5001,\n \"tls\": false\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n 
\"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"exclude\": [\n \"/plugins\",\n \"/wasm/.*\"\n ],\n \"config\": {\n \"pass_with_apikey\": false,\n \"auth_module\": null,\n \"module\": \"wasm_manager_in_memory\"\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"include\": [\n \"/plugins\",\n \"/wasm/.*\"\n ],\n \"config\": {}\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.OtoroshiInfos\",\n \"config\": {\n \"version\": \"Latest\",\n \"ttl\": 30,\n \"header_name\": \"Otoroshi-User\",\n \"algo\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"veryverysecret\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nTry to access to the manager with the new domain: http://wasm-manager.oto.tools:8080. \nThis should redirect you to the login page of Otoroshi. Enter the credentials of the user: wasm@otoroshi.io/password\nCongratulations, you now have a secure manager.\n\n### Create a first validator plugin using the manager\n\nIn the previous part, we secured the manager. Now, is the time to create your first simple plugin, written in Rust. \nThis plugin will apply a check on the request and ensure that the headers contains the key-value foo:bar.\n\n1. On the right top of the screen, click on the plus icon to create a new plugin\n2. Select the Rust language\n3. Call it `my-first-validator` and press the enter key\n4. Click on the new plugin called `my-first-validator`\n\nBefore continuing, let's explain the different files already present in your plugin. \n\n* `types.rs`: this file contains all Otoroshi structures that the plugin can receive and respond\n* `lib.rs`: this file is the core of your plugin. It must contain at least one **function** which will be called by Otoroshi when executing the plugin.\n* `Cargo.toml`: for each rust package, this file is called its manifest. It is written in the TOML format. 
\nIt contains metadata that is needed to compile the package. You can read more information about it [here](https://doc.rust-lang.org/cargo/reference/manifest.html)\n\nYou can write a plugin for different use cases in Otoroshi: validate an access, transform a request or generate a target. \nDepending on the plugin type,\nyou need to change your plugin's context and response types accordingly.\n\nLet's take the example of creating a validator plugin. If we search in the types.rs file, we can find the corresponding \ntypes named `WasmAccessValidatorContext` and `WasmAccessValidatorResponse`.\nThese types must be used in the declaration of the main **function** (named execute in our case).\n\n```rust\n... \npub fn execute(Json(context): Json<types::WasmAccessValidatorContext>) -> FnResult<Json<types::WasmAccessValidatorResponse>> {\n \n}\n```\n\nWith this code, we declare a function named `execute`, which takes a context of type WasmAccessValidatorContext as parameter, \nand which returns an object of type WasmAccessValidatorResponse. Now, let's add the check of the foo header.\n\n```rust\n... \npub fn execute(Json(context): Json<types::WasmAccessValidatorContext>) -> FnResult<Json<types::WasmAccessValidatorResponse>> {\n match context.request.headers.get(\"foo\") {\n Some(foo) => if foo == \"bar\" {\n Ok(Json(types::WasmAccessValidatorResponse { \n result: true,\n error: None\n }))\n } else {\n Ok(Json(types::WasmAccessValidatorResponse { \n result: false, \n error: Some(types::WasmAccessValidatorError { \n message: format!(\"{} is not authorized\", foo).to_owned(), \n status: 401\n }) \n }))\n },\n None => Ok(Json(types::WasmAccessValidatorResponse { \n result: false, \n error: Some(types::WasmAccessValidatorError { \n message: \"you're not authorized\".to_owned(), \n status: 401\n }) \n }))\n }\n}\n```\n\nFirst, we check whether the foo header is present; if not, we return an object of type WasmAccessValidatorError.\nOtherwise, we continue by checking its value. 
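The same decision logic can be exercised outside the wasm runtime as plain Rust, which is handy for quick unit tests. This is only a sketch: the `validate` function and its tuple return type are simplified stand-ins for the real `WasmAccessValidatorResponse`/`WasmAccessValidatorError` types from types.rs.

```rust
use std::collections::HashMap;

// Simplified stand-in for the plugin's check: returns the same information as
// WasmAccessValidatorResponse (allowed flag + optional (status, message) error).
fn validate(headers: &HashMap<String, String>) -> (bool, Option<(u16, String)>) {
    match headers.get("foo") {
        Some(foo) if foo == "bar" => (true, None),
        Some(foo) => (false, Some((401, format!("{} is not authorized", foo)))),
        None => (false, Some((401, "you're not authorized".to_string()))),
    }
}

fn main() {
    let mut headers = HashMap::new();
    // No foo header at all: access is denied.
    assert_eq!(validate(&headers), (false, Some((401, "you're not authorized".to_string()))));

    // foo: bar is the only accepted value.
    headers.insert("foo".to_string(), "bar".to_string());
    assert_eq!(validate(&headers), (true, None));

    // Any other value is rejected with a 401 and the value in the message.
    headers.insert("foo".to_string(), "baz".to_string());
    assert_eq!(validate(&headers).1.unwrap().1, "baz is not authorized");
}
```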
In this example, we have used three types, already declared for you in the types.rs file:\n`WasmAccessValidatorResponse`, `WasmAccessValidatorError` and `WasmAccessValidatorContext`. \n\nAt this point, the content of your lib.rs file should be:\n\n```rust\nmod types;\n\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn execute(Json(context): Json<types::WasmAccessValidatorContext>) -> FnResult<Json<types::WasmAccessValidatorResponse>> {\n match context.request.headers.get(\"foo\") {\n Some(foo) => if foo == \"bar\" {\n Ok(Json(types::WasmAccessValidatorResponse { \n result: true,\n error: None\n }))\n } else {\n Ok(Json(types::WasmAccessValidatorResponse { \n result: false, \n error: Some(types::WasmAccessValidatorError { \n message: format!(\"{} is not authorized\", foo).to_owned(), \n status: 401\n }) \n }))\n },\n None => Ok(Json(types::WasmAccessValidatorResponse { \n result: false, \n error: Some(types::WasmAccessValidatorError { \n message: \"you're not authorized\".to_owned(), \n status: 401\n }) \n }))\n }\n}\n```\n\nLet's compile this plugin by clicking on the hammer icon at the top right of your screen. Once done, you can try your built plugin directly in the UI.\nClick on the play button at the top right of your screen, select your plugin and the correct type of the incoming fake context. \nOnce done, click on the run button at the bottom of your screen. This should output an error.\n\n```json\n{\n \"result\": false,\n \"error\": {\n \"message\": \"asd is not authorized\",\n \"status\": 401\n }\n}\n```\n\nLet's edit the fake input context by adding the expected foo header.\n\n```json\n{\n \"request\": {\n \"id\": 0,\n \"method\": \"\",\n \"headers\": {\n \"foo\": \"bar\"\n },\n \"cookies\"\n ...\n```\n\nResubmit the command. It should pass.\n\n### Configure the danger zone of Otoroshi to bind Otoroshi and the manager\n\nNow that we have our compiled plugin, we have to connect Otoroshi with the manager. 
Let's navigate to the danger zone, and add the following values in the WASM manager section:\n\n* `URL`: http://localhost:5001\n* `Apikey id`: admin-api-apikey-id\n* `Apikey secret`: admin-api-apikey-secret\n* `User(s)`: *\n\nThe User(s) property is used by the manager to filter the list of returned plugins (example: wasm@otoroshi.io will only return the list of plugins created by this user). \n\nDon't forget to save the configuration.\n\n### Create a route using the generated wasm file\n\nThe last step of our tutorial is to create the route using the validator. Let's create the route with the following parameters:\n\n```sh\ncurl -X POST \"http://otoroshi-api.oto.tools:8080/api/routes\" \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"id\": \"wasm-route\",\n \"name\": \"wasm-route\",\n \"frontend\": {\n \"domains\": [\"wasm-route.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"localhost\",\n \"port\": 5001,\n \"tls\": false\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.WasmAccessValidator\",\n \"enabled\": true,\n \"config\": {\n \"compiler_source\": \"my-first-validator\",\n \"functionName\": \"execute\"\n }\n }\n ]\n}\nEOF\n```\n\nYou can validate the creation by navigating to the [dashboard](http://otoroshi.oto.tools:8080/bo/dashboard/routes/wasm-route?tab=flow)\n\n### Test your route\n\nRun the two following commands. 
The first should show an unauthorized error and the second should conclude this tutorial.\n\n```sh\ncurl \"http://wasm-route.oto.tools:8080\"\n```\n\nand \n\n```sh\ncurl \"http://wasm-route.oto.tools:8080\" -H \"foo:bar\"\n```\n\nCongratulations, you have successfully written your first validator using your own manager.\n"},{"name":"wasm-usage.md","id":"/how-to-s/wasm-usage.md","url":"/how-to-s/wasm-usage.html","title":"Using wasm plugins","content":"# Using wasm plugins\n\nWebAssembly (WASM) is a simple machine model and executable format with an extensive specification. It is designed to be portable, compact, and execute at or near native speeds. Otoroshi already supports the execution of WASM files by providing different plugins that can be applied on routes. You can find more about those plugins @ref:[here](../topics/wasm-usage.md)\n\nTo simplify the process of WASM creation and usage, Otoroshi provides:\n\n- otoroshi ui integration: a full set of plugins that lets you pick which WASM function to run at any point in a route\n- otoroshi `wasm-manager`: a code editor in the browser that lets you write your plugin in `Rust`, `TinyGo`, `Javascript` or `Assembly Script` without having to think about compiling it to WASM (you can find a complete tutorial about it @ref:[here](../how-to-s/wasm-manager-installation.md))\n\n@@@ div { .centered-img }\n\n@@@\n\n## Tutorial\n\n1. [Before your start](#before-your-start)\n2. [Create the route with the plugin validator](#create-the-route-with-the-plugin-validator)\n3. [Test your validator](#test-your-validator)\n4. [Update the route by replacing the backend with a WASM file](#update-the-route-by-replacing-the-backend-with-a-wasm-file)\n5. 
[WASM backend test](#wasm-backend-test)\n\nAfter completing these steps you will have a route that uses WASM plugins written in Rust.\n\n## Before your start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n## Create the route with the plugin validator\n\nFor this tutorial, we will start with an existing wasm file. The main function of this file will check the value of an http header to allow access or not. You can find this file at [https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/first-validator.wasm](https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/first-validator.wasm)\n\nThe main function of this validator, written in Rust, should look like:\n\nvalidator.rs\n: @@snip [validator.rs](../snippets/wasm-manager/validator.rs) \n\nvalidator.js\n: @@snip [validator.js](../snippets/wasm-manager/validator.js) \n\nvalidator.ts\n: @@snip [validator.ts](../snippets/wasm-manager/validator.ts) \n\nvalidator.go\n: @@snip [validator.go](../snippets/wasm-manager/validator.go) \n\nThe plugin receives the request context from Otoroshi (the matching route, the api key if present, the headers, etc.) as a `WasmAccessValidatorContext` object. \nThen it applies a check on the headers, and responds with an error or success depending on the content of the foo header. 
\nObviously, the previous snippet is an example and the editor allows you to write whatever you want as a check.\n\nLet's create a route that uses the previous wasm file as an access validator plugin:\n\n```sh\ncurl -X POST \"http://otoroshi-api.oto.tools:8080/api/routes\" \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"id\": \"demo-otoroshi\",\n \"name\": \"demo-otoroshi\",\n \"frontend\": {\n \"domains\": [\"demo-otoroshi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\",\n \"enabled\": true\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.WasmAccessValidator\",\n \"enabled\": true,\n \"config\": {\n \"source\": {\n \"kind\": \"http\",\n \"path\": \"https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/first-validator.wasm\",\n \"opts\": {}\n },\n \"memoryPages\": 4,\n \"functionName\": \"execute\"\n }\n }\n ]\n}\nEOF\n```\n\nThis request will apply the following process:\n\n* names the route *demo-otoroshi*\n* creates a frontend exposed on `demo-otoroshi.oto.tools` \n* forwards requests to one target, reachable at `mirror.otoroshi.io` using TLS on port 443\n* adds the *WasmAccessValidator* plugin to the route to validate access based on the foo header\n\nYou can validate the route creation by navigating to the [dashboard](http://otoroshi.oto.tools:8080/bo/dashboard/routes/demo-otoroshi?tab=flow)\n\n## Test your validator\n\n```shell\ncurl \"http://demo-otoroshi.oto.tools:8080\" -I\n```\n\nThis should output the following error:\n\n```\nHTTP/1.1 401 Unauthorized\n```\n\nLet's call the route again, adding the foo header with the bar value.\n\n```shell\ncurl \"http://demo-otoroshi.oto.tools:8080\" -H \"foo:bar\" -I\n```\n\nThis should output the successful 
message:\n\n```\nHTTP/1.1 200 OK\n```\n\n## Update the route by replacing the backend with a WASM file\n\nThe next step in this tutorial is to use a WASM file as the backend of the route. We will use an existing WASM file, available in our wasm demos repository on GitHub. \nThe content of this plugin, called `wasm-target.wasm`, looks like:\n\ntarget.rs\n: @@snip [target.rs](../snippets/wasm-manager/target.rs) \n\ntarget.js\n: @@snip [target.js](../snippets/wasm-manager/target.js) \n\ntarget.ts\n: @@snip [target.ts](../snippets/wasm-manager/target.ts) \n\ntarget.go\n: @@snip [target.go](../snippets/wasm-manager/target.go) \n\nLet's explain this snippet. The purpose of this type of plugin is to return an HTTP response with a status, a body and a map of headers.\n\n1. Includes all public structures from the `types.rs` file. This file contains predefined Otoroshi structures that plugins can manipulate.\n2. Necessary imports. [Extism](https://extism.org/docs/overview)'s goal is to make all software programmable by providing a plug-in system. \n3. Creates a map of new headers that will be merged with incoming request headers.\n4. 
Creates the response object with the map of merged headers, a simple JSON body and a successful status code.\n\nThe file is downloadable [here](https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/wasm-target.wasm).\n\nLet's update the route using this wasm file.\n\n```sh\ncurl -X PUT \"http://otoroshi-api.oto.tools:8080/api/routes/demo-otoroshi\" \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"id\": \"demo-otoroshi\",\n \"name\": \"demo-otoroshi\",\n \"frontend\": {\n \"domains\": [\"demo-otoroshi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\",\n \"enabled\": true\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.WasmAccessValidator\",\n \"enabled\": true,\n \"config\": {\n \"source\": {\n \"kind\": \"http\",\n \"path\": \"https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/first-validator.wasm\",\n \"opts\": {}\n },\n \"memoryPages\": 4,\n \"functionName\": \"execute\"\n }\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.WasmBackend\",\n \"enabled\": true,\n \"config\": {\n \"source\": {\n \"kind\": \"http\",\n \"path\": \"https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/wasm-target.wasm\",\n \"opts\": {}\n },\n \"memoryPages\": 4,\n \"functionName\": \"execute\"\n }\n }\n ]\n}\nEOF\n```\n\nThe response should contain the updated route content.\n\n## WASM backend test\n\nLet's call our route.\n\n```sh\ncurl \"http://demo-otoroshi.oto.tools:8080\" -H \"foo:bar\" -H \"fifi: foo\" -v\n```\n\nThis should output:\n\n```\n* Trying 127.0.0.1:8080...\n* Connected to demo-otoroshi.oto.tools (127.0.0.1) port 8080 (#0)\n> GET / HTTP/1.1\n> Host: demo-otoroshi.oto.tools:8080\n> User-Agent: curl/7.79.1\n> Accept: */*\n> foo:bar\n> 
fifi:foo\n>\n* Mark bundle as not supporting multiuse\n< HTTP/1.1 200 OK\n< foo: bar\n< Host: demo-otoroshi.oto.tools:8080\n<\n* Closing connection 0\n{\"foo\": \"bar\"}\n```\n\nIn this response, we can find the headers sent in the curl command and those added by the wasm plugin.\n\n\n\n"},{"name":"working-with-eureka.md","id":"/how-to-s/working-with-eureka.md","url":"/how-to-s/working-with-eureka.html","title":"Working with Eureka","content":"# Working with Eureka\n\nEureka is a library of Spring Cloud Netflix, which provides two parts to register and discover services.\nGenerally, the services are applications written with Spring but Eureka also provides a way to communicate in REST. The main goals of Eureka are to allow clients to find and communicate with each other without hard-coding the hostname and port.\nAll services are registered in an Eureka Server.\n\nTo work with Eureka, Otoroshi has three different plugins:\n\n* to expose its own Eureka Server instance\n* to discover an existing Eureka Server instance\n* to use an Eureka application as an Otoroshi target and take advantage of all Otoroshi client features (load-balancing, rate limiting, etc...)\n\nLet's split this tutorial into three parts. 
\n\n- Create a simple Spring application that we'll use as an Eureka Client\n- Deploy an implementation of the Otoroshi Eureka Server (using the `Eureka Instance` plugin), register eureka clients and expose them using the `Internal Eureka Server` plugin\n- Deploy a Netflix Eureka Server and use it in Otoroshi to discover apps using the `External Eureka Server` plugin.\n\n\nIn this tutorial: \n\n- [Create an Otoroshi route with the Internal Eureka Server plugin](#create-an-otoroshi-route-with-the-internal-eureka-server-plugin)\n- [Create a simple Eureka Client and register it](#create-a-simple-eureka-client-and-register-it)\n- [Connect to an external Eureka server](#connect-to-an-external-eureka-server)\n\n### Download Otoroshi\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Create an Otoroshi route with the Internal Eureka Server plugin\n\n@@@ note\nWe'll assume that you have an Otoroshi instance exposed on port 8080 with the new Otoroshi engine enabled\n@@@\n\nLet's jump to the routes Otoroshi [view](http://otoroshi.oto.tools:8080/bo/dashboard/routes) and create a new route using the wizard button.\n\nEnter the following values for each step:\n\n1. An Eureka Server instance\n2. Choose the first choice: **BLANK ROUTE** and click on continue\n3. As exposed domain, set `eureka-server.oto.tools/eureka`\n4. As Target URL, set `http://foo.bar` (this value has no importance and will be skipped by the `Eureka Instance` plugin)\n5. Validate the creation\n\nOnce created, you can hide the tester view (which is displayed by default after each route creation) with the arrow at the top right of the screen.\nIn our case, we want to add a new plugin, called `Eureka Instance`, to our feed.\n\nInside the designer view:\n\n1. Search the `Eureka Instance` in the list of plugins.\n2. Add it to the feed by clicking on it\n3. 
Set the eviction timeout to 300 seconds (Otoroshi uses this value to automatically check whether an Eureka client is still up; otherwise Otoroshi will evict the eureka client from the registry)\n\nWell done, you have set up an Eureka Server. To check the content of an Eureka Server, you can navigate to this [link](http://otoroshi.oto.tools:8080/bo/dashboard/eureka-servers). For now, no instances or applications are registered, so the registry is currently empty.\n\n### Create a simple Eureka Client and register it\n\n*This tutorial is not intended to teach you how to write a Spring application, and a newer version of this Spring code may exist.*\n\n\nFor this tutorial, we'll use the following code, which initiates an Eureka Client and defines a Spring REST controller with only one endpoint. This endpoint will return its own exposed port (this value will be useful to check that Otoroshi load balancing works correctly across the multiple registered Eureka instances).\n\n\nLet's quickly create a Spring project using [Spring Initializer](https://start.spring.io/). 
You can use the previous link or directly click on the following link to get the form already filled with the needed dependencies.\n\n````bash\nhttps://start.spring.io/#!type=maven-project&language=java&platformVersion=2.7.3&packaging=jar&jvmVersion=17&groupId=otoroshi.io&artifactId=eureka-client&name=eureka-client&description=A%20simple%20eureka%20client&packageName=otoroshi.io.eureka-client&dependencies=cloud-eureka,web\n````\n\nFeel free to change the project metadata for your use case.\n\nOnce downloaded and uncompressed, let's go ahead and delete the application.properties file and create an application.yml (if you are more comfortable with application.properties, keep it)\n\n````yaml\neureka:\n client:\n fetch-registry: false # disable the discovery services mechanism for the client\n serviceUrl:\n defaultZone: http://eureka-server.oto.tools:8080/eureka\n\nspring:\n application:\n name: foo_app\n\n````\n\n\nNow, let's define the simple REST controller to expose the client port.\n\nCreate a new file, called PortController.java, in the sources folder of your project with the following content.\n\n````java\npackage otoroshi.io.eurekaclient;\n\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.core.env.Environment;\nimport org.springframework.web.bind.annotation.GetMapping;\nimport org.springframework.web.bind.annotation.RestController;\n\n@RestController\npublic class PortController {\n\n @Autowired\n Environment environment;\n\n @GetMapping(\"/port\")\n public String index() {\n return environment.getProperty(\"local.server.port\");\n }\n}\n````\nThis controller is very simple: we just expose one endpoint, `/port`, which returns the port as a string. Our client is ready to run. 
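Before launching several instances, it helps to picture what the `RoundRobin` load-balancing strategy (the `load_balancing` type used in the routes above) will do once more than one instance is registered: each call picks the next target in the list, cycling back to the first. Here is a minimal sketch of that idea, written in Rust for brevity; it illustrates the strategy only and is not Otoroshi's actual implementation.

```rust
// Minimal round-robin selection over registered instances.
// Illustration of the "RoundRobin" strategy, not Otoroshi's real code.
struct RoundRobin {
    targets: Vec<String>,
    next: usize,
}

impl RoundRobin {
    fn pick(&mut self) -> String {
        // Cycle through targets, wrapping around with a modulo.
        let target = self.targets[self.next % self.targets.len()].clone();
        self.next += 1;
        target
    }
}

fn main() {
    let mut lb = RoundRobin {
        targets: vec!["localhost:8085".to_string(), "localhost:8083".to_string()],
        next: 0,
    };
    // Successive calls alternate between the registered instances.
    assert_eq!(lb.pick(), "localhost:8085");
    assert_eq!(lb.pick(), "localhost:8083");
    assert_eq!(lb.pick(), "localhost:8085");
}
```

Calling `/port` through Otoroshi with several instances registered is expected to show the same kind of alternation between the instance ports.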
\n\nLet's launch it with the following command:\n\n````sh\nmvn spring-boot:run -Dspring-boot.run.arguments=--server.port=8085\n````\n\n@@@note\nThe port is not required but it will be useful when we deploy more than one instance later in the tutorial\n@@@\n\n\nOnce the command has run, you can navigate to the eureka server view in the Otoroshi UI. The dashboard should display one registered app and instance.\nIt should also display a timer for each application which represents the elapsed time since the last received heartbeat.\n\nLet's define a new route to expose our registered eureka client.\n\n* Create a new route, named `Eureka client`, exposed on `http://eureka-client.oto.tools:8080` and targeting `http://foo.bar`\n* Search and add the `Internal Eureka server` plugin \n* Edit the plugin and choose your eureka server and your app (in our case, `Eureka Server` and `FOO_APP` respectively)\n* Save your route\n\nNow try to call the new route.\n\n````sh\ncurl 'http://eureka-client.oto.tools:8080/port'\n````\n\nIf everything is working, you should get the port 8085 as the response. The setup is working as expected, but we can improve it by scaling our eureka client.\n\nOpen a new tab in your terminal and run the following command.\n\n````sh\nmvn spring-boot:run -Dspring-boot.run.arguments=--server.port=8083\n````\n\nJust wait a few seconds and retry calling your new route.\n\n````sh\ncurl 'http://eureka-client.oto.tools:8080/port'\n$ 8083\ncurl 'http://eureka-client.oto.tools:8080/port'\n$ 8085\ncurl 'http://eureka-client.oto.tools:8080/port'\n$ 8085\ncurl 'http://eureka-client.oto.tools:8080/port'\n$ 8083\n````\n\nThe configuration is ready and the setup is working: Otoroshi uses all instances of your app and dispatches calls across them.\n\n### Connect to an external Eureka server\n\nOtoroshi can discover services by connecting to an Eureka Server.\n\nLet's create a route with an Eureka application as the Otoroshi target:\n\n* Create a new blank 
API route\n* Search and add the `External Eureka Server` plugin\n* Set your eureka URL\n* Click on the `Fetch Services` button to discover the applications of the Eureka instance\n* In the selector that appears, choose the application to target\n* Once the frontend is configured, save your route and try to call it.\n\nWell done, you have exposed your Eureka application through the Otoroshi discovery services.\n\n"},{"name":"experimental.md","id":"/includes/experimental.md","url":"/includes/experimental.html","title":"@@@ warning","content":"@@@ warning\n\nthis feature is **EXPERIMENTAL** and might not work as expected.
\nIf you encounter any bugs, [please file an issue](https://github.com/MAIF/otoroshi/issues/new), it will help us a lot :)\n\n@@@\n"},{"name":"fetch-and-start.md","id":"/includes/fetch-and-start.md","url":"/includes/fetch-and-start.html","title":"","content":"\nIf you already have an up and running otoroshi instance, you can skip the following instructions\n\nLet's start by downloading the latest Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nthen you can start Otoroshi:\n\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nNow you can log into Otoroshi at @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new } with `admin@otoroshi.io/password`\n"},{"name":"initialize.md","id":"/includes/initialize.md","url":"/includes/initialize.html","title":"","content":"\n\nIf you already have an up and running otoroshi instance, you can skip the following instructions\n\n\n@@@div { .instructions }\n\n
\nSet up an Otoroshi instance\n\n
\n\nLet's start by downloading the latest Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n```\n\nthen you can start Otoroshi:\n\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nNow you can log into Otoroshi at http://otoroshi.oto.tools:8080 with `admin@otoroshi.io/password`\n\nCreate a new route, exposed on `http://myservice.oto.tools:8080`, which will forward all requests to the mirror `https://mirror.otoroshi.io`. Each call to this service will return the body and the headers received by the mirror.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"my-service\",\n \"frontend\": {\n \"domains\": [\"myservice.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n }\n}\nEOF\n```\n\n\n@@@\n"},{"name":"index.md","id":"/index.md","url":"/index.html","title":"Otoroshi","content":"# Otoroshi\n\n**Otoroshi** is a layer of lightweight API management on top of a modern http reverse proxy written in Scala and developed by the MAIF OSS team. It can handle all the calls to and between your microservices without a service locator and lets you change configuration dynamically at runtime.\n\n\n> *The Otoroshi is a large hairy monster that tends to lurk on the top of the torii gate in front of Shinto shrines. 
It's a hostile creature, but also said to be the guardian of the shrine and is said to leap down from the top of the gate to devour those who approach the shrine for only self-serving purposes.*\n\n@@@ div { .centered-img }\n[![Join the discord](https://img.shields.io/discord/1089571852940218538?color=f9b000&label=Community&logo=Discord&logoColor=f9b000)](https://discord.gg/dmbwZrfpcQ) [ ![Download](https://img.shields.io/github/release/MAIF/otoroshi.svg) ](https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n## Installation\n\nYou can download the latest build of Otoroshi as a @ref:[fat jar](./install/get-otoroshi.md#from-jar-file), as a @ref:[zip package](./install/get-otoroshi.md#from-zip) or as a @ref:[docker image](./install/get-otoroshi.md#from-docker).\n\nYou can install and run Otoroshi with this little bash snippet\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\njava -jar otoroshi.jar\n```\n\nor using docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi:16.5.2\n```\n\nnow open your browser to http://otoroshi.oto.tools:8080/, **log in with the credentials generated in the logs** and explore by yourself; if you want better instructions, just go to the @ref:[Quick Start](./getting-started.md) or directly to the @ref:[installation instructions](./install/get-otoroshi.md)\n\n## Documentation\n\n* @ref:[About Otoroshi](./about.md)\n* @ref:[Architecture](./architecture.md)\n* @ref:[Features](./features.md)\n* @ref:[Getting started](./getting-started.md)\n* @ref:[Install Otoroshi](./install/index.md)\n* @ref:[Main entities](./entities/index.md)\n* @ref:[Detailed topics](./topics/index.md)\n* @ref:[How to's](./how-to-s/index.md)\n* @ref:[Plugins](./plugins/index.md)\n* @ref:[Admin REST API](./api.md)\n* @ref:[Deploy to production](./deploy/index.md)\n* @ref:[Developing Otoroshi](./dev.md)\n\n## Discussion\n\nJoin the @link:[Otoroshi 
server](https://discord.gg/dmbwZrfpcQ) { open=new } Discord\n\n## Sources\n\nThe sources of Otoroshi are available on @link:[Github](https://github.com/MAIF/otoroshi) { open=new }.\n\n## Logo\n\nYou can find the official Otoroshi logo @link:[on GitHub](https://github.com/MAIF/otoroshi/blob/master/resources/otoroshi-logo.png) { open=new }. The Otoroshi logo has been created by François Galioto ([@fgalioto](https://twitter.com/fgalioto))\n\n## Changelog\n\nEvery release, along with the migration instructions, is documented on the @link:[Github Releases](https://github.com/MAIF/otoroshi/releases) { open=new } page. A condensed version of the changelog is available on @link:[github](https://github.com/MAIF/otoroshi/blob/master/CHANGELOG.md) { open=new }\n\n## Patrons\n\nThe work on Otoroshi was funded by MAIF with the help of the community.\n\n## Licence\n\nOtoroshi is Open Source and available under the @link:[Apache 2 License](https://opensource.org/licenses/Apache-2.0) { open=new }\n\n@@@ index\n\n* [About Otoroshi](./about.md)\n* [Architecture](./architecture.md)\n* [Features](./features.md)\n* [Getting started](./getting-started.md)\n* [Install Otoroshi](./install/index.md)\n* [Main entities](./entities/index.md)\n* [Detailed topics](./topics/index.md)\n* [How to's](./how-to-s/index.md)\n* [Plugins](./plugins/index.md)\n* [Admin REST API](./api.md)\n* [Deploy to production](./deploy/index.md)\n* [Developing Otoroshi](./dev.md)\n\n@@@\n\n"},{"name":"get-otoroshi.md","id":"/install/get-otoroshi.md","url":"/install/get-otoroshi.html","title":"Get Otoroshi","content":"# Get Otoroshi\n\nAll releases can be found on the releases page of the @link:[repository](https://github.com/MAIF/otoroshi/releases) { open=new }.\n\n## From zip\n\n```sh\n# Download the latest version\nwget https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi-16.5.2.zip\nunzip ./otoroshi-16.5.2.zip\ncd otoroshi-16.5.2\n```\n\n## From jar file\n\n```sh\n# Download the latest version\nwget 
https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar\n```\n\n## From Docker\n\n```sh\n# Download the latest version\ndocker pull maif/otoroshi:16.5.2-jdk11\n```\n\n## From Sources\n\nTo build Otoroshi from sources, just go to the @ref:[dev documentation](../dev.md)\n"},{"name":"index.md","id":"/install/index.md","url":"/install/index.html","title":"Install","content":"# Install\n\nIn this section, you will find information about how to install and run Otoroshi\n\n* @ref:[Get Otoroshi](./get-otoroshi.md)\n* @ref:[Setup Otoroshi](./setup-otoroshi.md)\n* @ref:[Run Otoroshi](./run-otoroshi.md)\n\n@@@ index\n\n* [Get Otoroshi](./get-otoroshi.md)\n* [Setup Otoroshi](./setup-otoroshi.md)\n* [Run Otoroshi](./run-otoroshi.md)\n\n@@@\n"},{"name":"run-otoroshi.md","id":"/install/run-otoroshi.md","url":"/install/run-otoroshi.html","title":"Run Otoroshi","content":"# Run Otoroshi\n\nNow you are ready to run Otoroshi. You can run the following command with some tweaks depending on the way you want to configure Otoroshi. 
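\n\nA custom configuration file typically just includes the default configuration and overrides a few keys, using the HOCON format; a minimal sketch (the values below are illustrative) looks like:\n\n```conf\n# include the defaults shipped with otoroshi, then override a few keys\ninclude \"application.conf\"\n\nhttp.port = 8080\napp.storage = \"inmemory\"\n```\n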
If you want to pass a custom configuration file, use the `-Dconfig.file=/path/to/file.conf` flag in the following commands.\n\n## From .zip file\n\n```sh\ncd otoroshi-vx.x.x\n./bin/otoroshi\n```\n\n## From .jar file\n\nFor Java 11\n\n```sh\njava -jar otoroshi.jar\n```\n\nIf you want to run the jar file on a JDK newer than JDK 11, you'll have to add the following flags\n\n```sh\njava \\\n --add-opens=java.base/javax.net.ssl=ALL-UNNAMED \\\n --add-opens=java.base/sun.net.www.protocol.file=ALL-UNNAMED \\\n --add-exports=java.base/sun.security.x509=ALL-UNNAMED \\\n --add-opens=java.base/sun.security.ssl=ALL-UNNAMED \\\n -Dlog4j2.formatMsgNoLookups=true \\\n -jar otoroshi.jar\n```\n\n## From Docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi\n```\n\nYou can also pass useful args like:\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi -Dconfig.file=/usr/app/otoroshi/conf/otoroshi.conf -Dlogger.file=/usr/app/otoroshi/conf/otoroshi.xml\n```\n\nIf you want to provide your own config file, you can read @ref:[the documentation about config files](./setup-otoroshi.md).\n\nYou can also provide some ENV variables using the `--env` flag to customize your Otoroshi instance.\n\nThe list of possible env variables is available @ref:[here](./setup-otoroshi.md).\n\nYou can use a volume to provide configuration like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd):/usr/app/otoroshi/conf\" maif/otoroshi\n```\n\nYou can also use a volume if you choose to use the `filedb` datastore like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd)/filedb:/usr/app/otoroshi/filedb\" maif/otoroshi -Dotoroshi.storage=file\n```\n\nYou can also use a volume if you choose to use export files:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd):/usr/app/otoroshi/imports\" maif/otoroshi -Dotoroshi.importFrom=/usr/app/otoroshi/imports/export.json\n```\n\n## Run examples\n\n```sh\n$ java \\\n -Xms2G \\\n -Xmx8G \\\n -Dhttp.port=8080 \\\n -Dotoroshi.importFrom=/home/user/otoroshi.json \\\n 
-Dconfig.file=/home/user/otoroshi.conf \\\n -jar ./otoroshi.jar\n\n[warn] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[warn] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[warn] otoroshi-env - Importing from: /home/user/otoroshi.json\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n```\n\nIf you choose to start Otoroshi without importing existing data, Otoroshi will create a new admin user and print the login details in the log. When you log into the admin dashboard, Otoroshi will ask you to create another account to avoid security issues.\n\n```sh\n$ java \\\n -Xms2G \\\n -Xmx8G \\\n -Dhttp.port=8080 \\\n -jar otoroshi.jar\n\n[warn] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[warn] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[warn] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / HHUsiF2UC3OPdmg0lGngEv3RrbIwWV5W\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n```\n"},{"name":"setup-otoroshi.md","id":"/install/setup-otoroshi.md","url":"/install/setup-otoroshi.html","title":"Setup Otoroshi","content":"# Setup Otoroshi\n\nIn this section, we are going to configure Otoroshi before running it for the first time.\n\n## Setup the database\n\nRight now, Otoroshi supports multiple datastores. You can choose one datastore over another depending on your use case.\n\n@@@div { .plugin .platform } \n
Redis
\n\n
Recommended
\n\nThe **redis** datastore is quite nice when you want to easily deploy several Otoroshi instances.\n\n\n\n@link:[Documentation](https://redis.io/topics/quickstart)\n@@@\n\n@@@div { .plugin .platform } \n
In memory
\n\nThe **in-memory** datastore can be used for testing purposes, but it is also a good candidate for production because of its speed.\n\n\n\n@ref:[Start with](../getting-started.md)\n@@@\n\n@@@div { .plugin .platform } \n
Cassandra
\n\n
Clustering
\n\nExperimental support, should be used in cluster mode for leaders\n\n\n\n@link:[Documentation](https://cassandra.apache.org/doc/latest/cassandra/getting_started/installing.html)\n@@@\n\n@@@div { .plugin .platform } \n
Postgresql
\n\n
Clustering
\n\nOr any postgresql-compatible database, like CockroachDB for instance (experimental support, should be used in cluster mode for leaders)\n\n\n\n@link:[Documentation](https://www.postgresql.org/docs/10/tutorial-install.html)\n@@@\n\n@@@div { .plugin .platform } \n\n
FileDB
\n\nThe **filedb** datastore is pretty handy for testing purposes, but is not supposed to be used in production mode.\n\n\n\n@@@\n\n\n@@@ div { .centered-img }\n\n@@@\n\nThe first thing to set up is what kind of datastore you want to use with the `otoroshi.storage` setting\n\n```conf\notoroshi {\n storage = \"inmemory\" # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql \n storage = ${?APP_STORAGE} # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql \n storage = ${?OTOROSHI_STORAGE} # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql \n}\n```\n\nDepending on the value you chose, you will be able to configure your datastore with the following configuration\n\ninmemory\n: @@snip [inmemory.conf](../snippets/datastores/inmemory.conf) \n\nfile\n: @@snip [file.conf](../snippets/datastores/file.conf) \n\nhttp\n: @@snip [http.conf](../snippets/datastores/http.conf) \n\ns3\n: @@snip [s3.conf](../snippets/datastores/s3.conf) \n\nredis\n: @@snip [lettuce.conf](../snippets/datastores/lettuce.conf) \n\npostgresql\n: @@snip [pg.conf](../snippets/datastores/pg.conf) \n\ncassandra\n: @@snip [cassandra.conf](../snippets/datastores/cassandra.conf) \n\n## Setup your hosts before running\n\nBy default, Otoroshi starts with the domain `oto.tools`, which automatically targets `127.0.0.1` with no changes to your `/etc/hosts` file. 
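\n\nIf you use a custom domain instead (such as `mydomain.org`, a purely illustrative value), every exposed hostname has to resolve to your instance, for example with `/etc/hosts` entries like:\n\n```\n127.0.0.1 otoroshi.mydomain.org otoroshi-api.mydomain.org privateapps.mydomain.org\n```\n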
Of course you can change the domain value; in that case, you have to add the corresponding values in your `/etc/hosts` file according to the settings you put in the Otoroshi configuration, or define the right IP address at the DNS provider level\n\n* `otoroshi.domain` => `mydomain.org`\n* `otoroshi.backoffice.subdomain` => `otoroshi`\n* `otoroshi.privateapps.subdomain` => `privateapps`\n* `otoroshi.adminapi.exposedSubdomain` => `otoroshi-api`\n* `otoroshi.adminapi.targetSubdomain` => `otoroshi-admin-internal-api`\n\nFor instance, if you want to change the default domain and use something like `otoroshi.mydomain.org`, then start Otoroshi like \n\n```sh\njava -Dotoroshi.domain=mydomain.org -jar otoroshi.jar\n```\n\n@@@ warning\nOtoroshi cannot be accessed using `http://127.0.0.1:8080` or `http://localhost:8080` because Otoroshi uses Otoroshi to serve its own UI and API. When otoroshi starts with an empty database, it will create a service descriptor for that using `otoroshi.domain` and the settings listed on this page, serving the Otoroshi API and UI on `http://otoroshi-api.${otoroshi.domain}` and `http://otoroshi.${otoroshi.domain}`.\nOnce the descriptor is saved in the database, if you want to change `otoroshi.domain`, you'll have to edit the descriptor in the database or restart Otoroshi with an empty database.\n@@@\n\n@@@ warning\nIf your otoroshi instance runs behind a reverse proxy (L4 / L7) or inside a docker container where the exposed ports (that you will use to access otoroshi) are not the same as the ones configured in otoroshi (`http.port` and `https.port`), you'll have to configure the otoroshi exposed ports to avoid bad redirection URLs when using authentication modules and other otoroshi tools. 
To do that, just set the values of the exposed ports in `otoroshi.exposed-ports.http = $theExposedHttpPort` (OTOROSHI_EXPOSED_PORTS_HTTP) and `otoroshi.exposed-ports.https = $theExposedHttpsPort` (OTOROSHI_EXPOSED_PORTS_HTTPS)\n@@@\n\n## Setup your configuration file\n\nThere are a lot of things you can configure in Otoroshi. By default, Otoroshi provides a configuration that should be enough for testing purposes. But you'll likely need to update this configuration when moving into production.\n\nAny configuration property on this page can be set at runtime using a `-D` flag when launching Otoroshi, like \n\n```sh\njava -Dhttp.port=8080 -jar otoroshi.jar\n```\n\nor\n\n```sh\n./bin/otoroshi -Dhttp.port=8080 \n```\n\nIf you want to define your own config file and use it on an Otoroshi instance, use the following flag\n\n```sh\njava -Dconfig.file=/path/to/otoroshi.conf -jar otoroshi.jar\n``` \n\n### Example of a custom configuration file\n\n```conf\ninclude \"application.conf\"\n\nhttp.port = 8080\n\napp {\n storage = \"inmemory\"\n importFrom = \"./my-state.json\"\n env = \"prod\"\n domain = \"oto.tools\"\n rootScheme = \"http\"\n snowflake {\n seed = 0\n }\n events {\n maxSize = 1000\n }\n backoffice {\n subdomain = \"otoroshi\"\n session {\n exp = 86400000\n }\n }\n privateapps {\n subdomain = \"privateapps\"\n session {\n exp = 86400000\n }\n }\n adminapi {\n targetSubdomain = \"otoroshi-admin-internal-api\"\n exposedSubdomain = \"otoroshi-api\"\n defaultValues {\n backOfficeGroupId = \"admin-api-group\"\n backOfficeApiKeyClientId = \"admin-api-apikey-id\"\n backOfficeApiKeyClientSecret = \"admin-api-apikey-secret\"\n backOfficeServiceId = \"admin-api-service\"\n }\n }\n claim {\n sharedKey = \"mysecret\"\n }\n filedb {\n path = \"./filedb/state.ndjson\"\n }\n}\n\nplay.http {\n session {\n secure = false\n httpOnly = true\n maxAge = 2592000000\n domain = \".oto.tools\"\n cookieName = \"oto-sess\"\n }\n}\n```\n\n### Reference configuration\n\n@@snip 
[reference.conf](../snippets/reference.conf) \n\n### More config. options\n\nSee the default configuration at\n\n* @link:[Base configuration](https://github.com/MAIF/otoroshi/blob/master/otoroshi/conf/base.conf) { open=new }\n* @link:[Application configuration](https://github.com/MAIF/otoroshi/blob/master/otoroshi/conf/application.conf) { open=new }\n\n## Configuration with env. variables\n\nEvery property in the configuration file can be overridden by an environment variable if it has an env variable override written like `${?ENV_VARIABLE}`.\n\n## Reference configuration for env. variables\n\n@@snip [reference-env.conf](../snippets/reference-env.conf) \n"},{"name":"built-in-legacy-plugins.md","id":"/plugins/built-in-legacy-plugins.md","url":"/plugins/built-in-legacy-plugins.html","title":"Built-in legacy plugins","content":"# Built-in legacy plugins\n\nOtoroshi provides some plugins out of the box. Here are the available plugins with their documentation and reference configuration.\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.accesslog.AccessLog }\n\n## Access log (CLF)\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `AccessLog`\n\n### Description\n\nWith this plugin, any access to a service will be logged in CLF format.\n\nLog format is the following:\n\n`\"$service\" $clientAddress - \"$userId\" [$timestamp] \"$host $method $path $protocol\" \"$status $statusTxt\" $size $snowflake \"$to\" \"$referer\" \"$userAgent\" $http $duration $errorMsg`\n\nThe plugin accepts the following configuration\n\n```json\n{\n \"AccessLog\": {\n \"enabled\": true,\n \"statuses\": [], // list of status to enable logs, if none, log everything\n \"paths\": [], // list of paths to enable logs, if none, log everything\n \"methods\": [], // list of http methods to enable logs, if none, log everything\n \"identities\": [] // list of identities to enable logs, if none, log everything\n }\n}\n```\n\n\n\n### Default 
configuration\n\n```json\n{\n \"AccessLog\" : {\n \"enabled\" : true,\n \"statuses\" : [ ],\n \"paths\" : [ ],\n \"methods\" : [ ],\n \"identities\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.accesslog.AccessLogJson }\n\n## Access log (JSON)\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `AccessLog`\n\n### Description\n\nWith this plugin, any access to a service will be logged in json format.\n\nThe plugin accepts the following configuration\n\n```json\n{\n \"AccessLog\": {\n \"enabled\": true,\n \"statuses\": [], // list of status to enable logs, if none, log everything\n \"paths\": [], // list of paths to enable logs, if none, log everything\n \"methods\": [], // list of http methods to enable logs, if none, log everything\n \"identities\": [] // list of identities to enable logs, if none, log everything\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"AccessLog\" : {\n \"enabled\" : true,\n \"statuses\" : [ ],\n \"paths\" : [ ],\n \"methods\" : [ ],\n \"identities\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.accesslog.KafkaAccessLog }\n\n## Kafka access log\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `KafkaAccessLog`\n\n### Description\n\nWith this plugin, any access to a service will be logged as an event in a kafka topic.\n\nThe plugin accepts the following configuration\n\n```json\n{\n \"KafkaAccessLog\": {\n \"enabled\": true,\n \"topic\": \"otoroshi-access-log\",\n \"statuses\": [], // list of status to enable logs, if none, log everything\n \"paths\": [], // list of paths to enable logs, if none, log everything\n \"methods\": [], // list of http methods to enable logs, if none, log everything\n \"identities\": [] // list of identities to enable logs, if none, log everything\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n 
\"KafkaAccessLog\" : {\n \"enabled\" : true,\n \"topic\" : \"otoroshi-access-log\",\n \"statuses\" : [ ],\n \"paths\" : [ ],\n \"methods\" : [ ],\n \"identities\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.authcallers.BasicAuthCaller }\n\n## Basic Auth. caller\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `BasicAuthCaller`\n\n### Description\n\nThis plugin can be used to call api that are authenticated using basic auth.\n\nThis plugin accepts the following configuration\n\n{\n \"username\" : \"the_username\",\n \"password\" : \"the_password\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Basic %s\"\n}\n\n\n\n### Default configuration\n\n```json\n{\n \"username\" : \"the_username\",\n \"password\" : \"the_password\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Basic %s\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.authcallers.OAuth2Caller }\n\n## OAuth2 caller\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `OAuth2Caller`\n\n### Description\n\nThis plugin can be used to call api that are authenticated using OAuth2 client_credential/password flow.\nDo not forget to enable client retry to handle token generation on expire.\n\nThis plugin accepts the following configuration\n\n{\n \"kind\" : \"the oauth2 flow, can be 'client_credentials' or 'password'\",\n \"url\" : \"https://127.0.0.1:8080/oauth/token\",\n \"method\" : \"POST\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Bearer %s\",\n \"jsonPayload\" : false,\n \"clientId\" : \"the client_id\",\n \"clientSecret\" : \"the client_secret\",\n \"scope\" : \"an optional scope\",\n \"audience\" : \"an optional audience\",\n \"user\" : \"an optional username if using password flow\",\n \"password\" : \"an optional password if using password flow\",\n \"cacheTokenSeconds\" : \"the 
number of seconds to wait before asking for a new token\",\n \"tlsConfig\" : \"an optional TLS settings object\"\n}\n\n\n\n### Default configuration\n\n```json\n{\n \"kind\" : \"the oauth2 flow, can be 'client_credentials' or 'password'\",\n \"url\" : \"https://127.0.0.1:8080/oauth/token\",\n \"method\" : \"POST\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Bearer %s\",\n \"jsonPayload\" : false,\n \"clientId\" : \"the client_id\",\n \"clientSecret\" : \"the client_secret\",\n \"scope\" : \"an optional scope\",\n \"audience\" : \"an optional audience\",\n \"user\" : \"an optional username if using password flow\",\n \"password\" : \"an optional password if using password flow\",\n \"cacheTokenSeconds\" : \"the number of seconds to wait before asking for a new token\",\n \"tlsConfig\" : \"an optional TLS settings object\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.cache.ResponseCache }\n\n## Response Cache\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `ResponseCache`\n\n### Description\n\nThis plugin can cache responses from target services in the otoroshi datastore.\nIt also provides a debug UI at `/.well-known/otoroshi/bodylogger`.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"ResponseCache\": {\n \"enabled\": true, // enabled cache\n \"ttl\": 300000, // store it for some time (5 minutes by default)\n \"maxSize\": 5242880, // max body size (body will be cut after that)\n \"autoClean\": true, // cleanup older keys when all bigger than maxSize\n \"filter\": { // cache only for some status, method and paths\n \"statuses\": [],\n \"methods\": [],\n \"paths\": [],\n \"not\": {\n \"statuses\": [],\n \"methods\": [],\n \"paths\": []\n }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"ResponseCache\" : {\n \"enabled\" : true,\n \"ttl\" : 3600000,\n \"maxSize\" : 52428800,\n \"autoClean\" : true,\n \"filter\" : {\n 
\"statuses\" : [ ],\n \"methods\" : [ ],\n \"paths\" : [ ],\n \"not\" : {\n \"statuses\" : [ ],\n \"methods\" : [ ],\n \"paths\" : [ ]\n }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.clientcert.ClientCertChainHeader }\n\n## Client certificate header\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `ClientCertChain`\n\n### Description\n\nThis plugin pass client certificate informations to the target in headers.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"ClientCertChain\": {\n \"pem\": { // send client cert as PEM format in a header\n \"send\": false,\n \"header\": \"X-Client-Cert-Pem\"\n },\n \"dns\": { // send JSON array of DNs in a header\n \"send\": false,\n \"header\": \"X-Client-Cert-DNs\"\n },\n \"chain\": { // send JSON representation of client cert chain in a header\n \"send\": true,\n \"header\": \"X-Client-Cert-Chain\"\n },\n \"claims\": { // pass JSON representation of client cert chain in the otoroshi JWT token\n \"send\": false,\n \"name\": \"clientCertChain\"\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"ClientCertChain\" : {\n \"pem\" : {\n \"send\" : false,\n \"header\" : \"X-Client-Cert-Pem\"\n },\n \"dns\" : {\n \"send\" : false,\n \"header\" : \"X-Client-Cert-DNs\"\n },\n \"chain\" : {\n \"send\" : true,\n \"header\" : \"X-Client-Cert-Chain\"\n },\n \"claims\" : {\n \"send\" : false,\n \"name\" : \"clientCertChain\"\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.defer.DeferPlugin }\n\n## Defer Responses\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `DeferPlugin`\n\n### Description\n\nThis plugin will expect a `X-Defer` header or a `defer` query param and defer the response according to the value in milliseconds.\nThis plugin is some kind of inside joke as one a our customer ask us to make slower apis.\n\nThis plugin can 
accept the following configuration\n\n```json\n{\n \"DeferPlugin\": {\n \"defaultDefer\": 0 // default defer in millis\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"DeferPlugin\" : {\n \"defaultDefer\" : 0\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.discovery.DiscoverySelfRegistrationTransformer }\n\n## Self registration endpoints (service discovery)\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `DiscoverySelfRegistration`\n\n### Description\n\nThis plugin adds support for a self registration endpoint on a specific service.\n\nThis plugin accepts the following configuration:\n\n\n\n### Default configuration\n\n```json\n{\n \"DiscoverySelfRegistration\" : {\n \"hosts\" : [ ],\n \"targetTemplate\" : { },\n \"registrationTtl\" : 60000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.geoloc.GeolocationInfoEndpoint }\n\n## Geolocation endpoint\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: ``none``\n\n### Description\n\nThis plugin will expose the current geolocation information on the following endpoint.\n\n`/.well-known/otoroshi/plugins/geolocation`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.geoloc.GeolocationInfoHeader }\n\n## Geolocation header\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `GeolocationInfoHeader`\n\n### Description\n\nThis plugin will send information extracted by the Geolocation details extractor to the target service in a header.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"GeolocationInfoHeader\": {\n \"headerName\": \"X-Geolocation-Info\" // header in which info will be sent\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"GeolocationInfoHeader\" : {\n \"headerName\" : \"X-Geolocation-Info\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin 
.plugin-hidden .plugin-kind-transformer #otoroshi.plugins.hmac.HMACCallerPlugin }\n\n## HMAC caller plugin\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `HMACCallerPlugin`\n\n### Description\n\nThis plugin can be used to call an api \"protected\" by an HMAC signature. It will add a signature with the secret configured on the plugin.\n The signature string will always be built from the headers listed in the plugin configuration.\n\n\n\n### Default configuration\n\n```json\n{\n \"HMACCallerPlugin\" : {\n \"secret\" : \"my-defaut-secret\",\n \"algo\" : \"HMAC-SHA512\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.izanami.IzanamiCanary }\n\n## Izanami Canary Campaign\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `IzanamiCanary`\n\n### Description\n\nThis plugin allows you to perform canary testing based on an izanami experiment campaign (A/B test).\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"IzanamiCanary\" : {\n \"experimentId\" : \"foo:bar:qix\",\n \"configId\" : \"foo:bar:qix:config\",\n \"izanamiUrl\" : \"https://izanami.foo.bar\",\n \"izanamiClientId\" : \"client\",\n \"izanamiClientSecret\" : \"secret\",\n \"timeout\" : 5000,\n \"mtls\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"IzanamiCanary\" : {\n \"experimentId\" : \"foo:bar:qix\",\n \"configId\" : \"foo:bar:qix:config\",\n \"izanamiUrl\" : \"https://izanami.foo.bar\",\n \"izanamiClientId\" : \"client\",\n \"izanamiClientSecret\" : \"secret\",\n \"timeout\" : 5000,\n \"mtls\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.izanami.IzanamiProxy }\n\n## 
Izanami APIs Proxy\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `IzanamiProxy`\n\n### Description\n\nThis plugin exposes routes to proxy Izanami configuration and features tree APIs.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"IzanamiProxy\" : {\n \"path\" : \"/api/izanami\",\n \"featurePattern\" : \"*\",\n \"configPattern\" : \"*\",\n \"autoContext\" : false,\n \"featuresEnabled\" : true,\n \"featuresWithContextEnabled\" : true,\n \"configurationEnabled\" : false,\n \"izanamiUrl\" : \"https://izanami.foo.bar\",\n \"izanamiClientId\" : \"client\",\n \"izanamiClientSecret\" : \"secret\",\n \"timeout\" : 5000\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"IzanamiProxy\" : {\n \"path\" : \"/api/izanami\",\n \"featurePattern\" : \"*\",\n \"configPattern\" : \"*\",\n \"autoContext\" : false,\n \"featuresEnabled\" : true,\n \"featuresWithContextEnabled\" : true,\n \"configurationEnabled\" : false,\n \"izanamiUrl\" : \"https://izanami.foo.bar\",\n \"izanamiClientId\" : \"client\",\n \"izanamiClientSecret\" : \"secret\",\n \"timeout\" : 5000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.jq.JqBodyTransformer }\n\n## JQ bodies transformer\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `JqBodyTransformer`\n\n### Description\n\nThis plugin let you transform JSON bodies (in requests and responses) using [JQ filters](https://stedolan.github.io/jq/manual/#Basicfilters).\n\nSome JSON variables are accessible by default :\n\n * `$url`: the request url\n * `$path`: the request path\n * `$domain`: the request domain\n * `$method`: the request method\n * `$headers`: the current request headers (with name in lowercase)\n * `$queryParams`: the current request query params\n * `$otoToken`: the otoroshi protocol token (if one)\n * `$inToken`: the first matched JWT token as is (from verifiers, if one)\n * `$token`: the first matched 
JWT token as is (from verifiers, if one)\n * `$user`: the current user (if one)\n * `$apikey`: the current apikey (if one)\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"JqBodyTransformer\" : {\n \"request\" : {\n \"filter\" : \".\",\n \"included\" : [ ],\n \"excluded\" : [ ]\n },\n \"response\" : {\n \"filter\" : \".\",\n \"included\" : [ ],\n \"excluded\" : [ ]\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"JqBodyTransformer\" : {\n \"request\" : {\n \"filter\" : \".\",\n \"included\" : [ ],\n \"excluded\" : [ ]\n },\n \"response\" : {\n \"filter\" : \".\",\n \"included\" : [ ],\n \"excluded\" : [ ]\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.jsoup.HtmlPatcher }\n\n## Html Patcher\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `HtmlPatcher`\n\n### Description\n\nThis plugin can inject elements in html pages (in the body or in the head) returned by the service\n\n\n\n### Default configuration\n\n```json\n{\n \"HtmlPatcher\" : {\n \"appendHead\" : [ ],\n \"appendBody\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.log4j.Log4ShellFilter }\n\n## Log4Shell mitigation plugin\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `Log4ShellFilter`\n\n### Description\n\nThis plugin try to detect Log4Shell attacks in request and block them.\n\nThis plugin can accept the following configuration\n\n```javascript\n{\n \"Log4ShellFilter\": {\n \"status\": 200, // the status send back when an attack expression is found\n \"body\": \"\", // the body send back when an attack expression is found\n \"parseBody\": false // enables request body parsing to find attack expression\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"Log4ShellFilter\" : {\n \"status\" : 200,\n \"body\" : \"\",\n \"parseBody\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { 
.plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.loggers.BodyLogger }\n\n## Body logger\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `BodyLogger`\n\n### Description\n\nThis plugin can log bodies present in requests and responses. It can just log them, store them in the redis store with a ttl and send them to analytics.\nIt also provides a debug UI at `/.well-known/otoroshi/bodylogger`.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"BodyLogger\": {\n \"enabled\": true, // enabled logging\n \"log\": true, // just log it\n \"store\": false, // store bodies in datastore\n \"ttl\": 300000, // store it for some time (5 minutes by default)\n \"sendToAnalytics\": false, // send bodies to analytics\n \"maxSize\": 5242880, // max body size (body will be cut after that)\n \"password\": \"password\", // password for the ui, if none, it's public\n \"filter\": { // log only for some status, method and paths\n \"statuses\": [],\n \"methods\": [],\n \"paths\": [],\n \"not\": {\n \"statuses\": [],\n \"methods\": [],\n \"paths\": []\n }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"BodyLogger\" : {\n \"enabled\" : true,\n \"log\" : true,\n \"store\" : false,\n \"ttl\" : 300000,\n \"sendToAnalytics\" : false,\n \"maxSize\" : 5242880,\n \"password\" : \"password\",\n \"filter\" : {\n \"statuses\" : [ ],\n \"methods\" : [ ],\n \"paths\" : [ ],\n \"not\" : {\n \"statuses\" : [ ],\n \"methods\" : [ ],\n \"paths\" : [ ]\n }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.mirror.MirroringPlugin }\n\n## Mirroring plugin\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `MirroringPlugin`\n\n### Description\n\nThis plugin will mirror every request to other targets\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"MirroringPlugin\": {\n \"enabled\": true, // enabled mirroring\n \"to\": 
\"https://foo.bar.dev\", // the url of the service to mirror\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"MirroringPlugin\" : {\n \"enabled\" : true,\n \"to\" : \"https://foo.bar.dev\",\n \"captureResponse\" : false,\n \"generateEvents\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.oauth1.OAuth1CallerPlugin }\n\n## OAuth1 caller\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `OAuth1Caller`\n\n### Description\n\nThis plugin can be used to call api that are authenticated using OAuth1.\n Consumer key, secret, and OAuth token et OAuth token secret can be pass through the metadata of an api key\n or via the configuration of this plugin.\n\n\n\n### Default configuration\n\n```json\n{\n \"OAuth1Caller\" : {\n \"algo\" : \"HmacSHA512\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.oidc.OIDCHeaders }\n\n## OIDC headers\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `OIDCHeaders`\n\n### Description\n\nThis plugin injects headers containing tokens and profile from current OIDC provider.\n\n\n\n### Default configuration\n\n```json\n{\n \"OIDCHeaders\" : {\n \"profile\" : {\n \"send\" : true,\n \"headerName\" : \"X-OIDC-User\"\n },\n \"idtoken\" : {\n \"send\" : false,\n \"name\" : \"id_token\",\n \"headerName\" : \"X-OIDC-Id-Token\",\n \"jwt\" : true\n },\n \"accesstoken\" : {\n \"send\" : false,\n \"name\" : \"access_token\",\n \"headerName\" : \"X-OIDC-Access-Token\",\n \"jwt\" : true\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.security.SecurityTxt }\n\n## Security Txt\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `SecurityTxt`\n\n### Description\n\nThis plugin exposes a special route `/.well-known/security.txt` as proposed at 
[https://securitytxt.org/](https://securitytxt.org/).\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"SecurityTxt\": {\n \"Contact\": \"contact@foo.bar\", // mandatory, a link or e-mail address for people to contact you about security issues\n \"Encryption\": \"http://url-to-public-key\", // optional, a link to a key which security researchers should use to securely talk to you\n \"Acknowledgments\": \"http://url\", // optional, a link to a web page where you say thank you to security researchers who have helped you\n \"Preferred-Languages\": \"en, fr, es\", // optional\n \"Policy\": \"http://url\", // optional, a link to a policy detailing what security researchers should do when searching for or reporting security issues\n \"Hiring\": \"http://url\", // optional, a link to any security-related job openings in your organisation\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"SecurityTxt\" : {\n \"Contact\" : \"contact@foo.bar\",\n \"Encryption\" : \"https://...\",\n \"Acknowledgments\" : \"https://...\",\n \"Preferred-Languages\" : \"en, fr\",\n \"Policy\" : \"https://...\",\n \"Hiring\" : \"https://...\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.static.StaticResponse }\n\n## Static Response\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `StaticResponse`\n\n### Description\n\nThis plugin returns a static response for any request\n\n\n\n### Default configuration\n\n```json\n{\n \"StaticResponse\" : {\n \"status\" : 200,\n \"headers\" : {\n \"Content-Type\" : \"application/json\"\n },\n \"body\" : \"{\\\"message\\\":\\\"hello world!\\\"}\",\n \"bodyBase64\" : null\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.useragent.UserAgentInfoEndpoint }\n\n## User-Agent endpoint\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: ``none``\n\n### Description\n\nThis 
plugin will expose current user-agent information on the following endpoint.\n\n`/.well-known/otoroshi/plugins/user-agent`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.useragent.UserAgentInfoHeader }\n\n## User-Agent header\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `UserAgentInfoHeader`\n\n### Description\n\nThis plugin will send information extracted by the User-Agent details extractor to the target service in a header.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"UserAgentInfoHeader\": {\n \"headerName\": \"X-User-Agent-Info\" // header in which info will be sent\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"UserAgentInfoHeader\" : {\n \"headerName\" : \"X-User-Agent-Info\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.workflow.WorkflowEndpoint }\n\n## [DEPRECATED] Workflow endpoint\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `WorkflowEndpoint`\n\n### Description\n\nThis plugin runs a workflow and returns the response\n\n\n\n### Default configuration\n\n```json\n{\n \"WorkflowEndpoint\" : {\n \"workflow\" : { }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.biscuit.BiscuitValidator }\n\n## Biscuit token validator\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: ``none``\n\n### Description\n\nThis plugin validates a Biscuit token.\n\n\n\n### Default configuration\n\n```json\n{\n \"publicKey\" : \"xxxxxx\",\n \"checks\" : [ ],\n \"facts\" : [ ],\n \"resources\" : [ ],\n \"rules\" : [ ],\n \"revocation_ids\" : [ ],\n \"enforce\" : false,\n \"extractor\" : {\n \"type\" : \"header\",\n \"name\" : \"Authorization\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.clientcert.HasClientCertMatchingApikeyValidator }\n\n## 
Client Certificate + Api Key only\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: ``none``\n\n### Description\n\nCheck that a client certificate is present in the request and that the apikey used matches the client certificate.\nYou can set the client cert. DN in an apikey metadata named `allowed-client-cert-dn`.\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.clientcert.HasClientCertMatchingHttpValidator }\n\n## Client certificate matching (over http)\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `HasClientCertMatchingHttpValidator`\n\n### Description\n\nCheck if the client certificate matches the following configuration.\n\nThe expected response from the http service is\n\n```json\n{\n \"serialNumbers\": [], // allowed certificate serial numbers\n \"subjectDNs\": [], // allowed certificate DNs\n \"issuerDNs\": [], // allowed certificate issuer DNs\n \"regexSubjectDNs\": [], // allowed certificate DNs matching regex\n \"regexIssuerDNs\": [] // allowed certificate issuer DNs matching regex\n}\n```\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"HasClientCertMatchingHttpValidator\": {\n \"url\": \"...\", // url for the call\n \"headers\": {}, // http headers for the call\n \"ttl\": 600000, // cache ttl\n \"mtlsConfig\": {\n \"certId\": \"xxxxx\",\n \"mtls\": false,\n \"loose\": false\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"HasClientCertMatchingHttpValidator\" : {\n \"url\" : \"http://foo.bar\",\n \"ttl\" : 600000,\n \"headers\" : { },\n \"mtlsConfig\" : {\n \"certId\" : \"...\",\n \"mtls\" : false,\n \"loose\" : false\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.clientcert.HasClientCertMatchingValidator }\n\n## Client certificate matching\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `HasClientCertMatchingValidator`\n\n### Description\n\nCheck if 
the client certificate matches the following configuration\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"HasClientCertMatchingValidator\": {\n \"serialNumbers\": [], // allowed certificate serial numbers\n \"subjectDNs\": [], // allowed certificate DNs\n \"issuerDNs\": [], // allowed certificate issuer DNs\n \"regexSubjectDNs\": [], // allowed certificate DNs matching regex\n \"regexIssuerDNs\": [] // allowed certificate issuer DNs matching regex\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"HasClientCertMatchingValidator\" : {\n \"serialNumbers\" : [ ],\n \"subjectDNs\" : [ ],\n \"issuerDNs\" : [ ],\n \"regexSubjectDNs\" : [ ],\n \"regexIssuerDNs\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.clientcert.HasClientCertValidator }\n\n## Client Certificate Only\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: ``none``\n\n### Description\n\nCheck if a client certificate is present in the request\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.hmac.HMACValidator }\n\n## HMAC access validator\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `HMACAccessValidator`\n\n### Description\n\nThis plugin can be used to check if a HMAC signature is present and valid in the `Authorization` header.\n\n\n\n### Default configuration\n\n```json\n{\n \"HMACAccessValidator\" : {\n \"secret\" : \"\"\n }\n}\n```\n\n\n\n### Documentation\n\n\n The HMAC signature needs to be set on the `Authorization` or `Proxy-Authorization` header.\n The format of this header should be: `hmac algorithm=\"\", headers=\"
\", signature=\"\"`\n As an example, a simple Node.js call with the expected header\n ```js\n const crypto = require('crypto');\n const fetch = require('node-fetch');\n\n const date = new Date()\n const secret = \"my-secret\" // equal to the api key secret by default\n\n const algo = \"sha512\"\n const signature = crypto.createHmac(algo, secret)\n .update(date.getTime().toString())\n .digest('base64');\n\n fetch('http://myservice.oto.tools:9999/api/test', {\n headers: {\n \"Otoroshi-Client-Id\": \"my-id\",\n \"Otoroshi-Client-Secret\": \"my-secret\",\n \"Date\": date.getTime().toString(),\n \"Authorization\": `hmac algorithm=\"hmac-${algo}\", headers=\"Date\", signature=\"${signature}\"`,\n \"Accept\": \"application/json\"\n }\n })\n .then(r => r.json())\n .then(console.log)\n ```\n In this example, we have an Otoroshi service deployed on http://myservice.oto.tools:9999/api/test, protected by api keys.\n The secret used is the secret of the api key (by default, but you can change it and define a secret in the plugin configuration).\n We send the current date as a timestamp in the Date header, and its HMAC signature (base64 encoded) in the Authorization header. 
We specify the signed headers and the type of algorithm used.\n You can sign more than one header but you have to list them in the headers field (each one separated by a space, e.g. headers=\"Date KeyId\").\n The algorithm used can be HMAC-SHA1, HMAC-SHA256, HMAC-SHA384 or HMAC-SHA512.\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.oidc.OIDCAccessTokenValidator }\n\n## OIDC access_token validator\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `OIDCAccessTokenValidator`\n\n### Description\n\nThis plugin will use the third party apikey configuration and apply it while keeping the apikey mechanism of otoroshi.\nUse it to combine apikey validation and OIDC access_token validation.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"OIDCAccessTokenValidator\": {\n \"enabled\": true,\n \"atLeastOne\": false,\n // config is optional and can be either an object config or an array of objects\n \"config\": {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"OIDCAccessTokenValidator\" : {\n \"enabled\" : true,\n \"atLeastOne\" : false,\n \"config\" : {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n 
\"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.quotas.ServiceQuotas }\n\n## Public quotas\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `ServiceQuotas`\n\n### Description\n\nThis plugin will enforce public quotas on the current service\n\n\n\n\n\n\n\n### Default configuration\n\n```json\n{\n \"ServiceQuotas\" : {\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.users.HasAllowedUsersValidator }\n\n## Allowed users only\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `HasAllowedUsersValidator`\n\n### Description\n\nThis plugin only lets allowed users pass\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"HasAllowedUsersValidator\": {\n \"usernames\": [], // allowed usernames\n \"emails\": [], // allowed user email addresses\n \"emailDomains\": [], // allowed user email domains\n \"metadataMatch\": [], // json path expressions to match against user metadata. passes if one matches\n \"metadataNotMatch\": [], // json path expressions to match against user metadata. passes if none match\n \"profileMatch\": [], // json path expressions to match against user profile. passes if one matches\n \"profileNotMatch\": [] // json path expressions to match against user profile. 
passes if none match\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"HasAllowedUsersValidator\" : {\n \"usernames\" : [ ],\n \"emails\" : [ ],\n \"emailDomains\" : [ ],\n \"metadataMatch\" : [ ],\n \"metadataNotMatch\" : [ ],\n \"profileMatch\" : [ ],\n \"profileNotMatch\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.apikeys.ApikeyAuthModule }\n\n## Apikey auth module\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `ApikeyAuthModule`\n\n### Description\n\nThis plugin adds basic auth on services where credentials are valid apikeys of the current service.\n\n\n\n### Default configuration\n\n```json\n{\n \"ApikeyAuthModule\" : {\n \"realm\" : \"apikey-auth-module-realm\",\n \"noneTagIn\" : [ ],\n \"oneTagIn\" : [ ],\n \"allTagsIn\" : [ ],\n \"noneMetaIn\" : [ ],\n \"oneMetaIn\" : [ ],\n \"allMetaIn\" : [ ],\n \"noneMetaKeysIn\" : [ ],\n \"oneMetaKeyIn\" : [ ],\n \"allMetaKeysIn\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.apikeys.CertificateAsApikey }\n\n## Client certificate as apikey\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `CertificateAsApikey`\n\n### Description\n\nThis plugin uses a client certificate as an apikey. 
The apikey will be stored for classic apikey usage\n\n\n\n### Default configuration\n\n```json\n{\n \"CertificateAsApikey\" : {\n \"readOnly\" : false,\n \"allowClientIdOnly\" : false,\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"constrainedServicesOnly\" : false,\n \"tags\" : [ ],\n \"metadata\" : { }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.apikeys.ClientCredentialFlowExtractor }\n\n## Client Credential Flow ApiKey extractor\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: ``none``\n\n### Description\n\nThis plugin can extract an apikey from an opaque access_token generated by the `ClientCredentialFlow` plugin\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.biscuit.BiscuitExtractor }\n\n## Apikey from Biscuit token extractor\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: ``none``\n\n### Description\n\nThis plugin extracts an apikey from a Biscuit token where the biscuit has an #authority fact 'client_id' containing\nthe apikey client_id and an #authority fact 'client_sign' that is the HMAC256 signature of the apikey client_id with the apikey client_secret\n\n\n\n### Default configuration\n\n```json\n{\n \"publicKey\" : \"xxxxxx\",\n \"checks\" : [ ],\n \"facts\" : [ ],\n \"resources\" : [ ],\n \"rules\" : [ ],\n \"revocation_ids\" : [ ],\n \"enforce\" : false,\n \"extractor\" : {\n \"type\" : \"header\",\n \"name\" : \"Authorization\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.discovery.DiscoveryTargetsSelector }\n\n## Service discovery target selector (service discovery)\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `DiscoverySelfRegistration`\n\n### Description\n\nThis plugin selects a target in the pool of discovered targets for this service.\nUse it in combination with either 
`DiscoverySelfRegistrationSink` or `DiscoverySelfRegistrationTransformer` to make it work using the `self registration` pattern.\nOr use an implementation of `DiscoveryJob` for the `third party registration pattern`.\n\nThis plugin accepts the following configuration:\n\n\n\n### Default configuration\n\n```json\n{\n \"DiscoverySelfRegistration\" : {\n \"hosts\" : [ ],\n \"targetTemplate\" : { },\n \"registrationTtl\" : 60000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.geoloc.IpStackGeolocationInfoExtractor }\n\n## Geolocation details extractor (using IpStack api)\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `GeolocationInfo`\n\n### Description\n\nThis plugin extracts geolocation information from the ip address using the [IpStack dbs](https://ipstack.com/).\nThe information is stored in plugin attrs for other plugins to use\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"GeolocationInfo\": {\n \"apikey\": \"xxxxxxx\",\n \"timeout\": 2000, // timeout in ms\n \"log\": false // will log geolocation details\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"GeolocationInfo\" : {\n \"apikey\" : \"xxxxxxx\",\n \"timeout\" : 2000,\n \"log\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.geoloc.MaxMindGeolocationInfoExtractor }\n\n## Geolocation details extractor (using Maxmind db)\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `GeolocationInfo`\n\n### Description\n\nThis plugin extracts geolocation information from the ip address using the [Maxmind dbs](https://www.maxmind.com/en/geoip2-databases).\nThe information is stored in plugin attrs for other plugins to use\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"GeolocationInfo\": {\n \"path\": \"/foo/bar/cities.mmdb\", // file path, can be \"global\"\n \"log\": false // will log geolocation 
details\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"GeolocationInfo\" : {\n \"path\" : \"global\",\n \"log\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.jwt.JwtUserExtractor }\n\n## Jwt user extractor\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `JwtUserExtractor`\n\n### Description\n\nThis plugin extracts a user from a JWT token\n\n\n\n### Default configuration\n\n```json\n{\n \"JwtUserExtractor\" : {\n \"verifier\" : \"\",\n \"strict\" : true,\n \"namePath\" : \"name\",\n \"emailPath\" : \"email\",\n \"metaPath\" : null\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.oidc.OIDCAccessTokenAsApikey }\n\n## OIDC access_token as apikey\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `OIDCAccessTokenAsApikey`\n\n### Description\n\nThis plugin will use the third party apikey configuration to generate an apikey\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"OIDCAccessTokenAsApikey\": {\n \"enabled\": true,\n \"atLeastOne\": false,\n // config is optional and can be either an object config or an array of objects\n \"config\": {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"OIDCAccessTokenAsApikey\" : {\n \"enabled\" : true,\n \"atLeastOne\" : false,\n \"config\" : {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : 
\"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.useragent.UserAgentExtractor }\n\n## User-Agent details extractor\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `UserAgentInfo`\n\n### Description\n\nThis plugin extracts information from the User-Agent header such as browser version, OS version, etc.\nThe information is stored in plugin attrs for other plugins to use\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"UserAgentInfo\": {\n \"log\": false // will log user-agent details\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"UserAgentInfo\" : {\n \"log\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-sink #otoroshi.plugins.apikeys.ClientCredentialService }\n\n## Client Credential Service\n\n\n\n### Infos\n\n* plugin type: `sink`\n* configuration root: `ClientCredentialService`\n\n### Description\n\nThis plugin adds an oauth client credentials service (`https://unhandleddomain/.well-known/otoroshi/oauth/token`) to create an access_token given a client id and secret.\n\n```json\n{\n \"ClientCredentialService\" : {\n \"domain\" : \"*\",\n \"expiration\" : 3600000,\n \"defaultKeyPair\" : \"otoroshi-jwt-signing\",\n \"secure\" : true\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"ClientCredentialService\" : {\n \"domain\" : \"*\",\n \"expiration\" : 3600000,\n \"defaultKeyPair\" : \"otoroshi-jwt-signing\",\n \"secure\" : true\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-sink #otoroshi.plugins.discovery.DiscoverySelfRegistrationSink }\n\n## Global 
self registration endpoints (service discovery)\n\n\n\n### Infos\n\n* plugin type: `sink`\n* configuration root: `DiscoverySelfRegistration`\n\n### Description\n\nThis plugin adds support for a self registration endpoint on specific hostnames.\n\nThis plugin accepts the following configuration:\n\n\n\n### Default configuration\n\n```json\n{\n \"DiscoverySelfRegistration\" : {\n \"hosts\" : [ ],\n \"targetTemplate\" : { },\n \"registrationTtl\" : 60000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-sink #otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator }\n\n## Kubernetes admission validator webhook\n\n\n\n### Infos\n\n* plugin type: `sink`\n* configuration root: ``none``\n\n### Description\n\nThis plugin exposes a webhook to kubernetes to handle manifest validation\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-sink #otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector }\n\n## Kubernetes sidecar injector webhook\n\n\n\n### Infos\n\n* plugin type: `sink`\n* configuration root: ``none``\n\n### Description\n\nThis plugin exposes a webhook to kubernetes to inject otoroshi-sidecar in pods\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.jobs.StateExporter }\n\n## Otoroshi state exporter job\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `StateExporter`\n\n### Description\n\nThis job sends an event containing the full otoroshi export every n seconds\n\n\n\n### Default configuration\n\n```json\n{\n \"StateExporter\" : {\n \"every_sec\" : 3600,\n \"format\" : \"json\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.next.plugins.TailscaleCertificatesFetcherJob }\n\n## Tailscale certificate fetcher job\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: ``none``\n\n### Description\n\nThis job will fetch certificates from the Tailscale ACME provider\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div 
{ .plugin .plugin-hidden .plugin-kind-job #otoroshi.next.plugins.TailscaleTargetsJob }\n\n## Tailscale targets job\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: ``none``\n\n### Description\n\nThis job will aggregate possible Tailscale online targets\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.plugins.jobs.kubernetes.KubernetesIngressControllerJob }\n\n## Kubernetes Ingress Controller\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `KubernetesConfig`\n\n### Description\n\nThis plugin enables Otoroshi as an Ingress Controller\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : 
\"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n 
\"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob }\n\n## Kubernetes Otoroshi CRDs Controller\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `KubernetesConfig`\n\n### Description\n\nThis plugin enables Otoroshi CRDs Controller\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n 
\"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" 
: false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.plugins.jobs.kubernetes.KubernetesToOtoroshiCertSyncJob }\n\n## Kubernetes to Otoroshi certs. 
synchronizer\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `KubernetesConfig`\n\n### Description\n\nThis plugin syncs TLS secrets from Kubernetes to Otoroshi\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n 
\"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n 
\"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.plugins.jobs.kubernetes.OtoroshiToKubernetesCertSyncJob }\n\n## Otoroshi certs. to Kubernetes secrets synchronizer\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `KubernetesConfig`\n\n### Description\n\nThis plugin syncs Otoroshi certs to Kubernetes TLS secrets\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 
60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n 
\"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-request-handler #otoroshi.next.proxy.ProxyEngine }\n\n## Otoroshi next proxy engine (experimental)\n\n\n\n### Infos\n\n* plugin type: `request-handler`\n* configuration root: `NextGenProxyEngine`\n\n### Description\n\nThis plugin holds the next generation otoroshi proxy engine implementation. 
This engine is **experimental** and may not work as expected!\n\nYou can activate this plugin only on some domain names so you can easily A/B test the new engine.\nThe new proxy engine is designed to be more reactive and more efficient generally.\nIt is also designed to be very efficient on path routing, which wasn't the old engine's strong suit.\n\nThe idea is to only rely on plugins to work and avoid losing time with features that are not used in service descriptors.\nAn automated conversion happens for every service descriptor. If the exposed domain is handled by this plugin, it will be served by this plugin.\nThis plugin introduces new entities that will replace (one day maybe) service descriptors:\n\n - `routes`: a unique routing rule based on hostname, path, method and headers that will execute a bunch of plugins\n - `route-compositions`: multiple routing rules based on hostname, path, method and headers that will execute the same list of plugins\n - `backends`: a list of targets to contact a backend\n\nAs an example, let's say you want to use the new engine on your service exposed on `api.foo.bar/api`.\nTo do that, just add the plugin in the `global plugins` section of the danger zone, inject the default configuration,\nenable it, and in `domains` add the value `api.foo.bar` (it is possible to use `*.foo.bar` if that's what you want to do).\nThe next time a request hits the `api.foo.bar` domain, the new engine will handle it instead of the old one.\n\n\n\n### Default configuration\n\n```json\n{\n \"NextGenProxyEngine\" : {\n \"enabled\" : true,\n \"domains\" : [ \"*\" ],\n \"deny_domains\" : [ ],\n \"reporting\" : true,\n \"merge_sync_steps\" : true,\n \"export_reporting\" : false,\n \"apply_legacy_checks\" : true,\n \"debug\" : false,\n \"capture\" : false,\n \"captureMaxEntitySize\" : 4194304,\n \"debug_headers\" : false,\n \"routing_strategy\" : \"tree\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-request-handler 
#otoroshi.script.ForwardTrafficHandler }\n\n## Forward traffic\n\n\n\n### Infos\n\n* plugin type: `request-handler`\n* configuration root: `ForwardTrafficHandler`\n\n### Description\n\nThis plugin can be used to forward raw traffic to a URL without passing through otoroshi routing\n\n\n\n### Default configuration\n\n```json\n{\n \"ForwardTrafficHandler\" : {\n \"domains\" : {\n \"my.domain.tld\" : {\n \"baseUrl\" : \"https://my.otherdomain.tld\",\n \"secret\" : \"jwt signing secret\",\n \"service\" : {\n \"id\" : \"service id for analytics\",\n \"name\" : \"service name for analytics\"\n }\n }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n\n\n"},{"name":"built-in-plugins.md","id":"/plugins/built-in-plugins.md","url":"/plugins/built-in-plugins.html","title":"Built-in plugins","content":"# Built-in plugins\n\nOtoroshi next provides some plugins out of the box. Here are the available plugins with their documentation and reference configuration\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.AdditionalHeadersIn }\n\n## Additional headers in\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.AdditionalHeadersIn`\n\n### Description\n\nThis plugin adds headers in the incoming otoroshi request\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.AdditionalHeadersOut }\n\n## Additional headers out\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.AdditionalHeadersOut`\n\n### Description\n\nThis plugin adds headers in the otoroshi response\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.AllowHttpMethods }\n\n## Allowed HTTP methods\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin 
reference\n\n`cp:otoroshi.next.plugins.AllowHttpMethods`\n\n### Description\n\nThis plugin verifies the current request only uses allowed http methods\n\n\n\n### Default configuration\n\n```json\n{\n \"allowed\" : [ ],\n \"forbidden\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ApikeyAuthModule }\n\n## Apikey auth module\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ApikeyAuthModule`\n\n### Description\n\nThis plugin adds basic auth on service where credentials are valid apikeys on the current service.\n\n\n\n### Default configuration\n\n```json\n{\n \"realm\" : \"apikey-auth-module-realm\",\n \"matcher\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ApikeyCalls }\n\n## Apikeys\n\n### Defined on steps\n\n - `MatchRoute`\n - `ValidateAccess`\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ApikeyCalls`\n\n### Description\n\nThis plugin expects to find an apikey to allow the request to pass\n\n\n\n### Default configuration\n\n```json\n{\n \"extractors\" : {\n \"basic\" : {\n \"enabled\" : true,\n \"header_name\" : null,\n \"query_name\" : null\n },\n \"custom_headers\" : {\n \"enabled\" : true,\n \"client_id_header_name\" : null,\n \"client_secret_header_name\" : null\n },\n \"client_id\" : {\n \"enabled\" : true,\n \"header_name\" : null,\n \"query_name\" : null\n },\n \"jwt\" : {\n \"enabled\" : true,\n \"secret_signed\" : true,\n \"keypair_signed\" : true,\n \"include_request_attrs\" : false,\n \"max_jwt_lifespan_sec\" : null,\n \"header_name\" : null,\n \"query_name\" : null,\n \"cookie_name\" : null\n }\n },\n \"routing\" : {\n \"enabled\" : false\n },\n \"validate\" : true,\n \"mandatory\" : true,\n \"pass_with_user\" : false,\n \"wipe_backend_request\" : true,\n \"update_quotas\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl 
#otoroshi.next.plugins.ApikeyQuotas }\n\n## Apikey quotas\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ApikeyQuotas`\n\n### Description\n\nIncrements quotas for the current apikey. Useful when 'legacy checks' are disabled on a service/globally or when apikeys are extracted in a custom fashion.\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.AuthModule }\n\n## Authentication\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.AuthModule`\n\n### Description\n\nThis plugin applies an authentication module\n\n\n\n### Default configuration\n\n```json\n{\n \"pass_with_apikey\" : false,\n \"auth_module\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.BasicAuthCaller }\n\n## Basic Auth. caller\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.BasicAuthCaller`\n\n### Description\n\nThis plugin can be used to call APIs that are authenticated using basic auth.\n\n\n\n### Default configuration\n\n```json\n{\n \"username\" : null,\n \"passaword\" : null,\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Basic %s\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.BuildMode }\n\n## Build mode\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.BuildMode`\n\n### Description\n\nThis plugin displays a build page\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.CanaryMode }\n\n## Canary mode\n\n### Defined on steps\n\n - `PreRoute`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.CanaryMode`\n\n### Description\n\nThis plugin can split a portion of the traffic to canary backends\n\n\n\n### Default configuration\n\n```json\n{\n \"traffic\" : 0.2,\n \"targets\" : [ ],\n \"root\" : 
\"/\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ContextValidation }\n\n## Context validator\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ContextValidation`\n\n### Description\n\nThis plugin validates the current context using JSONPath validators.\n\nThis plugin let you configure a list of validators that will check if the current call can pass.\nA validator is composed of a [JSONPath](https://goessner.net/articles/JsonPath/) that will tell what to check and a value that is the expected value.\nThe JSONPath will be applied on a document that will look like\n\n```js\n{\n \"snowflake\" : \"1516772930422308903\",\n \"apikey\" : { // current apikey\n \"clientId\" : \"vrmElDerycXrofar\",\n \"clientName\" : \"default-apikey\",\n \"metadata\" : {\n \"foo\" : \"bar\"\n },\n \"tags\" : [ ]\n },\n \"user\" : null, // current user\n \"request\" : {\n \"id\" : 1,\n \"method\" : \"GET\",\n \"headers\" : {\n \"Host\" : \"ctx-validation-next-gen.oto.tools:9999\",\n \"Accept\" : \"*/*\",\n \"User-Agent\" : \"curl/7.64.1\",\n \"Authorization\" : \"Basic dnJtRWxEZXJ5Y1hyb2ZhcjpvdDdOSTkyVGI2Q2J4bWVMYU9UNzJxamdCU2JlRHNLbkxtY1FBcXBjVjZTejh0Z3I1b2RUOHAzYjB5SEVNRzhZ\",\n \"Remote-Address\" : \"127.0.0.1:58929\",\n \"Timeout-Access\" : \"\",\n \"Raw-Request-URI\" : \"/foo\",\n \"Tls-Session-Info\" : \"Session(1650461821330|SSL_NULL_WITH_NULL_NULL)\"\n },\n \"cookies\" : [ ],\n \"tls\" : false,\n \"uri\" : \"/foo\",\n \"path\" : \"/foo\",\n \"version\" : \"HTTP/1.1\",\n \"has_body\" : false,\n \"remote\" : \"127.0.0.1\",\n \"client_cert_chain\" : null\n },\n \"config\" : {\n \"validators\" : [ {\n \"path\" : \"$.apikey.metadata.foo\",\n \"value\" : \"bar\"\n } ]\n },\n \"global_config\" : { ... 
}, // global config\n \"attrs\" : {\n \"otoroshi.core.SnowFlake\" : \"1516772930422308903\",\n \"otoroshi.core.ElCtx\" : {\n \"requestId\" : \"1516772930422308903\",\n \"requestSnowflake\" : \"1516772930422308903\",\n \"requestTimestamp\" : \"2022-04-20T15:37:01.548+02:00\"\n },\n \"otoroshi.next.core.Report\" : \"otoroshi.next.proxy.NgExecutionReport@277b44e2\",\n \"otoroshi.core.RequestStart\" : 1650461821545,\n \"otoroshi.core.RequestWebsocket\" : false,\n \"otoroshi.core.RequestCounterOut\" : 0,\n \"otoroshi.core.RemainingQuotas\" : {\n \"authorizedCallsPerSec\" : 10000000,\n \"currentCallsPerSec\" : 0,\n \"remainingCallsPerSec\" : 10000000,\n \"authorizedCallsPerDay\" : 10000000,\n \"currentCallsPerDay\" : 2,\n \"remainingCallsPerDay\" : 9999998,\n \"authorizedCallsPerMonth\" : 10000000,\n \"currentCallsPerMonth\" : 269,\n \"remainingCallsPerMonth\" : 9999731\n },\n \"otoroshi.next.core.MatchedRoutes\" : \"MutableList(route_022825450-e97d-42ed-8e22-b23342c1c7c8)\",\n \"otoroshi.core.RequestNumber\" : 1,\n \"otoroshi.next.core.Route\" : { ... }, // current route as json\n \"otoroshi.core.RequestTimestamp\" : \"2022-04-20T15:37:01.548+02:00\",\n \"otoroshi.core.ApiKey\" : { ... }, // current apikey as json\n \"otoroshi.core.User\" : { ... }, // current user as json\n \"otoroshi.core.RequestCounterIn\" : 0\n },\n \"route\" : { ... 
},\n \"token\" : null // current valid jwt token if one\n}\n```\n\nthe expected value supports some syntax tricks like\n\n* `Not(value)` on a string to check if the current value does not equal another value\n* `Regex(regex)` on a string to check if the current value matches the regex\n* `RegexNot(regex)` on a string to check if the current value does not match the regex\n* `Wildcard(*value*)` on a string to check if the current value matches the value with wildcards\n* `WildcardNot(*value*)` on a string to check if the current value does not match the value with wildcards\n* `Contains(value)` on a string to check if the current value contains a value\n* `ContainsNot(value)` on a string to check if the current value does not contain a value\n* `Contains(Regex(regex))` on an array to check if one of the items of the array matches the regex\n* `ContainsNot(Regex(regex))` on an array to check if one of the items of the array does not match the regex\n* `Contains(Wildcard(*value*))` on an array to check if one of the items of the array matches the wildcard value\n* `ContainsNot(Wildcard(*value*))` on an array to check if one of the items of the array does not match the wildcard value\n* `Contains(value)` on an array to check if the array contains a value\n* `ContainsNot(value)` on an array to check if the array does not contain a value\n\nfor instance, to check if the current apikey has a metadata name `foo` with a value containing `bar`, you can write the following validator\n\n```js\n{\n \"path\": \"$.apikey.metadata.foo\",\n \"value\": \"Contains(bar)\"\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"validators\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.Cors }\n\n## CORS\n\n### Defined on steps\n\n - `PreRoute`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.Cors`\n\n### Description\n\nThis plugin applies CORS rules\n\n\n\n### Default configuration\n\n```json\n{\n 
\"allow_origin\" : \"*\",\n \"expose_headers\" : [ ],\n \"allow_headers\" : [ ],\n \"allow_methods\" : [ ],\n \"excluded_patterns\" : [ ],\n \"max_age\" : null,\n \"allow_credentials\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.DisableHttp10 }\n\n## Disable HTTP/1.0\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.DisableHttp10`\n\n### Description\n\nThis plugin forbids HTTP/1.0 requests\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.EndlessHttpResponse }\n\n## Endless HTTP responses\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.EndlessHttpResponse`\n\n### Description\n\nThis plugin returns 128 GB of zeros to the ip addresses in the list\n\n\n\n### Default configuration\n\n```json\n{\n \"finger\" : false,\n \"addresses\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.EurekaServerSink }\n\n## Eureka instance\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.EurekaServerSink`\n\n### Description\n\nEureka plugin description\n\n\n\n### Default configuration\n\n```json\n{\n \"evictionTimeout\" : 300\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.EurekaTarget }\n\n## Internal Eureka target\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.EurekaTarget`\n\n### Description\n\nThis plugin can be used to use a target that comes from an internal Eureka server.\n If you want to use a target located outside of Otoroshi, you must use the External Eureka Server.\n\n\n\n### Default configuration\n\n```json\n{\n \"eureka_server\" : null,\n \"eureka_app\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ExternalEurekaTarget }\n\n## External Eureka 
target\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ExternalEurekaTarget`\n\n### Description\n\nThis plugin can be used to use a target that comes from an external Eureka server.\n If you want to use a target that is directly exposed by an implementation of Eureka by Otoroshi,\n you must use the Internal Eureka Server.\n\n\n\n### Default configuration\n\n```json\n{\n \"eureka_server\" : null,\n \"eureka_app\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ForceHttpsTraffic }\n\n## Force HTTPS traffic\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ForceHttpsTraffic`\n\n### Description\n\nThis plugin verifies the current request uses HTTPS\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GlobalMaintenanceMode }\n\n## Global Maintenance mode\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GlobalMaintenanceMode`\n\n### Description\n\nThis plugin displays a maintenance page for every service. Useful when 'legacy checks' are disabled on a service/globally\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GlobalPerIpAddressThrottling }\n\n## Global per ip address throttling \n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GlobalPerIpAddressThrottling`\n\n### Description\n\nEnforce global per ip address throttling. Useful when 'legacy checks' are disabled on a service/globally\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GlobalThrottling }\n\n## Global throttling \n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GlobalThrottling`\n\n### Description\n\nEnforce global throttling. 
Useful when 'legacy checks' are disabled on a service/globally\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GraphQLBackend }\n\n## GraphQL Composer\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GraphQLBackend`\n\n### Description\n\nThis plugin exposes a GraphQL API that you can compose with whatever you want\n\n\n\n### Default configuration\n\n```json\n{\n \"schema\" : \"\\n type User {\\n name: String!\\n firstname: String!\\n }\\n\\n type Query {\\n users: [User] @json(data: \\\"[{ \\\\\\\"firstname\\\\\\\": \\\\\\\"Foo\\\\\\\", \\\\\\\"name\\\\\\\": \\\\\\\"Bar\\\\\\\" }, { \\\\\\\"firstname\\\\\\\": \\\\\\\"Bar\\\\\\\", \\\\\\\"name\\\\\\\": \\\\\\\"Foo\\\\\\\" }]\\\")\\n }\\n \",\n \"permissions\" : [ ],\n \"initial_data\" : null,\n \"max_depth\" : 15\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GraphQLProxy }\n\n## GraphQL Proxy\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GraphQLProxy`\n\n### Description\n\nThis plugin can apply validations (query, schema, max depth, max complexity) on graphql endpoints\n\n\n\n### Default configuration\n\n```json\n{\n \"endpoint\" : \"https://countries.trevorblades.com/graphql\",\n \"schema\" : null,\n \"max_depth\" : 50,\n \"max_complexity\" : 50000,\n \"path\" : \"/graphql\",\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GraphQLQuery }\n\n## GraphQL Query to REST\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GraphQLQuery`\n\n### Description\n\nThis plugin can be used to call GraphQL query endpoints and expose it as a REST endpoint\n\n\n\n### Default configuration\n\n```json\n{\n \"url\" : \"https://some.graphql/endpoint\",\n \"headers\" : { },\n \"method\" : \"POST\",\n \"query\" : \"{\\n\\n}\",\n \"timeout\" : 60000,\n 
\"response_path\" : null,\n \"response_filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GzipResponseCompressor }\n\n## Gzip compression\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GzipResponseCompressor`\n\n### Description\n\nThis plugin can compress responses using gzip\n\n\n\n### Default configuration\n\n```json\n{\n \"excluded_patterns\" : [ ],\n \"allowed_list\" : [ \"text/*\", \"application/javascript\", \"application/json\" ],\n \"blocked_list\" : [ ],\n \"buffer_size\" : 8192,\n \"chunked_threshold\" : 102400,\n \"compression_level\" : 5\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.HMACCaller }\n\n## HMAC caller plugin\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.HMACCaller`\n\n### Description\n\nThis plugin can be used to call a \"protected\" api by an HMAC signature. 
It will add a signature using the secret configured on the plugin.\n The signature string will always be built from the headers listed in the plugin configuration.\n\n\n\n### Default configuration\n\n```json\n{\n \"secret\" : null,\n \"algo\" : \"HMAC-SHA512\",\n \"authorizationHeader\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.HMACValidator }\n\n## HMAC access validator\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.HMACValidator`\n\n### Description\n\nThis plugin can be used to check if an HMAC signature is present and valid in the Authorization header.\n\n\n\n### Default configuration\n\n```json\n{\n \"secret\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.HeadersValidation }\n\n## Headers validation\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.HeadersValidation`\n\n### Description\n\nThis plugin validates the values of incoming request headers\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.Http3Switch }\n\n## Http3 traffic switch\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.Http3Switch`\n\n### Description\n\nThis plugin injects an additional alt-svc header to switch to the http3 server\n\n\n\n### Default configuration\n\n```json\n{\n \"ma\" : 3600\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.IpAddressAllowedList }\n\n## IP allowed list\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.IpAddressAllowedList`\n\n### Description\n\nThis plugin verifies the current request ip address is in the allowed list\n\n\n\n### Default configuration\n\n```json\n{\n \"addresses\" : [ 
]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.IpAddressBlockList }\n\n## IP block list\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.IpAddressBlockList`\n\n### Description\n\nThis plugin verifies the current request ip address is not in the blocked list\n\n\n\n### Default configuration\n\n```json\n{\n \"addresses\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JQ }\n\n## JQ\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JQ`\n\n### Description\n\nThis plugin lets you transform JSON bodies (in requests and responses) using [JQ filters](https://stedolan.github.io/jq/manual/#Basicfilters).\n\n\n\n### Default configuration\n\n```json\n{\n \"request\" : \".\",\n \"response\" : \"\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JQRequest }\n\n## JQ transform request\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JQRequest`\n\n### Description\n\nThis plugin lets you transform the request JSON body using [JQ filters](https://stedolan.github.io/jq/manual/#Basicfilters).\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : \".\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JQResponse }\n\n## JQ transform response\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JQResponse`\n\n### Description\n\nThis plugin lets you transform the JSON response using [JQ filters](https://stedolan.github.io/jq/manual/#Basicfilters).\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : \".\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JsonToXmlRequest }\n\n## request body json-to-xml\n\n### Defined on steps\n\n - 
`TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JsonToXmlRequest`\n\n### Description\n\nThis plugin transforms the incoming request body from json to xml and may apply a jq transformation\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JsonToXmlResponse }\n\n## response body json-to-xml\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JsonToXmlResponse`\n\n### Description\n\nThis plugin transforms the response body from json to xml and may apply a jq transformation\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JwtSigner }\n\n## Jwt signer\n\n### Defined on steps\n\n - `ValidateAccess`\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JwtSigner`\n\n### Description\n\nThis plugin can only generate a token\n\n\n\n### Default configuration\n\n```json\n{\n \"verifier\" : null,\n \"replace_if_present\" : true,\n \"fail_if_present\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JwtVerification }\n\n## Jwt verifiers\n\n### Defined on steps\n\n - `ValidateAccess`\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JwtVerification`\n\n### Description\n\nThis plugin verifies the current request with one or more jwt verifiers\n\n\n\n### Default configuration\n\n```json\n{\n \"verifiers\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JwtVerificationOnly }\n\n## Jwt verification only\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JwtVerificationOnly`\n\n### Description\n\nThis plugin verifies the current request with one jwt verifier\n\n\n\n### Default configuration\n\n```json\n{\n \"verifier\" 
: null,\n \"fail_if_absent\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.MaintenanceMode }\n\n## Maintenance mode\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.MaintenanceMode`\n\n### Description\n\nThis plugin displays a maintenance page\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.MissingHeadersIn }\n\n## Missing headers in\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.MissingHeadersIn`\n\n### Description\n\nThis plugin adds headers (if missing) to the incoming otoroshi request\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.MissingHeadersOut }\n\n## Missing headers out\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.MissingHeadersOut`\n\n### Description\n\nThis plugin adds headers (if missing) to the otoroshi response\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.MockResponses }\n\n## Mock Responses\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.MockResponses`\n\n### Description\n\nThis plugin returns mock responses\n\n\n\n### Default configuration\n\n```json\n{\n \"responses\" : [ ],\n \"pass_through\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgAuthModuleExpectedUser }\n\n## User logged in expected\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgAuthModuleExpectedUser`\n\n### Description\n\nThis plugin enforces that a user from any auth. 
module is logged in\n\n\n\n### Default configuration\n\n```json\n{\n \"only_from\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgAuthModuleUserExtractor }\n\n## User extraction from auth. module\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgAuthModuleUserExtractor`\n\n### Description\n\nThis plugin extracts users from an authentication module without enforcing login\n\n\n\n### Default configuration\n\n```json\n{\n \"auth_module\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgBiscuitExtractor }\n\n## Apikey from Biscuit token extractor\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgBiscuitExtractor`\n\n### Description\n\nThis plugin extracts an apikey from a Biscuit token where the biscuit has an #authority fact 'client_id' containing the\napikey client_id and an #authority fact 'client_sign' that is the HMAC256 signature of the apikey client_id with the apikey client_secret\n\n\n\n### Default configuration\n\n```json\n{\n \"public_key\" : null,\n \"checks\" : [ ],\n \"facts\" : [ ],\n \"resources\" : [ ],\n \"rules\" : [ ],\n \"revocation_ids\" : [ ],\n \"extractor\" : {\n \"name\" : \"Authorization\",\n \"type\" : \"header\"\n },\n \"enforce\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgBiscuitValidator }\n\n## Biscuit token validator\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgBiscuitValidator`\n\n### Description\n\nThis plugin validates a Biscuit token\n\n\n\n### Default configuration\n\n```json\n{\n \"public_key\" : null,\n \"checks\" : [ ],\n \"facts\" : [ ],\n \"resources\" : [ ],\n \"rules\" : [ ],\n \"revocation_ids\" : [ ],\n \"extractor\" : {\n \"name\" : \"Authorization\",\n \"type\" : \"header\"\n },\n \"enforce\" : 
false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgCertificateAsApikey }\n\n## Client certificate as apikey\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgCertificateAsApikey`\n\n### Description\n\nThis plugin uses the client certificate as an apikey. The apikey will be stored for classic apikey usage\n\n\n\n### Default configuration\n\n```json\n{\n \"read_only\" : false,\n \"allow_client_id_only\" : false,\n \"throttling_quota\" : 100,\n \"daily_quota\" : 10000000,\n \"monthly_quota\" : 10000000,\n \"constrained_services_only\" : false,\n \"tags\" : [ ],\n \"metadata\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgClientCertChainHeader }\n\n## Client certificate header\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgClientCertChainHeader`\n\n### Description\n\nThis plugin passes client certificate information to the target in headers\n\n\n\n### Default configuration\n\n```json\n{\n \"send_pem\" : false,\n \"pem_header_name\" : \"X-Client-Cert-Pem\",\n \"send_dns\" : false,\n \"dns_header_name\" : \"X-Client-Cert-DNs\",\n \"send_chain\" : false,\n \"chain_header_name\" : \"X-Client-Cert-Chain\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgClientCredentials }\n\n## Client Credential Service\n\n### Defined on steps\n\n - `Sink`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgClientCredentials`\n\n### Description\n\nThis plugin adds an oauth client credentials service (`https://unhandleddomain/.well-known/otoroshi/oauth/token`) to create an access_token given a client id and secret\n\n\n\n### Default configuration\n\n```json\n{\n \"expiration\" : 3600000,\n \"default_key_pair\" : \"otoroshi-jwt-signing\",\n \"domain\" : \"*\",\n \"secure\" : true,\n \"biscuit\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { 
.ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgDefaultRequestBody }\n\n## Default request body\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDefaultRequestBody`\n\n### Description\n\nThis plugin adds a default request body if none is specified\n\n\n\n### Default configuration\n\n```json\n{\n \"bodyBinary\" : \"\",\n \"contentType\" : \"text/plain\",\n \"contentEncoding\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgDeferPlugin }\n\n## Defer Responses\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDeferPlugin`\n\n### Description\n\nThis plugin will expect an `X-Defer` header or a `defer` query param and defer the response according to the value in milliseconds.\nThis plugin is some kind of inside joke, as one of our customers asked us to make slower apis.\n\n\n\n### Default configuration\n\n```json\n{\n \"duration\" : 0\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgDiscoverySelfRegistrationSink }\n\n## Global self registration endpoints (service discovery)\n\n### Defined on steps\n\n - `Sink`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDiscoverySelfRegistrationSink`\n\n### Description\n\nThis plugin adds support for a self registration endpoint on specific hostnames\n\n\n\n### Default configuration\n\n```json\n{ }\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgDiscoverySelfRegistrationTransformer }\n\n## Self registration endpoints (service discovery)\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDiscoverySelfRegistrationTransformer`\n\n### Description\n\nThis plugin adds support for a self registration endpoint on a specific service\n\n\n\n### Default configuration\n\n```json\n{ }\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl 
#otoroshi.next.plugins.NgDiscoveryTargetsSelector }\n\n## Service discovery target selector (service discovery)\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDiscoveryTargetsSelector`\n\n### Description\n\nThis plugin selects a target in the pool of discovered targets for this service.\nUse it in combination with either `DiscoverySelfRegistrationSink` or `DiscoverySelfRegistrationTransformer` to make it work using the `self registration` pattern.\nOr use an implementation of `DiscoveryJob` for the `third party registration pattern`.\n\n\n\n### Default configuration\n\n```json\n{ }\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgErrorRewriter }\n\n## Error response rewrite\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgErrorRewriter`\n\n### Description\n\nThis plugin catches http responses with specific statuses and rewrites the response\n\n\n\n### Default configuration\n\n```json\n{\n \"ranges\" : [ {\n \"from\" : 500,\n \"to\" : 599\n } ],\n \"templates\" : {\n \"default\" : \"\\n \\n
An error occurred with id: ${error_id}\\n please contact your administrator with this error id !
\\n \\n\"\n },\n \"log\" : true,\n \"export\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgGeolocationInfoEndpoint }\n\n## Geolocation endpoint\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgGeolocationInfoEndpoint`\n\n### Description\n\nThis plugin will expose current geolocation informations on the following endpoint `/.well-known/otoroshi/plugins/geolocation`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgGeolocationInfoHeader }\n\n## Geolocation header\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgGeolocationInfoHeader`\n\n### Description\n\nThis plugin will send informations extracted by the Geolocation details extractor to the target service in a header.\n\n\n\n### Default configuration\n\n```json\n{\n \"header_name\" : \"X-User-Agent-Info\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasAllowedUsersValidator }\n\n## Allowed users only\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasAllowedUsersValidator`\n\n### Description\n\nThis plugin only let allowed users pass\n\n\n\n### Default configuration\n\n```json\n{\n \"usernames\" : [ ],\n \"emails\" : [ ],\n \"email_domains\" : [ ],\n \"metadata_match\" : [ ],\n \"metadata_not_match\" : [ ],\n \"profile_match\" : [ ],\n \"profile_not_match\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasClientCertMatchingApikeyValidator }\n\n## Client Certificate + Api Key only\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasClientCertMatchingApikeyValidator`\n\n### Description\n\nCheck if a client certificate is present in the request and that the apikey used matches the client certificate.\nYou can set 
the client cert. DN in an apikey metadata named `allowed-client-cert-dn`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasClientCertMatchingHttpValidator }\n\n## Client certificate matching (over http)\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasClientCertMatchingHttpValidator`\n\n### Description\n\nCheck if the client certificate matches a configuration fetched from an http endpoint\n\n\n\n### Default configuration\n\n```json\n{\n \"serial_numbers\" : [ ],\n \"subject_dns\" : [ ],\n \"issuer_dns\" : [ ],\n \"regex_subject_dns\" : [ ],\n \"regex_issuer_dns\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasClientCertMatchingValidator }\n\n## Client certificate matching\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasClientCertMatchingValidator`\n\n### Description\n\nCheck if the client certificate matches the following configuration\n\n\n\n### Default configuration\n\n```json\n{\n \"serial_numbers\" : [ ],\n \"subject_dns\" : [ ],\n \"issuer_dns\" : [ ],\n \"regex_subject_dns\" : [ ],\n \"regex_issuer_dns\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasClientCertValidator }\n\n## Client Certificate Only\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasClientCertValidator`\n\n### Description\n\nCheck if a client certificate is present in the request\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHtmlPatcher }\n\n## Html Patcher\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHtmlPatcher`\n\n### Description\n\nThis plugin can inject elements in html pages (in the body or in the head) returned by the service\n\n\n\n### Default configuration\n\n```json\n{\n 
\"append_head\" : [ ],\n \"append_body\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHttpClientCache }\n\n## HTTP Client Cache\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHttpClientCache`\n\n### Description\n\nThis plugin add cache headers to responses\n\n\n\n### Default configuration\n\n```json\n{\n \"max_age_seconds\" : 86400,\n \"methods\" : [ \"GET\" ],\n \"status\" : [ 200 ],\n \"mime_types\" : [ \"text/html\" ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgIpStackGeolocationInfoExtractor }\n\n## Geolocation details extractor (using IpStack api)\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgIpStackGeolocationInfoExtractor`\n\n### Description\n\nThis plugin extract geolocation informations from ip address using the [IpStack dbs](https://ipstack.com/).\nThe informations are store in plugins attrs for other plugins to use\n\n\n\n### Default configuration\n\n```json\n{\n \"apikey\" : null,\n \"timeout\" : 2000,\n \"log\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgIzanamiV1Canary }\n\n## Izanami V1 Canary Campaign\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgIzanamiV1Canary`\n\n### Description\n\nThis plugin allow you to perform canary testing based on an izanami experiment campaign (A/B test)\n\n\n\n### Default configuration\n\n```json\n{\n \"experiment_id\" : \"foo:bar:qix\",\n \"config_id\" : \"foo:bar:qix:config\",\n \"izanami_url\" : \"https://izanami.foo.bar\",\n \"tls\" : {\n \"certs\" : [ ],\n \"trusted_certs\" : [ ],\n \"enabled\" : false,\n \"loose\" : false,\n \"trust_all\" : false\n },\n \"client_id\" : \"client\",\n \"client_secret\" : \"secret\",\n \"timeout\" : 5000,\n \"route_config\" : 
null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgIzanamiV1Proxy }\n\n## Izanami v1 APIs Proxy\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgIzanamiV1Proxy`\n\n### Description\n\nThis plugin exposes routes to proxy Izanami configuration and features tree APIs\n\n\n\n### Default configuration\n\n```json\n{\n \"path\" : \"/api/izanami\",\n \"feature_pattern\" : \"*\",\n \"config_pattern\" : \"*\",\n \"auto_context\" : false,\n \"features_enabled\" : true,\n \"features_with_context_enabled\" : true,\n \"configuration_enabled\" : false,\n \"tls\" : {\n \"certs\" : [ ],\n \"trusted_certs\" : [ ],\n \"enabled\" : false,\n \"loose\" : false,\n \"trust_all\" : false\n },\n \"izanami_url\" : \"https://izanami.foo.bar\",\n \"client_id\" : \"client\",\n \"client_secret\" : \"secret\",\n \"timeout\" : 500\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgJwtUserExtractor }\n\n## Jwt user extractor\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgJwtUserExtractor`\n\n### Description\n\nThis plugin extracts a user from a JWT token\n\n\n\n### Default configuration\n\n```json\n{\n \"verifier\" : \"none\",\n \"strict\" : true,\n \"strip\" : false,\n \"name_path\" : null,\n \"email_path\" : null,\n \"meta_path\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgLegacyApikeyCall }\n\n## Legacy apikeys\n\n### Defined on steps\n\n - `MatchRoute`\n - `ValidateAccess`\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgLegacyApikeyCall`\n\n### Description\n\nThis plugin expects to find an apikey to allow the request to pass. 
This plugin behaves exactly like the service descriptor does\n\n\n\n### Default configuration\n\n```json\n{\n \"public_patterns\" : [ ],\n \"private_patterns\" : [ ],\n \"extractors\" : {\n \"basic\" : {\n \"enabled\" : true,\n \"header_name\" : null,\n \"query_name\" : null\n },\n \"custom_headers\" : {\n \"enabled\" : true,\n \"client_id_header_name\" : null,\n \"client_secret_header_name\" : null\n },\n \"client_id\" : {\n \"enabled\" : true,\n \"header_name\" : null,\n \"query_name\" : null\n },\n \"jwt\" : {\n \"enabled\" : true,\n \"secret_signed\" : true,\n \"keypair_signed\" : true,\n \"include_request_attrs\" : false,\n \"max_jwt_lifespan_sec\" : null,\n \"header_name\" : null,\n \"query_name\" : null,\n \"cookie_name\" : null\n }\n },\n \"routing\" : {\n \"enabled\" : false\n },\n \"validate\" : true,\n \"mandatory\" : true,\n \"pass_with_user\" : false,\n \"wipe_backend_request\" : true,\n \"update_quotas\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgLegacyAuthModuleCall }\n\n## Legacy Authentication\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgLegacyAuthModuleCall`\n\n### Description\n\nThis plugin applies an authentication module the same way a service descriptor does\n\n\n\n### Default configuration\n\n```json\n{\n \"public_patterns\" : [ ],\n \"private_patterns\" : [ ],\n \"pass_with_apikey\" : false,\n \"auth_module\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgLog4ShellFilter }\n\n## Log4Shell mitigation plugin\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgLog4ShellFilter`\n\n### Description\n\nThis plugin tries to detect Log4Shell attacks in requests and blocks them\n\n\n\n### Default configuration\n\n```json\n{\n \"status\" : 200,\n \"body\" : \"\",\n \"parse_body\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin 
.plugin-hidden .pl #otoroshi.next.plugins.NgMaxMindGeolocationInfoExtractor }\n\n## Geolocation details extractor (using Maxmind db)\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgMaxMindGeolocationInfoExtractor`\n\n### Description\n\nThis plugin extracts geolocation information from the ip address using the [Maxmind dbs](https://www.maxmind.com/en/geoip2-databases).\nThe information is stored in plugin attrs for other plugins to use\n\n\n\n### Default configuration\n\n```json\n{\n \"path\" : \"global\",\n \"log\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgResponseCache }\n\n## Response Cache\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgResponseCache`\n\n### Description\n\nThis plugin can cache responses from target services in the otoroshi datastore.\nIt also provides a debug UI at `/.well-known/otoroshi/bodylogger`.\n\n\n\n### Default configuration\n\n```json\n{\n \"ttl\" : 3600000,\n \"maxSize\" : 52428800,\n \"autoClean\" : true,\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgSecurityTxt }\n\n## Security Txt\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgSecurityTxt`\n\n### Description\n\nThis plugin exposes a special route `/.well-known/security.txt` as proposed at [https://securitytxt.org/](https://securitytxt.org/)\n\n\n\n### Default configuration\n\n```json\n{\n \"contact\" : \"contact@foo.bar\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgServiceQuotas }\n\n## Public quotas\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgServiceQuotas`\n\n### Description\n\nThis plugin will enforce public quotas on the current route\n\n\n\n### Default 
configuration\n\n```json\n{\n \"throttling_quota\" : 10000000,\n \"daily_quota\" : 10000000,\n \"monthly_quota\" : 10000000\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgTrafficMirroring }\n\n## Traffic Mirroring\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgTrafficMirroring`\n\n### Description\n\nThis plugin will mirror every request to other targets\n\n\n\n### Default configuration\n\n```json\n{\n \"to\" : \"https://foo.bar.dev\",\n \"enabled\" : true,\n \"capture_response\" : false,\n \"generate_events\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgUserAgentExtractor }\n\n## User-Agent details extractor\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgUserAgentExtractor`\n\n### Description\n\nThis plugin extracts information from the User-Agent header such as browser version, OS version, etc.\nThe information is stored in plugin attrs for other plugins to use\n\n\n\n### Default configuration\n\n```json\n{\n \"log\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgUserAgentInfoEndpoint }\n\n## User-Agent endpoint\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgUserAgentInfoEndpoint`\n\n### Description\n\nThis plugin exposes current user-agent information on the following endpoint: /.well-known/otoroshi/plugins/user-agent\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgUserAgentInfoHeader }\n\n## User-Agent header\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgUserAgentInfoHeader`\n\n### Description\n\nThis plugin sends information extracted by the User-Agent details extractor to the target service in a header\n\n\n\n### 
Default configuration\n\n```json\n{\n \"header_name\" : \"X-User-Agent-Info\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OAuth1Caller }\n\n## OAuth1 caller\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OAuth1Caller`\n\n### Description\n\nThis plugin can be used to call apis that are authenticated using OAuth1.\n The consumer key, consumer secret, OAuth token and OAuth token secret can be passed through the metadata of an api key\n or via the configuration of this plugin.\n\n\n\n### Default configuration\n\n```json\n{\n \"consumerKey\" : null,\n \"consumerSecret\" : null,\n \"token\" : null,\n \"tokenSecret\" : null,\n \"algo\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OAuth2Caller }\n\n## OAuth2 caller\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OAuth2Caller`\n\n### Description\n\nThis plugin can be used to call apis that are authenticated using the OAuth2 client_credentials/password flow.\nDo not forget to enable client retry to handle token generation on expiry.\n\n\n\n### Default configuration\n\n```json\n{\n \"kind\" : \"client_credentials\",\n \"url\" : \"https://127.0.0.1:8080/oauth/token\",\n \"method\" : \"POST\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Bearer %s\",\n \"jsonPayload\" : false,\n \"clientId\" : \"the client_id\",\n \"clientSecret\" : \"the client_secret\",\n \"scope\" : null,\n \"audience\" : null,\n \"user\" : null,\n \"password\" : null,\n \"cacheTokenSeconds\" : 600000,\n \"tlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OIDCAccessTokenAsApikey }\n\n## OIDC access_token as apikey\n\n### Defined on steps\n\n - `PreRoute`\n\n### 
Plugin reference\n\n`cp:otoroshi.next.plugins.OIDCAccessTokenAsApikey`\n\n### Description\n\nThis plugin will use the third party apikey configuration to generate an apikey\n\n\n\n### Default configuration\n\n```json\n{\n \"enabled\" : true,\n \"atLeastOne\" : false,\n \"config\" : {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OIDCAccessTokenValidator }\n\n## OIDC access_token validator\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OIDCAccessTokenValidator`\n\n### Description\n\nThis plugin will use the third party apikey configuration and apply it while keeping the apikey mechanism of otoroshi.\nUse it to combine apikey validation and OIDC access_token validation.\n\n\n\n### Default configuration\n\n```json\n{\n \"enabled\" : true,\n \"atLeastOne\" : false,\n \"config\" : {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OIDCHeaders }\n\n## OIDC headers\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin 
reference\n\n`cp:otoroshi.next.plugins.OIDCHeaders`\n\n### Description\n\nThis plugin injects headers containing tokens and profile from the current OIDC provider.\n\n\n\n### Default configuration\n\n```json\n{\n \"profile\" : {\n \"send\" : false,\n \"headerName\" : \"X-OIDC-User\"\n },\n \"idToken\" : {\n \"send\" : false,\n \"name\" : \"id_token\",\n \"headerName\" : \"X-OIDC-Id-Token\",\n \"jwt\" : true\n },\n \"accessToken\" : {\n \"send\" : false,\n \"name\" : \"access_token\",\n \"headerName\" : \"X-OIDC-Access-Token\",\n \"jwt\" : true\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OtoroshiChallenge }\n\n## Otoroshi challenge token\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OtoroshiChallenge`\n\n### Description\n\nThis plugin adds a jwt challenge token to the request to a backend and expects a response with a matching token\n\n\n\n### Default configuration\n\n```json\n{\n \"version\" : \"V2\",\n \"ttl\" : 30,\n \"request_header_name\" : null,\n \"response_header_name\" : null,\n \"algo_to_backend\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"algo_from_backend\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"state_resp_leeway\" : 10\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OtoroshiHeadersIn }\n\n## Otoroshi headers in\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OtoroshiHeadersIn`\n\n### Description\n\nThis plugin adds Otoroshi specific headers to the request\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OtoroshiInfos }\n\n## Otoroshi info. 
token\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OtoroshiInfos`\n\n### Description\n\nThis plugin adds a jwt token with information about the caller to the backend\n\n\n\n### Default configuration\n\n```json\n{\n \"version\" : \"Latest\",\n \"ttl\" : 30,\n \"header_name\" : null,\n \"add_fields\" : null,\n \"algo\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OverrideHost }\n\n## Override host header\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OverrideHost`\n\n### Description\n\nThis plugin overrides the current Host header with the Host of the backend target\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.PublicPrivatePaths }\n\n## Public/Private paths\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.PublicPrivatePaths`\n\n### Description\n\nThis plugin allows or forbids requests based on path patterns\n\n\n\n### Default configuration\n\n```json\n{\n \"strict\" : false,\n \"private_patterns\" : [ ],\n \"public_patterns\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.QueryTransformer }\n\n## Query param transformer\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.QueryTransformer`\n\n### Description\n\nThis plugin can modify the query params of the request\n\n\n\n### Default configuration\n\n```json\n{\n \"remove\" : [ ],\n \"rename\" : { },\n \"add\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.RBAC }\n\n## RBAC\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.RBAC`\n\n### Description\n\nThis plugin checks if 
current user/apikey/jwt token has the right role\n\n\n\n### Default configuration\n\n```json\n{\n \"allow\" : [ ],\n \"deny\" : [ ],\n \"allow_all\" : false,\n \"deny_all\" : false,\n \"jwt_path\" : null,\n \"apikey_path\" : null,\n \"user_path\" : null,\n \"role_prefix\" : null,\n \"roles\" : \"roles\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ReadOnlyCalls }\n\n## Read only requests\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ReadOnlyCalls`\n\n### Description\n\nThis plugin verifies the current request only reads data\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.Redirection }\n\n## Redirection\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.Redirection`\n\n### Description\n\nThis plugin redirects the current request elsewhere\n\n\n\n### Default configuration\n\n```json\n{\n \"code\" : 303,\n \"to\" : \"https://www.otoroshi.io\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.RemoveHeadersIn }\n\n## Remove headers in\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.RemoveHeadersIn`\n\n### Description\n\nThis plugin removes headers in the incoming otoroshi request\n\n\n\n### Default configuration\n\n```json\n{\n \"header_names\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.RemoveHeadersOut }\n\n## Remove headers out\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.RemoveHeadersOut`\n\n### Description\n\nThis plugin removes headers in the otoroshi response\n\n\n\n### Default configuration\n\n```json\n{\n \"header_names\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.Robots }\n\n## Robots\n\n### Defined on steps\n\n - 
`TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.Robots`\n\n### Description\n\nThis plugin provides all the necessary tools to handle search engine robots\n\n\n\n### Default configuration\n\n```json\n{\n \"robot_txt_enabled\" : true,\n \"robot_txt_content\" : \"User-agent: *\\nDisallow: /\\n\",\n \"meta_enabled\" : true,\n \"meta_content\" : \"noindex,nofollow,noarchive\",\n \"header_enabled\" : true,\n \"header_content\" : \"noindex, nofollow, noarchive\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.RoutingRestrictions }\n\n## Routing Restrictions\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.RoutingRestrictions`\n\n### Description\n\nThis plugin applies routing restrictions (`method domain/path`) on the current request/route\n\n\n\n### Default configuration\n\n```json\n{\n \"allow_last\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"not_found\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.S3Backend }\n\n## S3 Static backend\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.S3Backend`\n\n### Description\n\nThis plugin is able to serve file content from an S3 bucket\n\n\n\n### Default configuration\n\n```json\n{\n \"bucket\" : \"\",\n \"endpoint\" : \"\",\n \"region\" : \"eu-west-1\",\n \"access\" : \"client\",\n \"secret\" : \"secret\",\n \"key\" : \"\",\n \"chunkSize\" : 8388608,\n \"v4auth\" : true,\n \"writeEvery\" : 60000,\n \"acl\" : \"private\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.SOAPAction }\n\n## SOAP action\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.SOAPAction`\n\n### Description\n\nThis plugin is able to call SOAP actions and expose them as rest endpoints\n\n\n\n### Default configuration\n\n```json\n{\n \"url\" 
: null,\n \"envelope\" : \"\",\n \"action\" : null,\n \"preserve_query\" : true,\n \"charset\" : null,\n \"jq_request_filter\" : null,\n \"jq_response_filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.SendOtoroshiHeadersBack }\n\n## Send otoroshi headers back\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.SendOtoroshiHeadersBack`\n\n### Description\n\nThis plugin adds response header containing useful informations about the current call\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.SnowMonkeyChaos }\n\n## Snow Monkey Chaos\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.SnowMonkeyChaos`\n\n### Description\n\nThis plugin introduce some chaos into you life\n\n\n\n### Default configuration\n\n```json\n{\n \"large_request_fault\" : null,\n \"large_response_fault\" : null,\n \"latency_injection_fault\" : null,\n \"bad_responses_fault\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.StaticBackend }\n\n## Static backend\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.StaticBackend`\n\n### Description\n\nThis plugin is able to serve a static folder with file content\n\n\n\n### Default configuration\n\n```json\n{\n \"root_path\" : \"/tmp\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.StaticResponse }\n\n## Static Response\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.StaticResponse`\n\n### Description\n\nThis plugin returns static responses\n\n\n\n### Default configuration\n\n```json\n{\n \"status\" : 200,\n \"headers\" : { },\n \"body\" : \"\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl 
#otoroshi.next.plugins.TailscaleSelectTargetByName }\n\n## Tailscale select target by name\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.TailscaleSelectTargetByName`\n\n### Description\n\nThis plugin selects a machine instance on Tailscale network based on its name\n\n\n\n### Default configuration\n\n```json\n{\n \"machine_name\" : \"my-machine\",\n \"use_ip_address\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.TcpTunnel }\n\n## TCP Tunnel\n\n### Defined on steps\n\n - `HandlesTunnel`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.TcpTunnel`\n\n### Description\n\nThis plugin creates TCP tunnels through otoroshi\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.UdpTunnel }\n\n## UDP Tunnel\n\n### Defined on steps\n\n - `HandlesTunnel`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.UdpTunnel`\n\n### Description\n\nThis plugin creates UDP tunnels through otoroshi\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.W3CTracing }\n\n## W3C Trace Context\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.W3CTracing`\n\n### Description\n\nThis plugin propagates W3C Trace Context spans and can export it to Jaeger or Zipkin\n\n\n\n### Default configuration\n\n```json\n{\n \"kind\" : \"noop\",\n \"endpoint\" : \"http://localhost:3333/spans\",\n \"timeout\" : 30000,\n \"baggage\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmAccessValidator }\n\n## Wasm Access control\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmAccessValidator`\n\n### Description\n\nDelegate route access to a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n 
\"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmBackend }\n\n## Wasm Backend\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmBackend`\n\n### Description\n\nThis plugin can be used to use a wasm plugin as backend\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmOPA }\n\n## Open Policy Agent (OPA)\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmOPA`\n\n### 
Description\n\nRego policies as WASM modules\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : true,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmPreRoute }\n\n## Wasm pre-route\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmPreRoute`\n\n### Description\n\nThis plugin can be used to use a wasm plugin in the pre-route phase\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl 
#otoroshi.next.plugins.WasmRequestTransformer }\n\n## Wasm Request Transformer\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmRequestTransformer`\n\n### Description\n\nTransform the content of the request with a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmResponseTransformer }\n\n## Wasm Response Transformer\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmResponseTransformer`\n\n### Description\n\nTransform the content of a response with a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : 
{\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmRouteMatcher }\n\n## Wasm Route Matcher\n\n### Defined on steps\n\n - `MatchRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmRouteMatcher`\n\n### Description\n\nThis plugin can be used to use a wasm plugin as route matcher\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmRouter }\n\n## Wasm Router\n\n### Defined on steps\n\n - `Router`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmRouter`\n\n### Description\n\nCan decide for routing with a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n 
\"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmSink }\n\n## Wasm Sink\n\n### Defined on steps\n\n - `Sink`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmSink`\n\n### Description\n\nHandle unmatched requests with a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.XForwardedHeaders }\n\n## X-Forwarded-* headers\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.XForwardedHeaders`\n\n### Description\n\nThis plugin adds all the X-Forwarder-* headers to the request for the backend target\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.XmlToJsonRequest }\n\n## request body xml-to-json\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin 
reference\n\n`cp:otoroshi.next.plugins.XmlToJsonRequest`\n\n### Description\n\nThis plugin transforms the incoming request body from xml to json and may apply a jq transformation\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.XmlToJsonResponse }\n\n## response body xml-to-json\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.XmlToJsonResponse`\n\n### Description\n\nThis plugin transforms the response body from xml to json and may apply a jq transformation\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.tunnel.TunnelPlugin }\n\n## Remote tunnel calls\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.tunnel.TunnelPlugin`\n\n### Description\n\nThis plugin can contact remote services using tunnels\n\n\n\n### Default configuration\n\n```json\n{\n \"tunnel_id\" : \"default\"\n}\n```\n\n\n\n\n\n@@@\n\n\n\n\n"},{"name":"create-plugins.md","id":"/plugins/create-plugins.md","url":"/plugins/create-plugins.html","title":"Create plugins","content":"# Create plugins\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated\n@@@\n\nWhen everything has failed and you absolutely need a feature in Otoroshi to make your use case work, there is a solution. Plugins are the Otoroshi feature that lets you code how Otoroshi should behave when receiving, validating and routing an http request. 
With request plugins, you can change request / response headers and request / response bodies the way you want, provide your own apikey, etc.\n\n## Plugin types\n\nThere are many plugin types, explained @ref:[here](./plugins.md) \n\n## Code and signatures\n\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/requestsink.scala#L14-L19\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/routing.scala#L75-L78\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/accessvalidator.scala#L65-L85\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/script.scala#L269-L540\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/eventlistener.scala#L27-L48\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/job.scala#L69-L164\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/job.scala#L108-L110\n\n\nFor more information about the APIs you can use\n\n* https://www.playframework.com/documentation/2.8.x/api/scala/index.html#package\n* https://www.playframework.com/documentation/2.8.x/api/scala/index.html#play.api.mvc.Results\n* https://github.com/MAIF/otoroshi\n* https://doc.akka.io/docs/akka/2.5/stream/index.html\n* https://doc.akka.io/api/akka/current/akka/stream/index.html\n* https://doc.akka.io/api/akka/current/akka/stream/scaladsl/Source.html\n\n## Plugin examples\n\n@ref:[A lot of plugins](./built-in-plugins.md) come with otoroshi, you can find them on [github](https://github.com/MAIF/otoroshi/tree/master/otoroshi/app/plugins)\n\n## Writing a plugin from Otoroshi UI\n\nLog into Otoroshi and go to `Settings (cog icon) / Plugins`. 
Here you can create multiple request transformer scripts and associate them with service descriptors later.\n\n@@@ div { .centered-img }\n\n@@@\n\nWhen you write, for instance, a transformer in the Otoroshi UI, do the following\n\n```scala\nimport akka.stream.Materializer\nimport env.Env\nimport models.{ApiKey, PrivateAppsUser, ServiceDescriptor}\nimport otoroshi.script._\nimport play.api.Logger\nimport play.api.mvc.{Result, Results}\nimport scala.util._\nimport scala.concurrent.{ExecutionContext, Future}\n\nclass MyTransformer extends RequestTransformer {\n\n val logger = Logger(\"my-transformer\")\n\n // implement the methods you want\n}\n\n// WARN: do not forget this line to provide a working instance of your transformer to Otoroshi\nnew MyTransformer()\n```\n\nYou can use the compile button to check if the script compiles, or code the transformer in your IDE (see next point).\n\nThen go to a service descriptor, scroll to the bottom of the page, and select your transformer in the list\n\n@@@ div { .centered-img }\n\n@@@\n\n## Providing a transformer from Java classpath\n\nYou can write your own transformer using your favorite IDE. Just create an SBT project with the following dependencies. It can be quite handy to manage the source code like any other piece of code, and it avoids the compilation time for the script at Otoroshi startup.\n\n```scala\nlazy val root = (project in file(\".\")).\n settings(\n inThisBuild(List(\n organization := \"com.example\",\n scalaVersion := \"2.12.7\",\n version := \"0.1.0-SNAPSHOT\"\n )),\n name := \"request-transformer-example\",\n libraryDependencies += \"fr.maif\" %% \"otoroshi\" % \"1.x.x\"\n )\n```\n\n@@@ warning\nyou MUST provide plugins that lie in the `otoroshi_plugins` package or in a sub-package of `otoroshi_plugins`. If you do not, your plugin will not be found by otoroshi. 
for example\n\n```scala\npackage otoroshi_plugins.com.my.company.myplugin\n```\n\nalso you don't have to instantiate your plugin at the end of the file like in the Otoroshi UI\n@@@\n\nWhen your code is ready, create a jar file \n\n```\nsbt package\n```\n\nand add the jar file to the Otoroshi classpath\n\n```sh\njava -cp \"/path/to/transformer.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nthen, in your service descriptor, you can choose your transformer in the list. If you want to do it from the API, you have to define the `transformerRef` using the `cp:` prefix, like \n\n```json\n{\n \"transformerRef\": \"cp:otoroshi_plugins.my.class.package.MyTransformer\"\n}\n```\n\n## Getting custom configuration from the Otoroshi config. file\n\nLet's say you need to provide custom configuration values for a script; you can then customize the Otoroshi configuration file\n\n```hocon\ninclude \"application.conf\"\n\notoroshi {\n scripts {\n enabled = true\n }\n}\n\nmy-transformer {\n env = \"prod\"\n maxRequestBodySize = 2048\n maxResponseBodySize = 2048\n}\n```\n\nthen start Otoroshi like\n\n```sh\njava -Dconfig.file=/path/to/custom.conf -jar otoroshi.jar\n```\n\nthen, in your transformer, you can write something like \n\n```scala\npackage otoroshi_plugins.com.example.otoroshi\n\nimport akka.stream.Materializer\nimport akka.stream.scaladsl._\nimport akka.util.ByteString\nimport env.Env\nimport models.{ApiKey, PrivateAppsUser, ServiceDescriptor}\nimport otoroshi.script._\nimport play.api.Logger\nimport play.api.mvc.{Result, Results}\nimport scala.util._\nimport scala.concurrent.{ExecutionContext, Future}\n\nclass BodyLengthLimiter extends RequestTransformer {\n\n override def transformResponseWithCtx(ctx: TransformerResponseContext)(implicit env: Env, ec: ExecutionContext, mat: Materializer): Source[ByteString, _] = {\n val max = env.configuration.getOptional[Long](\"my-transformer.maxResponseBodySize\").getOrElse(Long.MaxValue)\n 
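// limitWeighted caps the streamed body: the stream fails once the cumulated chunk size exceeds max\n 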
ctx.body.limitWeighted(max)(_.size)\n }\n\n override def transformRequestWithCtx(ctx: TransformerRequestContext)(implicit env: Env, ec: ExecutionContext, mat: Materializer): Source[ByteString, _] = {\n val max = env.configuration.getOptional[Long](\"my-transformer.maxRequestBodySize\").getOrElse(Long.MaxValue)\n ctx.body.limitWeighted(max)(_.size)\n }\n}\n```\n\n## Using a library that is not embedded in Otoroshi\n\nJust use the `classpath` option when running Otoroshi\n\n```sh\njava -cp \"/path/to/library.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nBe careful, as your library can conflict with other libraries used by Otoroshi and affect its stability\n\n## Enabling plugins\n\nPlugins can be enabled per service from the service settings page or globally from the danger zone in the plugins section.\n\n## Full example\n\nA full external plugin example can be found @link:[here](https://github.com/mathieuancelin/otoroshi-wasmer-plugin)\n"},{"name":"index.md","id":"/plugins/index.md","url":"/plugins/index.html","title":"Otoroshi plugins","content":"# Otoroshi plugins\n\nIn this section, you will find information about the Otoroshi plugins system\n\n* @ref:[Plugins system](./plugins.md)\n* @ref:[Create plugins](./create-plugins.md)\n* @ref:[Built in plugins](./built-in-plugins.md)\n* @ref:[Built in legacy plugins](./built-in-legacy-plugins.md)\n\n@@@ index\n\n* [Plugins system](./plugins.md)\n* [Create plugins](./create-plugins.md)\n* [Built in plugins](./built-in-plugins.md)\n* [Built in legacy plugins](./built-in-legacy-plugins.md)\n\n@@@"},{"name":"plugins.md","id":"/plugins/plugins.md","url":"/plugins/plugins.html","title":"Otoroshi plugins system","content":"# Otoroshi plugins system\n\nOtoroshi includes several extension points that allow you to create your own plugins and support stuff not supported by default\n\n## Available plugins\n\n@@@ div { .plugin .script }\n## Request Sink\n### Description\nUsed when no services are matched in 
Otoroshi. Can reply with any content.\n@@@\n\n@@@ div { .plugin .script }\n## Pre routing\n### Description\nUsed to extract values (like custom apikeys) and provide them to other plugins or the Otoroshi engine\n@@@\n\n@@@ div { .plugin .script }\n## Access Validator\n### Description\nUsed to validate if a request can pass or not based on whatever you want\n@@@\n\n@@@ div { .plugin .script }\n## Request Transformer\n### Description\nUsed to transform requests, responses and their bodies. Can be used to return arbitrary content\n@@@\n\n@@@ div { .plugin .script }\n## Event listener\n### Description\nAny plugin type can listen to Otoroshi internal events and react to them\n@@@\n\n@@@ div { .plugin .script }\n## Job\n### Description\nTasks that can run automatically once, be scheduled with a cron expression, or run every defined interval\n@@@\n\n@@@ div { .plugin .script }\n## Exporter\n### Description\nUsed to export events and Otoroshi alerts to an external source\n@@@\n\n@@@ div { .plugin .script }\n## Request handler\n### Description\nUsed to handle traffic without passing through Otoroshi routing and apply your own rules\n@@@\n\n@@@ div { .plugin .script }\n## Nano app\n### Description\nUsed to write an api directly in Otoroshi in the Scala language\n@@@"},{"name":"anonymous-reporting.md","id":"/topics/anonymous-reporting.md","url":"/topics/anonymous-reporting.html","title":"Anonymous reporting","content":"# Anonymous reporting\n\nThe best way of supporting us in Otoroshi development is to enable Anonymous reporting.\n\n## Details\n\nWhen this feature is active, Otoroshi periodically sends anonymous information about its configuration.\n\nThis information helps us to know how Otoroshi is used; it's a precious hint for prioritising our roadmap.\n\nBelow is an example of what is sent by Otoroshi. 
You can find more information about these fields either on @ref:[entities documentation](../entities/index.md) or [by reading the source code](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/jobs/reporting.scala#L174-L458).\n\n```json\n{\n \"@timestamp\": 1679514537259,\n \"timestamp_str\": \"2023-03-22T20:48:57.259+01:00\",\n \"@id\": \"4edb54171-8156-4947-b821-41d6c2bd1ba7\",\n \"otoroshi_cluster_id\": \"1148aee35-a487-47b0-b494-a2a44862c618\",\n \"otoroshi_version\": \"16.0.0-dev\",\n \"java_version\": {\n \"version\": \"11.0.16.1\",\n \"vendor\": \"Eclipse Adoptium\"\n },\n \"os\": {\n \"name\": \"Mac OS X\",\n \"version\": \"13.1\",\n \"arch\": \"x86_64\"\n },\n \"datastore\": \"file\",\n \"env\": \"dev\",\n \"features\": {\n \"snow_monkey\": false,\n \"clever_cloud\": false,\n \"kubernetes\": false,\n \"elastic_read\": true,\n \"lets_encrypt\": false,\n \"auto_certs\": false,\n \"wasm_manager\": true,\n \"backoffice_login\": false\n },\n \"stats\": {\n \"calls\": 3823,\n \"data_in\": 480406,\n \"data_out\": 4698261,\n \"rate\": 0,\n \"duration\": 35.89899494949495,\n \"overhead\": 24.696984848484846,\n \"data_in_rate\": 0,\n \"data_out_rate\": 0,\n \"concurrent_requests\": 0\n },\n \"engine\": {\n \"uses_new\": true,\n \"uses_new_full\": false\n },\n \"cluster\": {\n \"mode\": \"Leader\",\n \"all_nodes\": 1,\n \"alive_nodes\": 1,\n \"leaders_count\": 1,\n \"workers_count\": 0,\n \"nodes\": [\n {\n \"id\": \"node_15ac62ec3-3e0d-48c1-a8ea-15de97088e3c\",\n \"os\": {\n \"name\": \"Mac OS X\",\n \"version\": \"13.1\",\n \"arch\": \"x86_64\"\n },\n \"java_version\": {\n \"version\": \"11.0.16.1\",\n \"vendor\": \"Eclipse Adoptium\"\n },\n \"version\": \"16.0.0-dev\",\n \"type\": \"Leader\",\n \"cpu_usage\": 10.992902320605205,\n \"load_average\": 44.38720703125,\n \"heap_used\": 527,\n \"heap_size\": 2048,\n \"relay\": true,\n \"tunnels\": 0\n }\n ]\n },\n \"entities\": {\n \"scripts\": {\n \"count\": 0,\n \"by_kind\": {}\n },\n \"routes\": {\n 
\"count\": 24,\n \"plugins\": {\n \"min\": 1,\n \"max\": 26,\n \"avg\": 4\n }\n },\n \"router_routes\": {\n \"count\": 27,\n \"http_clients\": {\n \"ahc\": 25,\n \"akka\": 2,\n \"netty\": 0,\n \"akka_ws\": 0\n },\n \"plugins\": {\n \"min\": 1,\n \"max\": 26,\n \"avg\": 4\n }\n },\n \"route_compositions\": {\n \"count\": 1,\n \"plugins\": {\n \"min\": 1,\n \"max\": 1,\n \"avg\": 1\n },\n \"by_kind\": {\n \"global\": 1\n }\n },\n \"apikeys\": {\n \"count\": 6,\n \"by_kind\": {\n \"disabled\": 0,\n \"with_rotation\": 0,\n \"with_read_only\": 0,\n \"with_client_id_only\": 0,\n \"with_constrained_services\": 0,\n \"with_meta\": 2,\n \"with_tags\": 1\n },\n \"authorized_on\": {\n \"min\": 1,\n \"max\": 4,\n \"avg\": 2\n }\n },\n \"jwt_verifiers\": {\n \"count\": 6,\n \"by_strategy\": {\n \"pass_through\": 6\n },\n \"by_alg\": {\n \"HSAlgoSettings\": 6\n }\n },\n \"certificates\": {\n \"count\": 9,\n \"by_kind\": {\n \"auto_renew\": 6,\n \"exposed\": 6,\n \"client\": 1,\n \"keypair\": 1\n }\n },\n \"auth_modules\": {\n \"count\": 8,\n \"by_kind\": {\n \"basic\": 7,\n \"oauth2\": 1\n }\n },\n \"service_descriptors\": {\n \"count\": 3,\n \"plugins\": {\n \"old\": 0,\n \"new\": 0\n },\n \"by_kind\": {\n \"disabled\": 1,\n \"fault_injection\": 0,\n \"health_check\": 1,\n \"gzip\": 0,\n \"jwt\": 0,\n \"cors\": 1,\n \"auth\": 0,\n \"protocol\": 0,\n \"restrictions\": 0\n }\n },\n \"teams\": {\n \"count\": 2\n },\n \"tenants\": {\n \"count\": 2\n },\n \"service_groups\": {\n \"count\": 2\n },\n \"data_exporters\": {\n \"count\": 10,\n \"by_kind\": {\n \"elastic\": 5,\n \"file\": 2,\n \"metrics\": 1,\n \"console\": 1,\n \"s3\": 1\n }\n },\n \"otoroshi_admins\": {\n \"count\": 5,\n \"by_kind\": {\n \"simple\": 2,\n \"webauthn\": 3\n }\n },\n \"backoffice_sessions\": {\n \"count\": 1,\n \"by_kind\": {\n \"simple\": 1\n }\n },\n \"private_apps_sessions\": {\n \"count\": 0,\n \"by_kind\": {}\n },\n \"tcp_services\": {\n \"count\": 0\n }\n },\n \"plugins_usage\": {\n 
\"cp:otoroshi.next.plugins.AdditionalHeadersOut\": 2,\n \"cp:otoroshi.next.plugins.DisableHttp10\": 2,\n \"cp:otoroshi.next.plugins.OverrideHost\": 27,\n \"cp:otoroshi.next.plugins.TailscaleFetchCertificate\": 1,\n \"cp:otoroshi.next.plugins.OtoroshiInfos\": 6,\n \"cp:otoroshi.next.plugins.MissingHeadersOut\": 2,\n \"cp:otoroshi.next.plugins.Redirection\": 2,\n \"cp:otoroshi.next.plugins.OtoroshiChallenge\": 5,\n \"cp:otoroshi.next.plugins.BuildMode\": 2,\n \"cp:otoroshi.next.plugins.XForwardedHeaders\": 2,\n \"cp:otoroshi.next.plugins.NgLegacyAuthModuleCall\": 2,\n \"cp:otoroshi.next.plugins.Cors\": 4,\n \"cp:otoroshi.next.plugins.OtoroshiHeadersIn\": 2,\n \"cp:otoroshi.next.plugins.NgDefaultRequestBody\": 1,\n \"cp:otoroshi.next.plugins.NgHttpClientCache\": 1,\n \"cp:otoroshi.next.plugins.ReadOnlyCalls\": 2,\n \"cp:otoroshi.next.plugins.RemoveHeadersIn\": 2,\n \"cp:otoroshi.next.plugins.JwtVerificationOnly\": 1,\n \"cp:otoroshi.next.plugins.ApikeyCalls\": 3,\n \"cp:otoroshi.next.plugins.WasmAccessValidator\": 3,\n \"cp:otoroshi.next.plugins.WasmBackend\": 3,\n \"cp:otoroshi.next.plugins.IpAddressAllowedList\": 2,\n \"cp:otoroshi.next.plugins.AuthModule\": 4,\n \"cp:otoroshi.next.plugins.RemoveHeadersOut\": 2,\n \"cp:otoroshi.next.plugins.IpAddressBlockList\": 2,\n \"cp:otoroshi.next.proxy.ProxyEngine\": 1,\n \"cp:otoroshi.next.plugins.JwtVerification\": 3,\n \"cp:otoroshi.next.plugins.GzipResponseCompressor\": 2,\n \"cp:otoroshi.next.plugins.SendOtoroshiHeadersBack\": 3,\n \"cp:otoroshi.next.plugins.AdditionalHeadersIn\": 4,\n \"cp:otoroshi.next.plugins.SOAPAction\": 1,\n \"cp:otoroshi.next.plugins.NgLegacyApikeyCall\": 6,\n \"cp:otoroshi.next.plugins.ForceHttpsTraffic\": 2,\n \"cp:otoroshi.next.plugins.NgErrorRewriter\": 1,\n \"cp:otoroshi.next.plugins.MissingHeadersIn\": 2,\n \"cp:otoroshi.next.plugins.MaintenanceMode\": 3,\n \"cp:otoroshi.next.plugins.RoutingRestrictions\": 2,\n \"cp:otoroshi.next.plugins.HeadersValidation\": 2\n }\n}\n```\n\n## 
Toggling\n\nAnonymous reporting can be toggled at any time using:\n\n- the UI (Features > Danger zone > Send anonymous reports)\n- the `otoroshi.anonymous-reporting.enabled` configuration\n- the `OTOROSHI_ANONYMOUS_REPORTING_ENABLED` env variable\n"},{"name":"chaos-engineering.md","id":"/topics/chaos-engineering.md","url":"/topics/chaos-engineering.html","title":"Chaos engineering with the Snow Monkey","content":"# Chaos engineering with the Snow Monkey\n\nNihonzaru (the Snow Monkey) is the chaos engineering tool provided by Otoroshi. You can access it at `Settings (cog icon) / Snow Monkey`.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Chaos engineering\n\nOtoroshi offers some tools to introduce [chaos engineering](https://principlesofchaos.org/) in your everyday life. With chaos engineering, you will improve the resilience of your architecture by creating faults in production on running systems. With [Nihonzaru (the snow monkey)](https://en.wikipedia.org/wiki/Japanese_macaque), Otoroshi helps you create faults in http requests/responses handled by Otoroshi. \n\n@@@ div { .centered-img }\n\n@@@\n\n## Settings\n\n@@@ div { .centered-img }\n\n@@@\n\nThe snow monkey lets you define a few settings to work properly:\n\n* **Include user facing apps.**: you want to create faults in production, but maybe you don't want your users to enjoy some nice snow monkey generated error pages. This switch lets you choose whether user facing apps (UI apps) are included or not. Each service descriptor has a `User facing app switch` that will be used by the snow monkey.\n* **Dry run**: when dry run is enabled, outages will be registered and will generate events and alerts (in the otoroshi eventing system) but requests won't actually be impacted. It's a good way to prepare applications for the snow monkey's habits\n* **Outage strategy**: Either `AllServicesPerGroup` or `OneServicePerGroup`. 
It means that only one service per group or all services per group will have `n` outages (see next bullet point) during the snow monkey working period\n* **Outages per day**: during the snow monkey working period, each service per group or one service per group will have only `n` outages registered \n* **Working period**: the snow monkey only works during a working period. Here you can define when it starts and when it stops\n* **Outage duration**: here you can define the bounds for the random outage duration when an outage is created on a service\n* **Impacted groups**: here you can define a list of service groups impacted by the snow monkey. If none is specified, then all service groups will be impacted\n\n## Faults\n\nWith the snow monkey, you can generate four types of faults\n\n* **Large request fault**: Add trailing bytes at the end of the request body (if one)\n* **Large response fault**: Add trailing bytes at the end of the response body\n* **Latency injection fault**: Add random response latency between two bounds\n* **Bad response injection fault**: Create predefined responses with custom headers, body and status code\n\nEach fault lets you define a ratio for impacted requests. If you specify a ratio of `0.2`, then 20% of the requests for the impacted service will be impacted by this fault\n\n@@@ div { .centered-img }\n\n@@@\n\nThen you just have to start the snow monkey and enjoy the show ;)\n\n@@@ div { .centered-img }\n\n@@@\n\n## Current outages\n\nIn the last section of the snow monkey page, you can see current outages (per service), when they started, their duration, etc.\n\n@@@ div { .centered-img }\n\n@@@"},{"name":"dev-portal.md","id":"/topics/dev-portal.md","url":"/topics/dev-portal.html","title":"Developer portal with Daikoku","content":"# Developer portal with Daikoku\n\nWhile Otoroshi is the perfect tool to manage your webapps from a technical point of view, it lacked a business perspective. 
This is not the case anymore with Daikoku.\n\nWhile Otoroshi is a standalone product, Daikoku is a developer portal that stands in front of Otoroshi and provides business features.\n\nWhether you use Daikoku for your public APIs that you want to monetize, or for your private APIs to provide documentation, facilitation and self-service features, it will be the perfect portal for Otoroshi.\n\n@@@div { .plugin .platform }\n## Daikoku\n\nRun your first Daikoku with a simple jar or with one Docker command.\n\n\n
\nTry Daikoku \n
\n@link:[With jar](https://maif.github.io/daikoku/devmanual/getdaikoku/frombinaries.html)\n@link:[With Docker](https://maif.github.io/daikoku/devmanual/getdaikoku/fromdocker.html)\n@@@\n\n@@@div { .plugin .platform }\n## Contribute\n\nDaikoku is opensource, so all contributions are welcome.\n\n\n@link:[Show the repository](https://github.com/MAIF/daikoku)\n@@@\n\n@@@div { .plugin .platform }\n## Documentation\n\nDaikoku and its UI are fully documented.\n\n\n@link:[Read the documentation](https://maif.github.io/daikoku/devmanual/)\n@@@\n\n"},{"name":"engine.md","id":"/topics/engine.md","url":"/topics/engine.html","title":"Proxy engine","content":"# Proxy engine\n\nStarting from the `1.5.3` release, otoroshi offers a new plugin that implements the next generation of the proxy engine. \nThis engine has been designed based on our 5 years of experience building, maintaining and running the previous one.\nIt tries to fix all the drawbacks we may have encountered during those years and greatly improves performance, user experience, reporting and debugging capabilities. \n\nThe new engine is fully plugin oriented in order to spend CPU cycles only on useful stuff. You can enable this plugin only on some domain names so you can easily A/B test the new engine. The new proxy engine is designed to be more reactive and more efficient generally. It is also designed to be very efficient on path routing, which wasn't the old engine's strong suit.\n\nStarting from version `16.0.0`, this engine will be enabled by default on any new otoroshi cluster. 
In a future version, the engine will be enabled for any new or existing otoroshi cluster.\n\n## Enabling the new engine\n\nAll freshly started Otoroshi instances have the new proxy engine enabled by default. For the others, to enable the new proxy engine on an otoroshi instance, just add the plugin in the `global plugins` section of the danger zone, inject the default configuration, enable it and in `domains` add the values of the desired domains (let's say we want to use the new engine on `api.foo.bar`. It is possible to use `*.foo.bar` if that's what you want to do).\n\nThe next time a request hits the `api.foo.bar` domain, the new engine will handle it instead of the previous one.\n\n```json\n{\n \"NextGenProxyEngine\" : {\n \"enabled\" : true,\n \"debug_headers\" : false,\n \"reporting\": true,\n \"domains\" : [ \"api.foo.bar\" ],\n \"deny_domains\" : [ ]\n }\n}\n```\n\nif you need to enable global plugins with the new engine, you can add the following configuration in the `global plugins` configuration object \n\n```javascript\n{\n ...\n \"ng\": {\n \"slots\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.W3CTracing\",\n \"enabled\": true,\n \"include\": [],\n \"exclude\": [],\n \"config\": {\n \"baggage\": {\n \"foo\": \"bar\"\n }\n }\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.wrappers.RequestSinkWrapper\",\n \"enabled\": true,\n \"include\": [],\n \"exclude\": [],\n \"config\": {\n \"plugin\": \"cp:otoroshi.plugins.apikeys.ClientCredentialService\",\n \"ClientCredentialService\": {\n \"domain\": \"ccs-next-gen.oto.tools\",\n \"expiration\": 3600000,\n \"defaultKeyPair\": \"otoroshi-jwt-signing\",\n \"secure\": false\n }\n }\n }\n ]\n }\n ...\n}\n```\n\n## Entities\n\nThis plugin introduces new entities that will replace (one day maybe) service descriptors:\n\n - `routes`: a unique routing rule based on hostname, path, method and headers that will execute a bunch of plugins\n - `backends`: a list of targets to contact a backend\n\n## 
Entities sync\n\nA new behavior introduced with the new proxy engine is the entities sync job. To avoid unnecessary operations on the underlying datastore when routing requests, a new job has been set up in otoroshi that synchronizes the content of the datastore (at least a part of it) with an in-memory cache. Because of this, the propagation of changes between an admin api call and the actual result on routing can take longer than before. When a node creates, updates, or deletes an entity via the admin api, other nodes need to wait for the next poll to purge the old cached entity and start using the new one. You can change the interval between syncs with the configuration key `otoroshi.next.state-sync-interval` or the env. variable `OTOROSHI_NEXT_STATE_SYNC_INTERVAL`. The default value is `10000` and the unit is `milliseconds`.\n\n@@@ warning\nBecause of entities sync, the memory consumption of otoroshi will be significantly higher than in previous versions. You can use the `otoroshi.next.monitor-proxy-state-size=true` config (or the `OTOROSHI_NEXT_MONITOR_PROXY_STATE_SIZE` env. variable) to monitor the actual memory size of the entities cache. This will produce the `ng-proxy-state-size-monitoring` metric in standard otoroshi metrics\n@@@\n\n## Automatic conversion\n\nThe new engine uses new entities for its configuration, but in order to facilitate the transition between the old world and the new world, all the `service descriptors` of an otoroshi instance are automatically converted live into `routes` periodically. Any `service descriptor` should still work as expected through the new engine while enjoying all the perks.\n\n@@@ warning\nthe experimental nature of the engine can imply unexpected behaviors for converted service descriptors\n@@@\n\n## Routing\n\nthe new proxy engine introduces a new router that has enhanced capabilities and performance. 
The router can handle thousands of route declarations without compromising performance.\n\nThe new router allows routes to be matched on a combination of\n\n* hostname\n* path\n* header values\n * where values can be `exact_value`, or `Regex(value_regex)`, or `Wildcard(value_with_*)`\n* query param values\n * where values can be `exact_value`, or `Regex(value_regex)`, or `Wildcard(value_with_*)`\n\npath matching works \n\n* exactly\n * matches `/api/foo` with `/api/foo` and not with `/api/foo/bar`\n* starting with value (default behavior, like the previous engine)\n * matches `/api/foo` with `/api/foo` but also with `/api/foo/bar`\n\npath matching can also include wildcard paths and even path params\n\n* plain old path: `subdomain.domain.tld/api/users`\n* wildcard path: `subdomain.domain.tld/api/users/*/bills`\n* named path params: `subdomain.domain.tld/api/users/:id/bills`\n* named regex path params: `subdomain.domain.tld/api/users/$id<[0-9]+>/bills`\n\nhostname matching works on \n\n* exact values\n * `subdomain.domain.tld`\n* wildcard values like\n * `*.domain.tld`\n * `subdomain.*.tld`\n\nas path matching can now include named path params, it is possible to perform a full URL rewrite on the target path like \n\n* input: `subdomain.domain.tld/api/users/$id<[0-9]+>/bills`\n* output: `target.domain.tld/apis/v1/basic_users/${req.pathparams.id}/all_bills`\n\n## Plugins\n\nthe new route entity defines a plugin pipeline where any plugin can be enabled or not and can be active only on some paths. \nEach plugin slot in the pipeline holds the plugin id and the plugin configuration. 
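The "active only on some paths" behavior of a plugin slot can be sketched as a simple include/exclude filter. This is an illustrative approximation only, not Otoroshi's actual code: here `include`/`exclude` entries are treated as path prefixes, with a trailing `*` acting as a wildcard, and the real engine's matching rules may be richer.

```python
def plugin_applies(slot: dict, path: str) -> bool:
    """Decide whether a plugin slot applies to a request path.

    Sketch under stated assumptions: `include`/`exclude` hold path
    prefixes where a trailing '*' is a wildcard. An empty include
    list means "active everywhere".
    """
    def matches(patterns: list, p: str) -> bool:
        # a pattern matches when the path starts with it ('*' stripped)
        return any(p.startswith(pat.rstrip("*")) for pat in patterns)

    if not slot.get("enabled", False):
        return False  # disabled plugin slots never run
    if matches(slot.get("exclude", []), path):
        return False  # explicitly excluded paths
    include = slot.get("include", [])
    return not include or matches(include, path)
```

For instance, with a slot that excludes `/openapi.json` (like the `ApikeyCalls` slot shown in this documentation), `plugin_applies(slot, "/openapi.json")` is `False` while any other path passes.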
\n\nYou can also enable debugging only on a plugin instance instead of the whole route (see [the debugging section](#debugging))\n\n```javascript\n{ \n ...\n \"plugins\" : [ {\n \"enabled\" : true,\n \"debug\" : false,\n \"plugin\" : \"cp:otoroshi.next.plugins.OverrideHost\",\n \"include\" : [ ],\n \"exclude\" : [ ],\n \"config\" : { }\n }, {\n \"enabled\" : true,\n \"debug\" : false,\n \"plugin\" : \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"include\" : [ ],\n \"exclude\" : [ \"/openapi.json\" ],\n \"config\" : { }\n } ]\n}\n```\n\nyou can find the list of built-in plugins @ref:[here](../plugins/built-in-plugins.md)\n\n## Using legacy plugins\n\nif you need to use legacy otoroshi plugins with the new engine, you can use several wrappers in order to do so\n\n* `otoroshi.next.plugins.wrappers.PreRoutingWrapper`\n* `otoroshi.next.plugins.wrappers.AccessValidatorWrapper`\n* `otoroshi.next.plugins.wrappers.RequestSinkWrapper`\n* `otoroshi.next.plugins.wrappers.RequestTransformerWrapper`\n* `otoroshi.next.plugins.wrappers.CompositeWrapper`\n\nto use one, just declare a plugin slot with the right wrapper and, in the config, declare the `plugin` you want to use and its configuration like:\n\n```javascript\n{\n \"plugin\": \"cp:otoroshi.next.plugins.wrappers.PreRoutingWrapper\",\n \"enabled\": true,\n \"include\": [],\n \"exclude\": [],\n \"config\": {\n \"plugin\": \"cp:otoroshi.plugins.jwt.JwtUserExtractor\",\n \"JwtUserExtractor\": {\n \"verifier\" : \"$ref\",\n \"strict\" : true,\n \"namePath\" : \"name\",\n \"emailPath\": \"email\",\n \"metaPath\" : null\n }\n }\n}\n```\n\n## Reporting\n\nby default, any request hitting the new engine will generate an execution report with information about how the request pipeline steps were performed. It is possible to export those reports as `RequestFlowReport` events using a classical data exporter. 
By default, exporting for reports is not enabled, you must enable the `export_reporting` flag on a `route` or `service`.\n\n```javascript\n{\n \"@id\": \"8efac472-07bc-4a80-8d27-4236309d7d01\",\n \"@timestamp\": \"2022-02-15T09:51:25.402+01:00\",\n \"@type\": \"RequestFlowReport\",\n \"@product\": \"otoroshi\",\n \"@serviceId\": \"service_548f13bb-a809-4b1d-9008-fae3b1851092\",\n \"@service\": \"demo-service\",\n \"@env\": \"prod\",\n \"route\": {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"service_dev_d54f11d0-18e2-4da4-9316-cf47733fd29a\",\n \"name\" : \"hey\",\n \"description\" : \"hey\",\n \"tags\" : [ \"env:prod\" ],\n \"metadata\" : { },\n \"enabled\" : true,\n \"debug_flow\" : true,\n \"export_reporting\" : false,\n \"groups\" : [ \"default\" ],\n \"frontend\" : {\n \"domains\" : [ \"hey-next-gen.oto.tools/\", \"hey.oto.tools/\" ],\n \"strip_path\" : true,\n \"exact\" : false,\n \"headers\" : { },\n \"methods\" : [ ]\n },\n \"backend\" : {\n \"targets\" : [ {\n \"id\" : \"127.0.0.1:8081\",\n \"hostname\" : \"127.0.0.1\",\n \"port\" : 8081,\n \"tls\" : false,\n \"weight\" : 1,\n \"protocol\" : \"HTTP/1.1\",\n \"ip_address\" : null,\n \"tls_config\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n } ],\n \"target_refs\" : [ ],\n \"root\" : \"/\",\n \"rewrite\" : false,\n \"load_balancing\" : {\n \"type\" : \"RoundRobin\"\n },\n \"client\" : {\n \"useCircuitBreaker\" : true,\n \"retries\" : 1,\n \"maxErrors\" : 20,\n \"retryInitialDelay\" : 50,\n \"backoffFactor\" : 2,\n \"callTimeout\" : 30000,\n \"callAndStreamTimeout\" : 120000,\n \"connectionTimeout\" : 10000,\n \"idleTimeout\" : 60000,\n \"globalTimeout\" : 30000,\n \"sampleInterval\" : 2000,\n \"proxy\" : { },\n \"customTimeouts\" : [ ],\n \"cacheConnectionSettings\" : {\n \"enabled\" : false,\n \"queueSize\" : 2048\n }\n }\n },\n \"backend_ref\" : null,\n \"plugins\" : [ ]\n },\n 
\"report\": {\n \"id\" : \"ab73707b3-946b-4853-92d4-4c38bbaac6d6\",\n \"creation\" : \"2022-02-15T09:51:25.402+01:00\",\n \"termination\" : \"2022-02-15T09:51:25.408+01:00\",\n \"duration\" : 5,\n \"duration_ns\" : 5905522,\n \"overhead\" : 4,\n \"overhead_ns\" : 4223215,\n \"overhead_in\" : 2,\n \"overhead_in_ns\" : 2687750,\n \"overhead_out\" : 1,\n \"overhead_out_ns\" : 1535465,\n \"state\" : \"Successful\",\n \"steps\" : [ {\n \"task\" : \"start-handling\",\n \"start\" : 1644915085402,\n \"start_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"stop\" : 1644915085402,\n \"stop_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 177430,\n \"ctx\" : null\n }, {\n \"task\" : \"check-concurrent-requests\",\n \"start\" : 1644915085402,\n \"start_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"stop\" : 1644915085402,\n \"stop_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 145242,\n \"ctx\" : null\n }, {\n \"task\" : \"find-route\",\n \"start\" : 1644915085402,\n \"start_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 497119,\n \"ctx\" : {\n \"found_route\" : {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"service_dev_d54f11d0-18e2-4da4-9316-cf47733fd29a\",\n \"name\" : \"hey\",\n \"description\" : \"hey\",\n \"tags\" : [ \"env:prod\" ],\n \"metadata\" : { },\n \"enabled\" : true,\n \"debug_flow\" : true,\n \"export_reporting\" : false,\n \"groups\" : [ \"default\" ],\n \"frontend\" : {\n \"domains\" : [ \"hey-next-gen.oto.tools/\", \"hey.oto.tools/\" ],\n \"strip_path\" : true,\n \"exact\" : false,\n \"headers\" : { },\n \"methods\" : [ ]\n },\n \"backend\" : {\n \"targets\" : [ {\n \"id\" : \"127.0.0.1:8081\",\n \"hostname\" : \"127.0.0.1\",\n \"port\" : 8081,\n \"tls\" : false,\n \"weight\" : 1,\n \"protocol\" : \"HTTP/1.1\",\n \"ip_address\" : 
null,\n \"tls_config\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n } ],\n \"target_refs\" : [ ],\n \"root\" : \"/\",\n \"rewrite\" : false,\n \"load_balancing\" : {\n \"type\" : \"RoundRobin\"\n },\n \"client\" : {\n \"useCircuitBreaker\" : true,\n \"retries\" : 1,\n \"maxErrors\" : 20,\n \"retryInitialDelay\" : 50,\n \"backoffFactor\" : 2,\n \"callTimeout\" : 30000,\n \"callAndStreamTimeout\" : 120000,\n \"connectionTimeout\" : 10000,\n \"idleTimeout\" : 60000,\n \"globalTimeout\" : 30000,\n \"sampleInterval\" : 2000,\n \"proxy\" : { },\n \"customTimeouts\" : [ ],\n \"cacheConnectionSettings\" : {\n \"enabled\" : false,\n \"queueSize\" : 2048\n }\n }\n },\n \"backend_ref\" : null,\n \"plugins\" : [ ]\n },\n \"matched_path\" : \"\",\n \"exact\" : true,\n \"params\" : { },\n \"matched_routes\" : [ \"service_dev_d54f11d0-18e2-4da4-9316-cf47733fd29a\" ]\n }\n }, {\n \"task\" : \"compute-plugins\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 105151,\n \"ctx\" : {\n \"disabled_plugins\" : [ ],\n \"filtered_plugins\" : [ ]\n }\n }, {\n \"task\" : \"tenant-check\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 26097,\n \"ctx\" : null\n }, {\n \"task\" : \"check-global-maintenance\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 14132,\n \"ctx\" : null\n }, {\n \"task\" : \"call-before-request-callbacks\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : 
\"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 56671,\n \"ctx\" : null\n }, {\n \"task\" : \"extract-tracking-id\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 5207,\n \"ctx\" : null\n }, {\n \"task\" : \"call-pre-route-plugins\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 39786,\n \"ctx\" : null\n }, {\n \"task\" : \"call-access-validator-plugins\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 25311,\n \"ctx\" : null\n }, {\n \"task\" : \"enforce-global-limits\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085404,\n \"stop_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 296617,\n \"ctx\" : {\n \"remaining_quotas\" : {\n \"authorizedCallsPerSec\" : 10000000,\n \"currentCallsPerSec\" : 10000000,\n \"remainingCallsPerSec\" : 10000000,\n \"authorizedCallsPerDay\" : 10000000,\n \"currentCallsPerDay\" : 10000000,\n \"remainingCallsPerDay\" : 10000000,\n \"authorizedCallsPerMonth\" : 10000000,\n \"currentCallsPerMonth\" : 10000000,\n \"remainingCallsPerMonth\" : 10000000\n }\n }\n }, {\n \"task\" : \"choose-backend\",\n \"start\" : 1644915085404,\n \"start_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"stop\" : 1644915085404,\n \"stop_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 368899,\n \"ctx\" : {\n \"backend\" : {\n \"id\" : \"127.0.0.1:8081\",\n \"hostname\" : \"127.0.0.1\",\n \"port\" : 8081,\n \"tls\" : false,\n \"weight\" : 1,\n \"protocol\" 
: \"HTTP/1.1\",\n \"ip_address\" : null,\n \"tls_config\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n }\n }\n }, {\n \"task\" : \"transform-request\",\n \"start\" : 1644915085404,\n \"start_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"stop\" : 1644915085404,\n \"stop_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 506363,\n \"ctx\" : null\n }, {\n \"task\" : \"call-backend\",\n \"start\" : 1644915085404,\n \"start_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"stop\" : 1644915085407,\n \"stop_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"duration\" : 2,\n \"duration_ns\" : 2163470,\n \"ctx\" : null\n }, {\n \"task\" : \"transform-response\",\n \"start\" : 1644915085407,\n \"start_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"stop\" : 1644915085407,\n \"stop_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 279887,\n \"ctx\" : null\n }, {\n \"task\" : \"stream-response\",\n \"start\" : 1644915085407,\n \"start_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"stop\" : 1644915085407,\n \"stop_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 382952,\n \"ctx\" : null\n }, {\n \"task\" : \"trigger-analytics\",\n \"start\" : 1644915085407,\n \"start_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"stop\" : 1644915085408,\n \"stop_fmt\" : \"2022-02-15T09:51:25.408+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 812036,\n \"ctx\" : null\n }, {\n \"task\" : \"request-success\",\n \"start\" : 1644915085408,\n \"start_fmt\" : \"2022-02-15T09:51:25.408+01:00\",\n \"stop\" : 1644915085408,\n \"stop_fmt\" : \"2022-02-15T09:51:25.408+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 0,\n \"ctx\" : null\n } ]\n }\n}\n```\n\n## Debugging\n\nwith the new reporting capabilities, the new engine also have debugging capabilities built in. 
If you enable the `debug_flow` flag on a route (or service), the resulting `RequestFlowReport` will be enriched with contextual information between each plugin of the route plugin pipeline\n\n@@@ note\nyou can also use the `Try it` feature of the new route designer UI to get debug reports automatically for a specific call\n@@@\n\n## HTTP traffic capture\n\nusing the `capture` flag, a `TrafficCaptureEvent` is generated for each http request/response. This event will contain the request and response bodies. Those events can be exported using @ref:[data exporters](../entities/data-exporters.md) as usual. You can also use the @ref:[GoReplay file exporter](../entities/data-exporters.md#goreplay-file) that is specifically designed to ingest those events and create [GoReplay](https://goreplay.org/) files (`.gor`)\n\n@@@ warning\nthis feature can have a real impact on CPU and RAM consumption\n@@@\n\n```json\n{\n \"@id\": \"d5998b0c4-cb08-43e6-9921-27472c7a56e0\",\n \"@timestamp\": 1651828801115,\n \"@type\": \"TrafficCaptureEvent\",\n \"@product\": \"otoroshi\",\n \"@serviceId\": \"route_2b2670879-131c-423d-b755-470c7b1c74b1\",\n \"@service\": \"test-server\",\n \"@env\": \"prod\",\n \"route\": {\n \"id\": \"route_2b2670879-131c-423d-b755-470c7b1c74b1\",\n \"name\": \"test-server\"\n },\n \"request\": {\n \"id\": \"152250645825034725600000\",\n \"int_id\": 115,\n \"method\": \"POST\",\n \"headers\": {\n \"Host\": \"test-server-next-gen.oto.tools:9999\",\n \"Accept\": \"*/*\",\n \"Cookie\": \"fifoo=fibar\",\n \"User-Agent\": \"curl/7.64.1\",\n \"Content-Type\": \"application/json\",\n \"Content-Length\": \"13\",\n \"Remote-Address\": \"127.0.0.1:57660\",\n \"Timeout-Access\": \"\",\n \"Raw-Request-URI\": \"/\",\n \"Tls-Session-Info\": \"Session(1651828041285|SSL_NULL_WITH_NULL_NULL)\"\n },\n \"cookies\": [\n {\n \"name\": \"fifoo\",\n \"value\": \"fibar\",\n \"path\": \"/\",\n \"domain\": null,\n \"http_only\": true,\n \"max_age\": null,\n \"secure\": false,\n \"same_site\": 
null\n }\n ],\n \"tls\": false,\n \"uri\": \"/\",\n \"path\": \"/\",\n \"version\": \"HTTP/1.1\",\n \"has_body\": true,\n \"remote\": \"127.0.0.1\",\n \"client_cert_chain\": null,\n \"body\": \"{\\\"foo\\\":\\\"bar\\\"}\"\n },\n \"backend_request\": {\n \"url\": \"http://localhost:3000/\",\n \"method\": \"POST\",\n \"headers\": {\n \"Host\": \"localhost\",\n \"Accept\": \"*/*\",\n \"Cookie\": \"fifoo=fibar\",\n \"User-Agent\": \"curl/7.64.1\",\n \"Content-Type\": \"application/json\",\n \"Content-Length\": \"13\"\n },\n \"version\": \"HTTP/1.1\",\n \"client_cert_chain\": null,\n \"cookies\": [\n {\n \"name\": \"fifoo\",\n \"value\": \"fibar\",\n \"domain\": null,\n \"path\": \"/\",\n \"maxAge\": null,\n \"secure\": false,\n \"httpOnly\": true\n }\n ],\n \"id\": \"152260631569472064900000\",\n \"int_id\": 33,\n \"body\": \"{\\\"foo\\\":\\\"bar\\\"}\"\n },\n \"backend_response\": {\n \"status\": 200,\n \"headers\": {\n \"Date\": \"Fri, 06 May 2022 09:20:01 GMT\",\n \"Connection\": \"keep-alive\",\n \"Set-Cookie\": \"foo=bar\",\n \"Content-Type\": \"application/json\",\n \"Transfer-Encoding\": \"chunked\"\n },\n \"cookies\": [\n {\n \"name\": \"foo\",\n \"value\": \"bar\",\n \"domain\": null,\n \"path\": null,\n \"maxAge\": null,\n \"secure\": false,\n \"httpOnly\": false\n }\n ],\n \"id\": \"152260631569472064900000\",\n \"status_txt\": \"OK\",\n \"http_version\": \"HTTP/1.1\",\n \"body\": \"{\\\"headers\\\":{\\\"host\\\":\\\"localhost\\\",\\\"accept\\\":\\\"*/*\\\",\\\"user-agent\\\":\\\"curl/7.64.1\\\",\\\"content-type\\\":\\\"application/json\\\",\\\"cookie\\\":\\\"fifoo=fibar\\\",\\\"content-length\\\":\\\"13\\\"},\\\"method\\\":\\\"POST\\\",\\\"path\\\":\\\"/\\\",\\\"body\\\":\\\"{\\\\\\\"foo\\\\\\\":\\\\\\\"bar\\\\\\\"}\\\"}\"\n },\n \"response\": {\n \"id\": \"152250645825034725600000\",\n \"status\": 200,\n \"headers\": {\n \"Date\": \"Fri, 06 May 2022 09:20:01 GMT\",\n \"Connection\": \"keep-alive\",\n \"Set-Cookie\": \"foo=bar\",\n \"Content-Type\": 
\"application/json\",\n \"Transfer-Encoding\": \"chunked\"\n },\n \"cookies\": [\n {\n \"name\": \"foo\",\n \"value\": \"bar\",\n \"domain\": null,\n \"path\": null,\n \"maxAge\": null,\n \"secure\": false,\n \"httpOnly\": false\n }\n ],\n \"status_txt\": \"OK\",\n \"http_version\": \"HTTP/1.1\",\n \"body\": \"{\\\"headers\\\":{\\\"host\\\":\\\"localhost\\\",\\\"accept\\\":\\\"*/*\\\",\\\"user-agent\\\":\\\"curl/7.64.1\\\",\\\"content-type\\\":\\\"application/json\\\",\\\"cookie\\\":\\\"fifoo=fibar\\\",\\\"content-length\\\":\\\"13\\\"},\\\"method\\\":\\\"POST\\\",\\\"path\\\":\\\"/\\\",\\\"body\\\":\\\"{\\\\\\\"foo\\\\\\\":\\\\\\\"bar\\\\\\\"}\\\"}\"\n },\n \"user-agent-details\": null,\n \"origin-details\": null,\n \"instance-number\": 0,\n \"instance-name\": \"dev\",\n \"instance-zone\": \"local\",\n \"instance-region\": \"local\",\n \"instance-dc\": \"local\",\n \"instance-provider\": \"local\",\n \"instance-rack\": \"local\",\n \"cluster-mode\": \"Leader\",\n \"cluster-name\": \"otoroshi-leader-9hnv5HUXpbCZD7Ee\"\n}\n```\n\n## openapi import\n\nas the new router offers possibility to match exactly on a single path and a single method, and with the help of the `service` entity, it is now pretty easy to import openapi document as `route-compositions` entities. To do that, a new api has been made available to perform the translation. Be aware that this api **DOES NOT** save the entity and just return the result of the translation. 
\n\n```sh\ncurl -X POST \\\n -H 'Content-Type: application/json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n 'http://otoroshi-api.oto.tools:8080/api/route-compositions/_openapi' \\\n -d '{\"domain\":\"oto-api-proxy.oto.tools\",\"openapi\":\"https://raw.githubusercontent.com/MAIF/otoroshi/master/otoroshi/public/openapi.json\"}'\n```\n\n@@@ div { .centered-img }\n\n@@@\n\n"},{"name":"events-and-analytics.md","id":"/topics/events-and-analytics.md","url":"/topics/events-and-analytics.html","title":"Events and analytics","content":"# Events and analytics\n\nOtoroshi is a fully traced solution: calls to services, access to the UI, creation of resources, etc.\n\n@@@ warning\nYou have to use [Elastic](https://www.elastic.co) to enable analytics features in Otoroshi\n@@@\n\n## Events\n\n* Analytics event\n* Gateway event\n* TCP event\n* Healthcheck event\n\n## Event log\n\nOtoroshi can read its own exported events from an Elasticsearch instance, set up in the danger zone. These events are available from the UI, at the following route: `https://xxxxx/bo/dashboard/events`.\n\nThe `Global events` page displays all events of the **GatewayEvent** type. This page is a way to quickly read an interval of events and can be used in addition to a Kibana instance.\n\nFor each event, a list of information is displayed, along with an additional `content` button to view the full content of the event in JSON format. 
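As a rough illustration of what the `Global events` page does, an exported event stream can be filtered by `@type` and by a timestamp interval. This is a hypothetical sketch only: the field names (`@type`, `@timestamp` in epoch milliseconds) follow the event samples shown in this documentation, not an official query API.

```python
def events_in_interval(events, start_ms, end_ms, kind="GatewayEvent"):
    """Keep events of a given `@type` whose `@timestamp` (epoch millis,
    as in Otoroshi event payloads) falls inside [start_ms, end_ms]."""
    return [
        e for e in events
        if e.get("@type") == kind
        and start_ms <= e.get("@timestamp", 0) <= end_ms
    ]
```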
\n\n## Alerts \n\n* `MaxConcurrentRequestReachedAlert`: raised when the number of handled requests is greater than the limit of concurrent requests indicated in the global configuration of Otoroshi\n* `CircuitBreakerOpenedAlert`: raised when the circuit breaker passes from closed to open\n* `CircuitBreakerClosedAlert`: raised when the circuit breaker passes from open to closed\n* `SessionDiscardedAlert`: sent when an admin discarded an admin session\n* `SessionsDiscardedAlert`: sent when an admin discarded all admin sessions\n* `PanicModeAlert`: sent when panic mode is enabled\n* `OtoroshiExportAlert`: sent when the otoroshi global configuration is exported\n* `U2FAdminDeletedAlert`: sent when an admin has deleted another admin user\n* `BlackListedBackOfficeUserAlert`: sent when a blacklisted user has tried to access the UI\n* `AdminLoggedInAlert`: sent when an admin user has logged in to the UI\n* `AdminFirstLogin`: sent when an admin user has successfully logged in to the UI for the first time\n* `AdminLoggedOutAlert`: sent when an admin user has logged out from Otoroshi\n* `GlobalConfigModification`: sent when an admin user has changed the global configuration of Otoroshi\n* `RevokedApiKeyUsageAlert`: sent when an admin user has revoked an apikey\n* `ServiceGroupCreatedAlert`: sent when an admin user has created a service group\n* `ServiceGroupUpdatedAlert`: sent when an admin user has updated a service group\n* `ServiceGroupDeletedAlert`: sent when an admin user has deleted a service group\n* `ServiceCreatedAlert`: sent when an admin user has created a tcp service\n* `ServiceUpdatedAlert`: sent when an admin user has updated a tcp service\n* `ServiceDeletedAlert`: sent when an admin user has deleted a tcp service\n* `ApiKeyCreatedAlert`: sent when an admin user has created a new apikey\n* `ApiKeyUpdatedAlert`: sent when an admin user has updated an apikey\n* `ApiKeyDeletedAlert`: sent when an admin user has deleted an apikey\n\n## Audit\n\nWith 
Otoroshi, any admin action and any suspicious/alert action is recorded. These records are stored in Otoroshi’s datastore (only the last n records, defined by the `otoroshi.events.maxSize` config key). All the records can be sent through the analytics mechanism (WebHook, Kafka, Elastic) for external and/or further usage. We recommend sending those records away for security reasons.\n\nOtoroshi keeps the following list of information for each executed action:\n\n* `Date`: moment of the action\n* `User`: name of the owner\n* `From`: IP of the concerned user\n* `Action`: action performed by the person. The possible actions are:\n\n * `ACCESS_APIKEY`: User accessed an apikey\n * `ACCESS_ALL_APIKEYS`: User accessed all apikeys\n * `CREATE_APIKEY`: User created an apikey\n * `UPDATE_APIKEY`: User updated an apikey\n * `DELETE_APIKEY`: User deleted an apikey\n * `ACCESS_AUTH_MODULE`: User accessed an Auth. module\n * `ACCESS_ALL_AUTH_MODULES`: User accessed all Auth. modules\n * `CREATE_AUTH_MODULE`: User created an Auth. module\n * `UPDATE_AUTH_MODULE`: User updated an Auth. module\n * `DELETE_AUTH_MODULE`: User deleted an Auth. module\n * `ACCESS_CERTIFICATE`: User accessed a certificate\n * `ACCESS_ALL_CERTIFICATES`: User accessed all certificates\n * `CREATE_CERTIFICATE`: User created a certificate\n * `UPDATE_CERTIFICATE`: User updated a certificate\n * `DELETE_CERTIFICATE`: User deleted a certificate\n * `ACCESS_CLIENT_CERT_VALIDATOR`: User accessed a client cert. validator\n * `ACCESS_ALL_CLIENT_CERT_VALIDATORS`: User accessed all client cert. validators\n * `CREATE_CLIENT_CERT_VALIDATOR`: User created a client cert. validator\n * `UPDATE_CLIENT_CERT_VALIDATOR`: User updated a client cert. validator\n * `DELETE_CLIENT_CERT_VALIDATOR`: User deleted a client cert. 
validator\n * `ACCESS_DATA_EXPORTER_CONFIG`: User accessed a data exporter config\n * `ACCESS_ALL_DATA_EXPORTER_CONFIG`: User accessed all data exporter config\n * `CREATE_DATA_EXPORTER_CONFIG`: User created a data exporter config\n * `UPDATE_DATA_EXPORTER_CONFIG`: User updated a data exporter config\n * `DELETE_DATA_EXPORTER_CONFIG`: User deleted a data exporter config\n * `ACCESS_GLOBAL_JWT_VERIFIER`: User accessed a global jwt verifier\n * `ACCESS_ALL_GLOBAL_JWT_VERIFIERS`: User accessed all global jwt verifiers\n * `CREATE_GLOBAL_JWT_VERIFIER`: User created a global jwt verifier\n * `UPDATE_GLOBAL_JWT_VERIFIER`: User updated a global jwt verifier\n * `DELETE_GLOBAL_JWT_VERIFIER`: User deleted a global jwt verifier\n * `ACCESS_SCRIPT`: User accessed a script\n * `ACCESS_ALL_SCRIPTS`: User accessed all scripts\n * `CREATE_SCRIPT`: User created a script\n * `UPDATE_SCRIPT`: User updated a script\n * `DELETE_SCRIPT`: User deleted a Script\n * `ACCESS_SERVICES_GROUP`: User accessed a service group\n * `ACCESS_ALL_SERVICES_GROUPS`: User accessed all services groups\n * `CREATE_SERVICE_GROUP`: User created a service group\n * `UPDATE_SERVICE_GROUP`: User updated a service group\n * `DELETE_SERVICE_GROUP`: User deleted a service group\n * `ACCESS_SERVICES_FROM_SERVICES_GROUP`: User accessed all services from a services group\n * `ACCESS_TCP_SERVICE`: User accessed a tcp service\n * `ACCESS_ALL_TCP_SERVICES`: User accessed all tcp services\n * `CREATE_TCP_SERVICE`: User created a tcp service\n * `UPDATE_TCP_SERVICE`: User updated a tcp service\n * `DELETE_TCP_SERVICE`: User deleted a tcp service\n * `ACCESS_TEAM`: User accessed a Team\n * `ACCESS_ALL_TEAMS`: User accessed all teams\n * `CREATE_TEAM`: User created a team\n * `UPDATE_TEAM`: User updated a team\n * `DELETE_TEAM`: User deleted a team\n * `ACCESS_TENANT`: User accessed a Tenant\n * `ACCESS_ALL_TENANTS`: User accessed all tenants\n * `CREATE_TENANT`: User created a tenant\n * `UPDATE_TENANT`: User updated a 
tenant\n * `DELETE_TENANT`: User deleted a tenant\n * `SERVICESEARCH`: User searched for a service\n * `ACTIVATE_PANIC_MODE`: Admin activated panic mode\n\n\n* `Message`: explicit message about the action (example: the `SERVICESEARCH` action happened when a `user searched for a service`)\n* `Content`: all information in JSON format\n\n## Global metrics\n\nThe global metrics are displayed on the index page of the Otoroshi UI. Otoroshi provides information about:\n\n* the number of requests served\n* the amount of data received and sent\n* the number of concurrent requests\n* the number of requests per second\n* the current overhead\n\nMore metrics can be found on the **Global analytics** page (available at https://xxxxxx/bo/dashboard/stats).\n\n## Monitoring services\n\nOnce you have declared services, you can monitor them with Otoroshi. \n\nLet's start by setting up Otoroshi to push events to an elastic cluster via a data exporter. Then you can set up Otoroshi to read events from an elastic cluster. Go to `settings (cog icon) / Danger Zone` and expand the `Analytics: Elastic cluster (read)` section.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service healthcheck\n\nIf you have defined a health check URL in the service descriptor, you can access the health check page from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service live stats\n\nYou can also monitor live stats like the total of served requests, average response time, average overhead, etc. The live stats page can be accessed from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service analytics\n\nYou can also get some aggregated metrics. The analytics page can be accessed from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n## New proxy engine\n\n### Debug reporting\n\nwhen using the @ref:[new proxy engine](./engine.md), when a route or the global config. 
enables debug reporting using the `debug_flow` flag, events of type `RequestFlowReport` are generated\n\n### Traffic capture\n\nwhen using the @ref:[new proxy engine](./engine.md), when a route or the global config. enables traffic capture using the `capture` flag, events of type `TrafficCaptureEvent` are generated. It contains everything that composes the otoroshi input http request and output http response\n"},{"name":"expression-language.md","id":"/topics/expression-language.md","url":"/topics/expression-language.html","title":"Expression language","content":"# Expression language\n\n- [Documentation and examples](#documentation-and-examples)\n- [Test the expression language](#test-the-expression-language)\n\nThe expression language provides an important mechanism for accessing and manipulating Otoroshi data on different inputs. For example, with this mechanism, you can map a claim of an incoming token directly to a claim of a generated token (using @ref:[JWT verifiers](../entities/jwt-verifiers.md)). You can add information about the traversed service descriptor, such as the domain of the service or the name of the service. This information can be useful on the backend service.\n\n## Documentation and examples\n\n@@@div { #expressions }\n \n@@@\n\nIf an input contains a string starting with `${`, Otoroshi will try to evaluate the content. If the content doesn't match a known expression,\nthe 'bad-expr' value will be set.\n\n## Test the expression language\n\nYou can test that you get the same values as documented above by creating the following services. 
\n\n```sh\n# Let's start by downloading the latest Otoroshi.\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.2/otoroshi.jar'\n\n# Once downloaded, run Otoroshi.\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n\n# Create an authentication module to protect the following route.\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/auths \\\n-H \"Otoroshi-Client-Id: admin-api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\"type\":\"basic\",\"id\":\"auth_mod_in_memory_auth\",\"name\":\"in-memory-auth\",\"desc\":\"in-memory-auth\",\"users\":[{\"name\":\"User Otoroshi\",\"password\":\"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\"email\":\"user@foo.bar\",\"metadata\":{\"username\":\"roger\"},\"tags\":[\"foo\"],\"webauthn\":null,\"rights\":[{\"tenant\":\"*:r\",\"teams\":[\"*:r\"]}]}],\"sessionCookieValues\":{\"httpOnly\":true,\"secure\":false}}\nEOF\n\n\n# Create a proxy of the mirror.otoroshi.io on http://api.oto.tools:8080\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"id\": \"expression-language-api-service\",\n \"name\": \"expression-language\",\n \"enabled\": true,\n \"frontend\": {\n \"domains\": [\n \"api.oto.tools/\"\n ]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\"\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"pass_with_user\": true,\n \"wipe_backend_request\": true,\n \"update_quotas\": true\n },\n \"plugin_index\": {\n \"validate_access\": 1,\n \"transform_request\": 2,\n 
\"match_route\": 0\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"config\": {\n \"pass_with_apikey\": true,\n \"auth_module\": null,\n \"module\": \"auth_mod_in_memory_auth\"\n },\n \"plugin_index\": {\n \"validate_access\": 1\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AdditionalHeadersIn\",\n \"config\": {\n \"headers\": {\n \"my-expr-header.apikey.unknown-tag\": \"${apikey.tags['0':'no-found-tag']}\",\n \"my-expr-header.request.uri\": \"${req.uri}\",\n \"my-expr-header.ctx.replace-field-all-value\": \"${ctx.foo.replaceAll('o','a')}\",\n \"my-expr-header.env.unknown-field\": \"${env.java_h:not-found-java_h}\",\n \"my-expr-header.service-id\": \"${service.id}\",\n \"my-expr-header.ctx.unknown-fields\": \"${ctx.foob|ctx.foot:not-found}\",\n \"my-expr-header.apikey.metadata\": \"${apikey.metadata.foo}\",\n \"my-expr-header.request.protocol\": \"${req.protocol}\",\n \"my-expr-header.service-domain\": \"${service.domain}\",\n \"my-expr-header.token.unknown-foo-field\": \"${token.foob:not-found-foob}\",\n \"my-expr-header.service-unknown-group\": \"${service.groups['0':'unkown group']}\",\n \"my-expr-header.env.path\": \"${env.PATH}\",\n \"my-expr-header.request.unknown-header\": \"${req.headers.foob:default value}\",\n \"my-expr-header.service-name\": \"${service.name}\",\n \"my-expr-header.token.foo-field\": \"${token.foob|token.foo}\",\n \"my-expr-header.request.path\": \"${req.path}\",\n \"my-expr-header.ctx.geolocation\": \"${ctx.geolocation.foo}\",\n \"my-expr-header.token.unknown-fields\": \"${token.foob|token.foob2:not-found}\",\n \"my-expr-header.request.unknown-query\": \"${req.query.foob:default value}\",\n \"my-expr-header.service-subdomain\": \"${service.subdomain}\",\n \"my-expr-header.date\": \"${date}\",\n \"my-expr-header.ctx.replace-field-value\": \"${ctx.foo.replace('o','a')}\",\n \"my-expr-header.apikey.name\": \"${apikey.name}\",\n \"my-expr-header.request.full-url\": 
\"${req.fullUrl}\",\n \"my-expr-header.ctx.default-value\": \"${ctx.foob:other}\",\n \"my-expr-header.service-tld\": \"${service.tld}\",\n \"my-expr-header.service-metadata\": \"${service.metadata.foo}\",\n \"my-expr-header.ctx.useragent\": \"${ctx.useragent.foo}\",\n \"my-expr-header.service-env\": \"${service.env}\",\n \"my-expr-header.request.host\": \"${req.host}\",\n \"my-expr-header.config.unknown-port-field\": \"${config.http.ports:not-found}\",\n \"my-expr-header.request.domain\": \"${req.domain}\",\n \"my-expr-header.token.replace-header-value\": \"${token.foo.replace('o','a')}\",\n \"my-expr-header.service-group\": \"${service.groups['0']}\",\n \"my-expr-header.ctx.foo\": \"${ctx.foo}\",\n \"my-expr-header.apikey.tag\": \"${apikey.tags['0']}\",\n \"my-expr-header.service-unknown-metadata\": \"${service.metadata.test:default-value}\",\n \"my-expr-header.apikey.id\": \"${apikey.id}\",\n \"my-expr-header.request.header\": \"${req.headers.foo}\",\n \"my-expr-header.request.method\": \"${req.method}\",\n \"my-expr-header.ctx.foo-field\": \"${ctx.foob|ctx.foo}\",\n \"my-expr-header.config.port\": \"${config.http.port}\",\n \"my-expr-header.token.unknown-foo\": \"${token.foo}\",\n \"my-expr-header.date-with-format\": \"${date.format('yyy-MM-dd')}\",\n \"my-expr-header.apikey.unknown-metadata\": \"${apikey.metadata.myfield:default value}\",\n \"my-expr-header.request.query\": \"${req.query.foo}\",\n \"my-expr-header.token.replace-header-all-value\": \"${token.foo.replaceAll('o','a')}\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nCreate an apikey or use the default generate apikey.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/apikeys' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"clientId\": \"api-apikey-id\",\n \"clientSecret\": \"api-apikey-secret\",\n \"clientName\": \"api-apikey-name\",\n \"description\": \"api-apikey-id-description\",\n \"authorizedGroup\": \"default\",\n 
\"enabled\": true,\n \"throttlingQuota\": 10,\n \"dailyQuota\": 10,\n \"monthlyQuota\": 10,\n \"tags\": [\"foo\"],\n \"metadata\": {\n \"fii\": \"bar\"\n }\n}\nEOF\n```\n\nThen try to call the first service.\n\n```sh\ncurl http://api.oto.tools:8080/api/\\?foo\\=bar \\\n-H \"Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyLCJmb28iOiJiYXIifQ.lV130dFXR3bNtWBkwwf9dLmfsRVmnZhfYF9gvAaRzF8\" \\\n-H \"Otoroshi-Client-Id: api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: api-apikey-secret\" \\\n-H \"foo: bar\" | jq\n```\n\nThis will returns the list of the received headers by the mirror.\n\n```json\n{\n ...\n \"headers\": {\n ...\n \"my-expr-header.date\": \"2021-11-26T10:54:51.112+01:00\",\n \"my-expr-header.ctx.foo\": \"no-ctx-foo\",\n \"my-expr-header.env.path\": \"/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin\",\n \"my-expr-header.apikey.id\": \"admin-api-apikey-id\",\n \"my-expr-header.apikey.tag\": \"one-tag\",\n \"my-expr-header.service-id\": \"expression-language-api-service\",\n \"my-expr-header.apikey.name\": \"Otoroshi Backoffice ApiKey\",\n \"my-expr-header.config.port\": \"8080\",\n \"my-expr-header.request.uri\": \"/api/?foo=bar\",\n \"my-expr-header.service-env\": \"prod\",\n \"my-expr-header.service-tld\": \"oto.tools\",\n \"my-expr-header.request.host\": \"api.oto.tools:8080\",\n \"my-expr-header.request.path\": \"/api/\",\n \"my-expr-header.service-name\": \"expression-language\",\n \"my-expr-header.ctx.foo-field\": \"no-ctx-foob-foo\",\n \"my-expr-header.ctx.useragent\": \"no-ctx-useragent.foo\",\n \"my-expr-header.request.query\": \"bar\",\n \"my-expr-header.service-group\": \"default\",\n \"my-expr-header.request.domain\": \"api.oto.tools\",\n \"my-expr-header.request.header\": \"bar\",\n \"my-expr-header.request.method\": \"GET\",\n \"my-expr-header.service-domain\": \"api.oto.tools\",\n \"my-expr-header.apikey.metadata\": \"bar\",\n \"my-expr-header.ctx.geolocation\": 
\"no-ctx-geolocation.foo\",\n \"my-expr-header.token.foo-field\": \"no-token-foob-foo\",\n \"my-expr-header.date-with-format\": \"2021-11-26\",\n \"my-expr-header.request.full-url\": \"http://api.oto.tools:8080/api/?foo=bar\",\n \"my-expr-header.request.protocol\": \"http\",\n \"my-expr-header.service-metadata\": \"no-meta-foo\",\n \"my-expr-header.ctx.default-value\": \"other\",\n \"my-expr-header.env.unknown-field\": \"not-found-java_h\",\n \"my-expr-header.service-subdomain\": \"api\",\n \"my-expr-header.token.unknown-foo\": \"no-token-foo\",\n \"my-expr-header.apikey.unknown-tag\": \"one-tag\",\n \"my-expr-header.ctx.unknown-fields\": \"not-found\",\n \"my-expr-header.token.unknown-fields\": \"not-found\",\n \"my-expr-header.request.unknown-query\": \"default value\",\n \"my-expr-header.service-unknown-group\": \"default\",\n \"my-expr-header.request.unknown-header\": \"default value\",\n \"my-expr-header.apikey.unknown-metadata\": \"default value\",\n \"my-expr-header.ctx.replace-field-value\": \"no-ctx-foo\",\n \"my-expr-header.token.unknown-foo-field\": \"not-found-foob\",\n \"my-expr-header.service-unknown-metadata\": \"default-value\",\n \"my-expr-header.config.unknown-port-field\": \"not-found\",\n \"my-expr-header.token.replace-header-value\": \"no-token-foo\",\n \"my-expr-header.ctx.replace-field-all-value\": \"no-ctx-foo\",\n \"my-expr-header.token.replace-header-all-value\": \"no-token-foo\",\n }\n}\n```\n\nThen try the second call to the webapp. Navigate on your browser to `http://webapp.oto.tools:8080`. 
Continue with `user@foo.bar` as the user and `password` as the credential.\n\nThis should output:\n\n```json\n{\n ...\n \"headers\": {\n ...\n \"my-expr-header.user\": \"User Otoroshi\",\n \"my-expr-header.user.email\": \"user@foo.bar\",\n \"my-expr-header.user.metadata\": \"roger\",\n \"my-expr-header.user.profile-field\": \"User Otoroshi\",\n \"my-expr-header.user.unknown-metadata\": \"not-found\",\n \"my-expr-header.user.unknown-profile-field\": \"not-found\",\n }\n}\n```"},{"name":"graphql-composer.md","id":"/topics/graphql-composer.md","url":"/topics/graphql-composer.html","title":"GraphQL Composer Plugin","content":"# GraphQL Composer Plugin\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\n> GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.\n[Official GraphQL website](https://graphql.org/)\n\nRESTful and GraphQL API development has become one of the most popular activities for companies as well as users in recent times. In fast-scaling companies, the multiplication of clients can cause the number of API needs to grow at scale.\n\nOtoroshi comes with a solution to create and meet your customers' needs without constantly creating and recreating APIs: the `GraphQL composer plugin`. The GraphQL Composer is a useful plugin to build a GraphQL API from multiple different sources. These sources can be REST APIs, GraphQL APIs or anything that supports the HTTP protocol. In fact, the plugin can define and expose, for each of your clients, a specific GraphQL schema which only corresponds to the needs of the customers.\n\n@@@ div { .centered-img }\n\n@@@\n\n\n## Tutorial\n\nLet's take an example to get a better view of this plugin. 
We want to build a schema with two types: \n\n* a user with a name and a password \n* a country with a name and its users.\n\nTo build this schema, we need to use three custom directives. A `directive` decorates part of a GraphQL schema or operation with additional configuration. Directives are preceded by the @ character, like so:\n\n* @ref:[rest](#directives) : to call a http rest service with dynamic path params\n* @ref:[permission](#directives) : to restrict the access to sensitive fields\n* @ref:[graphql](#directives) : to call a graphQL service by passing a url and the associated query\n\nThe final schema of our tutorial should look like this:\n```graphql\ntype Country {\n name: String\n users: [User] @rest(url: \"http://localhost:5000/countries/${item.name}/users\")\n}\n\ntype User {\n name: String\n password: String @permission(value: \"ADMIN\")\n}\n\ntype Query {\n users: [User] @rest(url: \"http://localhost:5000/users\", paginate: true)\n user(id: String): User @rest(url: \"http://localhost:5000/users/${params.id}\")\n countries: [Country] @graphql(url: \"https://countries.trevorblades.com\", query: \"{ countries { name }}\", paginate: true)\n}\n```\n\nNow that you know the GraphQL Composer basics and how it works, let's configure it in our project:\n\n* create a route using the new Otoroshi router describing the previous countries API\n* add the GraphQL composer plugin\n* configure the plugin with the schema\n* try to call it\n\n@@@ div { .centered-img }\n\n@@@\n\n### Setup environment\n\nFirst of all, we need to download the latest Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.15/otoroshi.jar'\n```\n\nNow, just run the command below to start Otoroshi, and watch the console to see the output.\n\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nNow, login to [the UI](http://otoroshi.oto.tools:8080) with \n```sh\nuser = admin@otoroshi.io\npassword = password\n```\n\n### Create 
our countries API\n\nThe first thing to do in any new API is of course to create a `route`. We need three pieces of information:\n\n* name: `My countries API`\n* frontend: exposed on `countries-api.oto.tools`\n* plugins: the list of plugins with only the `GraphQL composer` plugin\n\nLet's make a request call through the Otoroshi Admin API (with the default apikey), like the example below:\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n -d '{\n \"id\": \"countries-api\",\n \"name\": \"My countries API\",\n \"frontend\": {\n \"domains\": [\"countries.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.GraphQLBackend\"\n }\n ]\n}' \\\n -H \"Content-type: application/json\" \\\n -u admin-api-apikey-id:admin-api-apikey-secret\n```\n\n### Build the countries API \n\nLet's continue our API by patching the configuration of the GraphQL plugin with the complete schema.\n\n```sh\ncurl -X PUT 'http://otoroshi-api.oto.tools:8080/api/routes/countries-api' \\\n -d '{\n \"id\": \"countries-api\",\n \"name\": \"My countries API\",\n \"frontend\": {\n \"domains\": [\n \"countries.oto.tools\"\n ]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.GraphQLBackend\",\n \"config\": {\n \"schema\": \"type Country {\\n name: String\\n users: [User] @rest(url: \\\"http://localhost:8181/countries/${item.name}/users\\\", headers: \\\"{}\\\")\\n}\\n\\ntype Query {\\n users: [User] @rest(url: \\\"http://localhost:8181/users\\\", paginate: true, headers: \\\"{}\\\")\\n user(id: String): User @rest(url: \\\"http://localhost:8181/users/${params.id}\\\")\\n 
countries: [Country] @graphql(url: \\\"https://countries.trevorblades.com\\\", query: \\\"{ countries { name }}\\\", paginate: true)\\n}\\n\\ntype User {\\n name: String\\n password: String\\n}\\n\"\n }\n }\n ]\n}' \\\n -H \"Content-type: application/json\" \\\n -u admin-api-apikey-id:admin-api-apikey-secret\n```\n\nThe route is created, but it expects an API exposed on localhost:8181 to work. \n\nLet's create this simple API which returns a list of users and of countries. This should look like the following snippet.\nThe API uses express as its http server.\n\n```js\nconst express = require('express')\n\nconst app = express()\n\nconst users = [\n {\n name: 'Joe',\n password: 'password'\n },\n {\n name: 'John',\n password: 'password2'\n }\n]\n\nconst countries = [\n {\n name: 'Andorra',\n users: [users[0]]\n },\n {\n name: 'United Arab Emirates',\n users: [users[1]]\n }\n]\n\napp.get('/users', (_, res) => {\n return res.json(users)\n})\n\napp.get(`/users/:name`, (req, res) => {\n res.json(users.find(u => u.name === req.params.name))\n})\n\napp.get('/countries/:id/users', (req, res) => {\n const country = countries.find(c => c.name === req.params.id)\n\n if (country) \n return res.json(country.users)\n else \n return res.json([])\n})\n\napp.listen(8181, () => {\n console.log(`Listening on 8181`)\n});\n\n```\n\nLet's try to make a first call to our countries API.\n\n```sh\ncurl 'countries.oto.tools:9999/' \\\n--header 'Content-Type: application/json' \\\n--data-binary @- << EOF\n{\n \"query\": \"{\\n countries {\\n name\\n users {\\n name\\n }\\n }\\n}\"\n}\nEOF\n```\n\nYou should see the following content in your terminal.\n\n```json\n{\n \"data\": { \n \"countries\": [\n { \n \"name\":\"Andorra\",\n \"users\": [\n { \"name\":\"Joe\" }\n ]\n }\n ]\n }\n}\n```\n\nThe call graph should look like\n\n```\n1. Calls https://countries.trevorblades.com\n2. 
For each country:\n - extract the field name\n - call http://localhost:8181/countries/${country}/users to get the list of users for this country\n```\n\nYou may have noticed that we added an argument at the end of the graphql directive named `paginate`. It enables paging for clients accepting `limit` and `offset` parameters. These parameters are used by the plugin to filter and reduce the content.\n\nLet's make a new call that returns no countries.\n\n```sh\ncurl 'countries.oto.tools:9999/' \\\n--header 'Content-Type: application/json' \\\n--data-binary @- << EOF\n{\n \"query\": \"{\\n countries(limit: 0) {\\n name\\n users {\\n name\\n }\\n }\\n}\"\n}\nEOF\n```\n\nYou should see the following content in your terminal.\n\n```json\n{\n \"data\": { \n \"countries\": []\n }\n}\n```\n\nLet's move on to the next section to secure the sensitive fields of our API.\n\n### Basics of permissions \n\nThe permission directives have been created to protect the fields of the graphql schema. The validation process starts by creating a `context` for all incoming requests, based on the list of paths defined in the permissions field of the plugin. The permissions paths can refer to the request data (url, headers, etc), user credentials (api key, etc) and information about the matched route. Then the process can validate that the value or values are present in the `context`.\n\n@@@div { .simple-block }\n\n
\nPermission\n\n
\n\n*Arguments : value and unauthorized_value*\n\nThe permission directive can be used to secure a field with **one** value. The directive checks that a specific value is present in the `context`.\n\nTwo arguments are available: the first, named `value`, is required and designates the value to look for. The second, optional, named `unauthorized_value`, can be used to indicate the rejection message in the outgoing response.\n\n**Example**\n```js\ntype User {\n id: String @permission(\n value: \"FOO\", \n unauthorized_value: \"You're not authorized to get this field\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nAll permissions\n\n
\n\n*Arguments : values and unauthorized_value*\n\nThis directive is essentially the same as the previous one, except that it takes a list of values that must all be present in the `context`.\n\n**Example**\n```js\ntype User {\n id: String @allpermissions(\n values: [\"FOO\", \"BAR\"], \n unauthorized_value: \"FOO and BAR could not be found\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nOne permission of\n\n
\n*Arguments : values and unauthorized_value*\n\nThis directive takes a list of values and validates that at least one of them is in the `context`.\n\n**Example**\n```js\ntype User {\n id: String @onePermissionsOf(\n values: [\"FOO\", \"BAR\"], \n unauthorized_value: \"FOO or BAR could not be found\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nAuthorize\n\n
\n\n*Arguments : path, value and unauthorized_value*\n\nThe authorize directive has one more required argument, named `path`, which indicates the path to the value in the context. Unlike the previous three directives, the authorize directive doesn't search the entire context but only at the specified path.\n\n**Example**\n```js\ntype User {\n id: String @authorize(\n path: \"$.raw_request.headers.foo\", \n value: \"BAR\", \n unauthorized_value: \"Bar could not be found in the foo header\")\n}\n```\n@@@\n\nLet's restrict the password field to users that come with a `role` header with the value `ADMIN`.\n\n1. Patch the configuration of the API by adding the permissions in the configuration of the plugin.\n```json\n...\n \"permissions\": [\"$.raw_request.headers.role\"]\n...\n```\n\n2. Add a directive on the password field in the schema\n```graphql\ntype User {\n name: String\n password: String @permission(value: \"ADMIN\")\n}\n```\n\nLet's make a call with the role header\n\n```sh\ncurl 'countries.oto.tools:9999/' \\\n--header 'Content-Type: application/json' \\\n--header 'role: ADMIN' \\\n--data-binary @- << EOF\n{\n \"query\": \"{\\n countries(limit: 0) {\\n name\\n users {\\n name\\n password\\n }\\n }\\n}\"\n}\nEOF\n```\n\nNow try to change the value of the role header\n\n```sh\ncurl 'countries.oto.tools:9999/' \\\n--header 'Content-Type: application/json' \\\n--header 'role: USER' \\\n--data-binary @- << EOF\n{\n \"query\": \"{\\n countries(limit: 0) {\\n name\\n users {\\n name\\n password\\n }\\n }\\n}\"\n}\nEOF\n```\n\nThe error message should look like \n\n```json\n{\n \"errors\": [\n {\n \"message\": \"You're not authorized\",\n \"path\": [\n \"countries\",\n 0,\n \"users\",\n 0,\n \"password\"\n ],\n ...\n }\n ]\n}\n```\n\n\n# Glossary\n\n## Directives\n\n@@@div { .simple-block }\n\n
\nRest\n\n
\n\n*Arguments : url, method, headers, timeout, data, response_path, response_filter, limit, offset, paginate*\n\nThe rest directive is used to expose servers that communicate using the http protocol. The only required argument is the `url`.\n\n**Example**\n```js\ntype Query {\n users(limit: Int, offset: Int): [User] @rest(url: \"http://foo.oto.tools/users\", method: \"GET\")\n}\n```\n\nIt can be placed on the field of a query or of a type. To customize your urls, you can reference path parameters and fields of the parent object using the `params` and `item` variables respectively.\n\n**Example**\n```js\ntype Country {\n name: String\n phone: String\n users: [User] @rest(url: \"http://foo.oto.tools/users/${item.name}\")\n}\n\ntype Query {\n user(id: String): User @rest(url: \"http://foo.oto.tools/users/${params.id}\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nGraphQL\n\n
\n\n*Arguments : url, method, headers, timeout, query, data, response_path, response_filter, limit, offset, paginate*\n\nThe graphql directive is used to call another graphql server.\n\nThe required arguments are the `url` and the `query`.\n\n**Example**\n```js\ntype Query {\n countries: [Country] @graphql(url: \"https://countries.trevorblades.com/\", query: \"{ countries { name phone }}\")\n}\n\ntype Country {\n name: String\n phone: String\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nSoap\n\n
\n*Arguments: all following arguments*\n\nThe soap directive is used to call a soap service. \n\n```js\ntype Query {\n randomNumber: String @soap(\n jq_response_filter: \".[\\\"soap:Envelope\\\"] | .[\\\"soap:Body\\\"] | .[\\\"m:NumberToWordsResponse\\\"] | .[\\\"m:NumberToWordsResult\\\"]\", \n url: \"https://www.dataaccess.com/webservicesserver/numberconversion.wso\", \n envelope: \" \\n \\n \\n \\n 12 \\n \\n \\n\")\n}\n```\n\n\n##### Specific arguments\n\n| Argument | Type | Optional | Default value |\n| --------------------------- | --------- | -------- | ------------- |\n| envelope | *STRING* | Required | |\n| url | *STRING* | x | |\n| action | *STRING* | x | |\n| preserve_query | *BOOLEAN* | Required | true |\n| charset | *STRING* | x | |\n| convert_request_body_to_xml | *BOOLEAN* | Required | true |\n| jq_request_filter | *STRING* | x | |\n| jq_response_filter | *STRING* | x | |\n\n@@@\n\n@@@div { .simple-block }\n\n
\nJSON\n\n
\n*Arguments: path, data, paginate*\n\nThe json directive can be used to expose static or mocked data. The first usage is to define a raw stringified JSON in the `data` argument. The second usage is to set data in the predefined field of the GraphQL composer plugin and to specify a path in the `path` argument.\n\n**Example**\n```js\ntype Query {\n users_from_raw_data: [User] @json(data: \"[{\\\"firstname\\\":\\\"Foo\\\",\\\"name\\\":\\\"Bar\\\"}]\")\n users_from_predefined_data: [User] @json(path: \"users\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nMock\n\n
\n*Arguments: url*\n\nThe mock directive is to be used with the Mock Responses Plugin, also named `Charlatan`. This directive can be useful to mock your schema and start using your Otoroshi route before developing the underlying service.\n\n**Example**\n```js\ntype Query {\n users: @mock(url: \"/users\")\n}\n```\n\nThis example assumes that the Mock Responses plugin is set up on the route, and that an endpoint `/users` is available.\n\n@@@\n\n### List of directive arguments\n\n| Argument | Type | Optional | Default value |\n| ------------------ | ---------------- | --------------------------- | ------------- |\n| url | *STRING* | | |\n| method | *STRING* | x | GET |\n| headers | *STRING* | x | |\n| timeout | *INT* | x | 5000 |\n| data | *STRING* | x | |\n| path | *STRING* | x (only for json directive) | |\n| query | *STRING* | x | |\n| response_path | *STRING* | x | |\n| response_filter | *STRING* | x | |\n| limit | *INT* | x | |\n| offset | *INT* | x | |\n| value | *STRING* | | |\n| values | LIST of *STRING* | | |\n| path | *STRING* | | |\n| paginate | *BOOLEAN* | x | |\n| unauthorized_value | *STRING* | x (only for permissions directive) | |\n"},{"name":"http3.md","id":"/topics/http3.md","url":"/topics/http3.html","title":"HTTP3 support","content":"# HTTP3 support\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nHTTP3 server and client previews are available in otoroshi since version 1.5.14\n\n\n## Server\n\nto enable the http3 server preview, you need to enable the following flags\n\n```conf\notoroshi.next.experimental.netty-server.enabled = true\notoroshi.next.experimental.netty-server.http3.enabled = true\notoroshi.next.experimental.netty-server.http3.port = 10048\n```\n\nthen you will be able to send HTTP3 requests on port 10048. 
For instance, using [quiche-client](https://github.com/cloudflare/quiche)\n\n```sh\ncargo run --bin quiche-client -- --no-verify 'https://my-service.oto.tools:10048'\n```\n\n## Client\n\nto consume services exposed with HTTP3, just select the `HTTP/3.0` protocol in the backend target."},{"name":"index.md","id":"/topics/index.md","url":"/topics/index.html","title":"Detailed topics","content":"# Detailed topics\n\nIn this section, you will find information about various Otoroshi topics \n\n* @ref:[Proxy engine](./engine.md)\n* @ref:[WASM support](./wasm-usage.md)\n* @ref:[Chaos engineering](./chaos-engineering.md)\n* @ref:[TLS](./tls.md)\n* @ref:[Otoroshi's PKI](./pki.md)\n* @ref:[Monitoring](./monitoring.md)\n* @ref:[Events and analytics](./events-and-analytics.md)\n* @ref:[Developer portal with Daikoku](./dev-portal.md)\n* @ref:[Sessions management](./sessions-mgmt.md)\n* @ref:[The Otoroshi communication protocol](./otoroshi-protocol.md)\n* @ref:[Expression language](./expression-language.md)\n* @ref:[Otoroshi user rights](./user-rights.md)\n* @ref:[GraphQL composer](./graphql-composer.md)\n* @ref:[Secret vaults](./secrets.md)\n* @ref:[Otoroshi tunnels](./tunnels.md)\n* @ref:[Relay routing](./relay-routing.md)\n* @ref:[Alternative http backend](./netty-server.md)\n* @ref:[HTTP3 support](./http3.md)\n* @ref:[Anonymous reporting](./anonymous-reporting.md)\n\n@@@ index\n\n* [Proxy engine](./engine.md)\n* [WASM support](./wasm-usage.md)\n* [Chaos engineering](./chaos-engineering.md)\n* [TLS](./tls.md)\n* [Otoroshi's PKI](./pki.md)\n* [Monitoring](./monitoring.md)\n* [Events and analytics](./events-and-analytics.md)\n* [Developer portal with Daikoku](./dev-portal.md)\n* [Sessions management](./sessions-mgmt.md)\n* [The Otoroshi communication protocol](./otoroshi-protocol.md)\n* [Expression language](./expression-language.md)\n* [Otoroshi user rights](./user-rights.md)\n* [GraphQL composer](./graphql-composer.md)\n* [Secret vaults](./secrets.md)\n* [Otoroshi 
tunnels](./tunnels.md)\n* [Relay routing](./relay-routing.md)\n* [Alternative http backend](./netty-server.md)\n* [HTTP3 support](./http3.md)\n* [Anonymous reporting](./anonymous-reporting.md)\n \n@@@\n"},{"name":"monitoring.md","id":"/topics/monitoring.md","url":"/topics/monitoring.html","title":"Monitoring","content":"# Monitoring\n\nThe Otoroshi API exposes several endpoints to know more about instance health. All the following endpoints are exposed on the instance host through its IP address. They are also exposed on the otoroshi api hostname and the otoroshi backoffice hostname\n\n* `/health`: the health of the Otoroshi instance\n* `/metrics`: the metrics of the Otoroshi instance, either in JSON or Prometheus format using the `Accept` header (with `application/json` / `application/prometheus` values) or the `format` query param (with `json` or `prometheus` values)\n* `/live`: returns an http 200 response `{\"live\": true}` when the service is alive\n* `/ready`: returns an http 200 response `{\"ready\": true}` when the instance is ready to accept traffic (certs synced, plugins compiled, etc). if not, returns http 503 `{\"ready\": false}`\n* `/startup`: returns an http 200 response `{\"started\": true}` when the instance is ready to accept traffic (certs synced, plugins compiled, etc). if not, returns http 503 `{\"started\": false}`\n\nthose routes are also available on any hostname leading to otoroshi with a twist in the URL\n\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/health\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/metrics\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/live\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/ready\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/startup\n\n## Endpoints security\n\nThese endpoints are exposed publicly on the Otoroshi admin api, but you can remove the corresponding public pattern and query the endpoints using standard apikeys. 
If you don't want to use apikeys but don't want to expose the endpoints publicly, you can define two config variables (`otoroshi.health.accessKey` or `HEALTH_ACCESS_KEY` and `otoroshi.metrics.accessKey` or `OTOROSHI_METRICS_ACCESS_KEY`) that will hold an access key for the endpoints. Then you can call the endpoints with an `access_key` query param holding the value defined in the config. If you don't define `otoroshi.metrics.accessKey` but define `otoroshi.health.accessKey`, `otoroshi.metrics.accessKey` will have the value of `otoroshi.health.accessKey`.\n \n## Examples\n\nlet's say `otoroshi.health.accessKey` has value `MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY`\n\n```sh\n$ curl http://otoroshi-api.oto.tools:8080/health\?access_key\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n{\"otoroshi\":\"healthy\",\"datastore\":\"healthy\"}\n\n$ curl -H 'Accept: application/json' http://otoroshi-api.oto.tools:8080/metrics\?access_key\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n{\"version\":\"4.0.0\",\"gauges\":{\"attr.app.commit\":{\"value\":\"xxxx\"},\"attr.app.id\":{\"value\":\"xxxx\"},\"attr.cluster.mode\":{\"value\":\"Leader\"},\"attr.cluster.name\":{\"value\":\"otoroshi-leader-0\"},\"attr.instance.env\":{\"value\":\"prod\"},\"attr.instance.id\":{\"value\":\"xxxx\"},\"attr.instance.number\":{\"value\":\"0\"},\"attr.jvm.cpu.usage\":{\"value\":136},\"attr.jvm.heap.size\":{\"value\":1409},\"attr.jvm.heap.used\":{\"value\":112},\"internals.0.concurrent-requests\":{\"value\":1},\"internals.global.throttling-quotas\":{\"value\":2},\"jvm.attr.name\":{\"value\":\"2085@xxxx\"},\"jvm.attr.uptime\":{\"value\":2296900},\"jvm.attr.vendor\":{\"value\":\"JDK11\"},\"jvm.gc.PS-MarkSweep.count\":{\"value\":3},\"jvm.gc.PS-MarkSweep.time\":{\"value\":261},\"jvm.gc.PS-Scavenge.count\":{\"value\":12},\"jvm.gc.PS-Scavenge.time\":{\"value\":161},\"jvm.memory.heap.committed\":{\"value\":1477967872},\"jvm.memory.heap.init\":{\"val
ue\":1690304512},\"jvm.memory.heap.max\":{\"value\":3005218816},\"jvm.memory.heap.usage\":{\"value\":0.03916456777568639},\"jvm.memory.heap.used\":{\"value\":117698096},\"jvm.memory.non-heap.committed\":{\"value\":166445056},\"jvm.memory.non-heap.init\":{\"value\":7667712},\"jvm.memory.non-heap.max\":{\"value\":994050048},\"jvm.memory.non-heap.usage\":{\"value\":0.1523920694986979},\"jvm.memory.non-heap.used\":{\"value\":151485344},\"jvm.memory.pools.CodeHeap-'non-nmethods'.committed\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-nmethods'.max\":{\"value\":5832704},\"jvm.memory.pools.CodeHeap-'non-nmethods'.usage\":{\"value\":0.28408093398876405},\"jvm.memory.pools.CodeHeap-'non-nmethods'.used\":{\"value\":1656960},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.committed\":{\"value\":11796480},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.max\":{\"value\":122912768},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.usage\":{\"value\":0.09536102872567315},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.used\":{\"value\":11721088},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.committed\":{\"value\":37355520},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.max\":{\"value\":122912768},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.usage\":{\"value\":0.2538573047187417},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.used\":{\"value\":31202304},\"jvm.memory.pools.Compressed-Class-Space.committed\":{\"value\":14942208},\"jvm.memory.pools.Compressed-Class-Space.init\":{\"value\":0},\"jvm.memory.pools.Compressed-Class-Space.max\":{\"value\":367001600},\"jvm.memory.pools.Compressed-Class-Space.usage\":{\"value\":0.033858838762555805},\"jvm.memory.pools.Compressed-Class-Space.used\":{\"value\":12426248},\"jvm.memory.pools.Metasp
ace.committed\":{\"value\":99794944},\"jvm.memory.pools.Metaspace.init\":{\"value\":0},\"jvm.memory.pools.Metaspace.max\":{\"value\":375390208},\"jvm.memory.pools.Metaspace.usage\":{\"value\":0.25168142904782426},\"jvm.memory.pools.Metaspace.used\":{\"value\":94478744},\"jvm.memory.pools.PS-Eden-Space.committed\":{\"value\":349700096},\"jvm.memory.pools.PS-Eden-Space.init\":{\"value\":422576128},\"jvm.memory.pools.PS-Eden-Space.max\":{\"value\":1110966272},\"jvm.memory.pools.PS-Eden-Space.usage\":{\"value\":0.07505125052077188},\"jvm.memory.pools.PS-Eden-Space.used\":{\"value\":83379408},\"jvm.memory.pools.PS-Eden-Space.used-after-gc\":{\"value\":0},\"jvm.memory.pools.PS-Old-Gen.committed\":{\"value\":1127219200},\"jvm.memory.pools.PS-Old-Gen.init\":{\"value\":1127219200},\"jvm.memory.pools.PS-Old-Gen.max\":{\"value\":2253914112},\"jvm.memory.pools.PS-Old-Gen.usage\":{\"value\":0.014950035505168354},\"jvm.memory.pools.PS-Old-Gen.used\":{\"value\":33696096},\"jvm.memory.pools.PS-Old-Gen.used-after-gc\":{\"value\":23791152},\"jvm.memory.pools.PS-Survivor-Space.committed\":{\"value\":1048576},\"jvm.memory.pools.PS-Survivor-Space.init\":{\"value\":70254592},\"jvm.memory.pools.PS-Survivor-Space.max\":{\"value\":1048576},\"jvm.memory.pools.PS-Survivor-Space.usage\":{\"value\":0.59375},\"jvm.memory.pools.PS-Survivor-Space.used\":{\"value\":622592},\"jvm.memory.pools.PS-Survivor-Space.used-after-gc\":{\"value\":622592},\"jvm.memory.total.committed\":{\"value\":1644412928},\"jvm.memory.total.init\":{\"value\":1697972224},\"jvm.memory.total.max\":{\"value\":3999268864},\"jvm.memory.total.used\":{\"value\":269184904},\"jvm.thread.blocked.count\":{\"value\":0},\"jvm.thread.count\":{\"value\":82},\"jvm.thread.daemon.count\":{\"value\":11},\"jvm.thread.deadlock.count\":{\"value\":0},\"jvm.thread.deadlocks\":{\"value\":[]},\"jvm.thread.new.count\":{\"value\":0},\"jvm.thread.runnable.count\":{\"value\":25},\"jvm.thread.terminated.count\":{\"value\":0},\"jvm.thread.timed_waiting.cou
nt\":{\"value\":10},\"jvm.thread.waiting.count\":{\"value\":47}},\"counters\":{},\"histograms\":{},\"meters\":{},\"timers\":{}}\n\n$ curl -H 'Accept: application/prometheus' http://otoroshi-api.oto.tools:8080/metrics\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n# TYPE attr_jvm_cpu_usage gauge\nattr_jvm_cpu_usage 83.0\n# TYPE attr_jvm_heap_size gauge\nattr_jvm_heap_size 1409.0\n# TYPE attr_jvm_heap_used gauge\nattr_jvm_heap_used 220.0\n# TYPE internals_0_concurrent_requests gauge\ninternals_0_concurrent_requests 1.0\n# TYPE internals_global_throttling_quotas gauge\ninternals_global_throttling_quotas 3.0\n# TYPE jvm_attr_uptime gauge\njvm_attr_uptime 2372614.0\n# TYPE jvm_gc_PS_MarkSweep_count gauge\njvm_gc_PS_MarkSweep_count 3.0\n# TYPE jvm_gc_PS_MarkSweep_time gauge\njvm_gc_PS_MarkSweep_time 261.0\n# TYPE jvm_gc_PS_Scavenge_count gauge\njvm_gc_PS_Scavenge_count 12.0\n# TYPE jvm_gc_PS_Scavenge_time gauge\njvm_gc_PS_Scavenge_time 161.0\n# TYPE jvm_memory_heap_committed gauge\njvm_memory_heap_committed 1.477967872E9\n# TYPE jvm_memory_heap_init gauge\njvm_memory_heap_init 1.690304512E9\n# TYPE jvm_memory_heap_max gauge\njvm_memory_heap_max 3.005218816E9\n# TYPE jvm_memory_heap_usage gauge\njvm_memory_heap_usage 0.07680553268571043\n# TYPE jvm_memory_heap_used gauge\njvm_memory_heap_used 2.30817432E8\n# TYPE jvm_memory_non_heap_committed gauge\njvm_memory_non_heap_committed 1.66510592E8\n# TYPE jvm_memory_non_heap_init gauge\njvm_memory_non_heap_init 7667712.0\n# TYPE jvm_memory_non_heap_max gauge\njvm_memory_non_heap_max 9.94050048E8\n# TYPE jvm_memory_non_heap_usage gauge\njvm_memory_non_heap_usage 0.15262878997416435\n# TYPE jvm_memory_non_heap_used gauge\njvm_memory_non_heap_used 1.51720656E8\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__committed gauge\njvm_memory_pools_CodeHeap__non_nmethods__committed 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__init gauge\njvm_memory_pools_CodeHeap__non_nmethods__init 
2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__max gauge\njvm_memory_pools_CodeHeap__non_nmethods__max 5832704.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__usage gauge\njvm_memory_pools_CodeHeap__non_nmethods__usage 0.28408093398876405\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__used gauge\njvm_memory_pools_CodeHeap__non_nmethods__used 1656960.0\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__committed gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__committed 1.1862016E7\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__init gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__init 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__max gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__max 1.22912768E8\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__usage gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__usage 0.09610562183417755\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__used gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__used 1.1812608E7\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__committed gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__committed 3.735552E7\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__init gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__init 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__max gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__max 1.22912768E8\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__usage gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__usage 0.25493618368435084\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__used gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__used 3.1334912E7\n# TYPE jvm_memory_pools_Compressed_Class_Space_committed gauge\njvm_memory_pools_Compressed_Class_Space_committed 1.4942208E7\n# TYPE jvm_memory_pools_Compressed_Class_Space_init gauge\njvm_memory_pools_Compressed_Class_Space_init 0.0\n# TYPE 
jvm_memory_pools_Compressed_Class_Space_max gauge\njvm_memory_pools_Compressed_Class_Space_max 3.670016E8\n# TYPE jvm_memory_pools_Compressed_Class_Space_usage gauge\njvm_memory_pools_Compressed_Class_Space_usage 0.03386023385184152\n# TYPE jvm_memory_pools_Compressed_Class_Space_used gauge\njvm_memory_pools_Compressed_Class_Space_used 1.242676E7\n# TYPE jvm_memory_pools_Metaspace_committed gauge\njvm_memory_pools_Metaspace_committed 9.9794944E7\n# TYPE jvm_memory_pools_Metaspace_init gauge\njvm_memory_pools_Metaspace_init 0.0\n# TYPE jvm_memory_pools_Metaspace_max gauge\njvm_memory_pools_Metaspace_max 3.75390208E8\n# TYPE jvm_memory_pools_Metaspace_usage gauge\njvm_memory_pools_Metaspace_usage 0.25170985813247426\n# TYPE jvm_memory_pools_Metaspace_used gauge\njvm_memory_pools_Metaspace_used 9.4489416E7\n# TYPE jvm_memory_pools_PS_Eden_Space_committed gauge\njvm_memory_pools_PS_Eden_Space_committed 3.49700096E8\n# TYPE jvm_memory_pools_PS_Eden_Space_init gauge\njvm_memory_pools_PS_Eden_Space_init 4.22576128E8\n# TYPE jvm_memory_pools_PS_Eden_Space_max gauge\njvm_memory_pools_PS_Eden_Space_max 1.110966272E9\n# TYPE jvm_memory_pools_PS_Eden_Space_usage gauge\njvm_memory_pools_PS_Eden_Space_usage 0.17698545577448457\n# TYPE jvm_memory_pools_PS_Eden_Space_used gauge\njvm_memory_pools_PS_Eden_Space_used 1.96624872E8\n# TYPE jvm_memory_pools_PS_Eden_Space_used_after_gc gauge\njvm_memory_pools_PS_Eden_Space_used_after_gc 0.0\n# TYPE jvm_memory_pools_PS_Old_Gen_committed gauge\njvm_memory_pools_PS_Old_Gen_committed 1.1272192E9\n# TYPE jvm_memory_pools_PS_Old_Gen_init gauge\njvm_memory_pools_PS_Old_Gen_init 1.1272192E9\n# TYPE jvm_memory_pools_PS_Old_Gen_max gauge\njvm_memory_pools_PS_Old_Gen_max 2.253914112E9\n# TYPE jvm_memory_pools_PS_Old_Gen_usage gauge\njvm_memory_pools_PS_Old_Gen_usage 0.014950035505168354\n# TYPE jvm_memory_pools_PS_Old_Gen_used gauge\njvm_memory_pools_PS_Old_Gen_used 3.3696096E7\n# TYPE jvm_memory_pools_PS_Old_Gen_used_after_gc 
gauge\njvm_memory_pools_PS_Old_Gen_used_after_gc 2.3791152E7\n# TYPE jvm_memory_pools_PS_Survivor_Space_committed gauge\njvm_memory_pools_PS_Survivor_Space_committed 1048576.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_init gauge\njvm_memory_pools_PS_Survivor_Space_init 7.0254592E7\n# TYPE jvm_memory_pools_PS_Survivor_Space_max gauge\njvm_memory_pools_PS_Survivor_Space_max 1048576.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_usage gauge\njvm_memory_pools_PS_Survivor_Space_usage 0.59375\n# TYPE jvm_memory_pools_PS_Survivor_Space_used gauge\njvm_memory_pools_PS_Survivor_Space_used 622592.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_used_after_gc gauge\njvm_memory_pools_PS_Survivor_Space_used_after_gc 622592.0\n# TYPE jvm_memory_total_committed gauge\njvm_memory_total_committed 1.644478464E9\n# TYPE jvm_memory_total_init gauge\njvm_memory_total_init 1.697972224E9\n# TYPE jvm_memory_total_max gauge\njvm_memory_total_max 3.999268864E9\n# TYPE jvm_memory_total_used gauge\njvm_memory_total_used 3.82665128E8\n# TYPE jvm_thread_blocked_count gauge\njvm_thread_blocked_count 0.0\n# TYPE jvm_thread_count gauge\njvm_thread_count 82.0\n# TYPE jvm_thread_daemon_count gauge\njvm_thread_daemon_count 11.0\n# TYPE jvm_thread_deadlock_count gauge\njvm_thread_deadlock_count 0.0\n# TYPE jvm_thread_new_count gauge\njvm_thread_new_count 0.0\n# TYPE jvm_thread_runnable_count gauge\njvm_thread_runnable_count 25.0\n# TYPE jvm_thread_terminated_count gauge\njvm_thread_terminated_count 0.0\n# TYPE jvm_thread_timed_waiting_count gauge\njvm_thread_timed_waiting_count 10.0\n# TYPE jvm_thread_waiting_count gauge\njvm_thread_waiting_count 47.0\n```"},{"name":"netty-server.md","id":"/topics/netty-server.md","url":"/topics/netty-server.html","title":"Alternative HTTP server","content":"# Alternative HTTP server\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nwith the change of licence in Akka, we are experimenting around using Netty as http server for otoroshi 
(and getting rid of akka http)\n\nin `v1.5.14` we are introducing a new alternative http server based on [`reactor-netty`](https://projectreactor.io/docs/netty/release/reference/index.html). It also includes a preview of an HTTP3 server using [netty-incubator-codec-quic](https://github.com/netty/netty-incubator-codec-quic) and [netty-incubator-codec-http3](https://github.com/netty/netty-incubator-codec-http3)\n\n## The specs\n\nthis new server can start during the otoroshi boot sequence, accepts HTTP/1.1 (with and without TLS), H2C and H2 (with and without TLS) connections, and supports both standard HTTP calls and websocket calls.\n\n## Enable the server\n\nto enable the server, just turn on the following flag\n\n```conf\notoroshi.next.experimental.netty-server.enabled = true\n```\n\nnow you should see something like the following in the logs\n\n```log\n...\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)\nroot [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)\nroot [info] otoroshi-experimental-netty-server -\n...\n```\n\n## Server options\n\nyou can also set up the host and ports of the server using\n\n```conf\notoroshi.next.experimental.netty-server.host = \"0.0.0.0\"\notoroshi.next.experimental.netty-server.http-port = 10049\notoroshi.next.experimental.netty-server.https-port = 10048\n```\n\nyou can also enable access logs using\n\n```conf\notoroshi.next.experimental.netty-server.accesslog = true\n```\n\nand enable wiretapping using \n\n```conf\notoroshi.next.experimental.netty-server.wiretap = true\n```\n\nyou can also customize the number of worker threads using\n\n```conf\notoroshi.next.experimental.netty-server.thread = 0 # the system automatically assigns the right number of threads\n```\n\n## 
HTTP2\n\nyou can enable or disable HTTP2 with\n\n```conf\notoroshi.next.experimental.netty-server.http2.enabled = true\notoroshi.next.experimental.netty-server.http2.h2c = true\n```\n\n## HTTP3\n\nyou can enable or disable HTTP3 (preview) with\n\n```conf\notoroshi.next.experimental.netty-server.http3.enabled = true\notoroshi.next.experimental.netty-server.http3.port = 10048 # yes, it can be the same port as https because it's on the UDP stack\n```\n\nthe result will be something like\n\n\n```log\n...\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/3)\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)\nroot [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)\nroot [info] otoroshi-experimental-netty-server -\n...\n```\n\n## Native transport\n\nIt is possible to enable native transport for the server\n\n```conf\notoroshi.next.experimental.netty-server.native.enabled = true\notoroshi.next.experimental.netty-server.native.driver = \"Auto\"\n```\n\npossible values for `otoroshi.next.experimental.netty-server.native.driver` are \n\n- `Auto`: the server tries to find the best native option available\n- `Epoll`: the server uses the Epoll native transport for Linux environments\n- `KQueue`: the server uses the KQueue native transport for MacOS environments\n- `IOUring`: the server uses the IOUring native transport for Linux environments that support it (experimental, using [netty-incubator-transport-io_uring](https://github.com/netty/netty-incubator-transport-io_uring))\n\nwhen starting on a Mac, the result will be something like\n\n```log\n...\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!\nroot [info] 
otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - using KQueue native transport\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/3)\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)\nroot [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)\nroot [info] otoroshi-experimental-netty-server -\n...\n```\n\n## Env. variables\n\nyou can configure the server using the following env. variables\n\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ENABLED`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NEW_ENGINE_ONLY`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HOST`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_PORT`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTPS_PORT`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_WIRETAP`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ACCESSLOG`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_THREADS`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_ALLOW_DUPLICATE_CONTENT_LENGTHS`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_VALIDATE_HEADERS`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_H_2_C_MAX_CONTENT_LENGTH`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_INITIAL_BUFFER_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_HEADER_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_INITIAL_LINE_LENGTH`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_CHUNK_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_ENABLED`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_H2C`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_ENABLED`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_PORT`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAMS_BIDIRECTIONAL`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_REMOTE`\n* 
`OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_LOCAL`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_DATA`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_MAX_RECV_UDP_PAYLOAD_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_MAX_SEND_UDP_PAYLOAD_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_ENABLED`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_DRIVER`\n\n"},{"name":"otoroshi-protocol.md","id":"/topics/otoroshi-protocol.md","url":"/topics/otoroshi-protocol.html","title":"The Otoroshi communication protocol","content":"# The Otoroshi communication protocol\n\nThe exchange protocol secures the communication with an app. When it's enabled, Otoroshi will send, for each request, a value in a pre-selected token header, and will check for the same header in the response. On routes, you will have to use the `Otoroshi challenge token` plugin to enable it.\n\n### V1 challenge\n\nIf you enable secure communication for a given service with `V1 - simple values exchange` activated, you will have to add a filter on the target application that will take the `Otoroshi-State` header and return it in a header named `Otoroshi-State-Resp`. \n\n@@@ div { .centered-img }\n\n@@@\n\nyou can find an example project that implements the V1 challenge [here](https://github.com/MAIF/otoroshi/tree/master/demos/challenge)\n\n### V2 challenge\n\nIf you enable secure communication for a given service with `V2 - signed JWT token exchange` activated, you will have to add a filter on the target application that will take the `Otoroshi-State` header value containing a JWT token, verify its signature, then extract a claim named `state` and return a new JWT token in a header named `Otoroshi-State-Resp` with the `state` value in a claim named `state-resp`. By default, the signature algorithm is HMAC+SHA512 but you can choose your own. The sent and returned JWT tokens have short TTLs to avoid being replayed. 
You must validate the token's TTL. The audience of the response token must be `Otoroshi` and you have to specify `iat`, `nbf` and `exp`.\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can find an example project that implements the V2 challenge [here](https://github.com/MAIF/otoroshi/tree/master/demos/challenge)\n\n### Info. token\n\nOtoroshi also sends a JWT token in a header named `Otoroshi-Claim` that the target app can validate too. On routes, you will have to use the `Otoroshi info. token` plugin to enable it.\n\nThe `Otoroshi-Claim` is a JWT token containing some information about the service that is called and the client if available. You can choose between a legacy version of the token and a new one that is clearer and more structured.\n\nBy default, the otoroshi jwt token is signed with the `otoroshi.claim.sharedKey` config property (or using the `$CLAIM_SHAREDKEY` env. variable) and uses the `HMAC512` signing algorithm. But it is possible to customize how the token is signed from the service descriptor page in the `Otoroshi exchange protocol` section. \n\n@@@ div { .centered-img }\n\n@@@\n\nusing another signing algo.\n\n@@@ div { .centered-img }\n\n@@@\n\nhere you can choose the signing algorithm and the secret/keys used. You can use syntax like `${env.MY_ENV_VAR}` or `${config.my.config.path}` to provide secret/keys values. 
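To make the exchange concrete, here is a minimal, stdlib-only sketch of signing and validating such an HMAC-SHA512 JWT on the target app side. The function names are illustrative, not part of Otoroshi; a real application would normally use a JWT library and check every claim the verifier is configured with.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWT segments use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def unb64url(data: str) -> bytes:
    # restore the stripped padding before decoding
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))


def sign_hs512(payload: dict, shared_key: str) -> str:
    """Build a compact JWT signed with HMAC-SHA512 (the default algorithm)."""
    header = b64url(json.dumps({"typ": "JWT", "alg": "HS512"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(shared_key.encode(), f"{header}.{body}".encode(), hashlib.sha512).digest()
    return f"{header}.{body}.{b64url(sig)}"


def verify_otoroshi_claim(token: str, shared_key: str) -> dict:
    """Check the signature, issuer and expiration, then return the claims."""
    header_b64, body_b64, sig_b64 = token.split(".")
    expected = hmac.new(shared_key.encode(), f"{header_b64}.{body_b64}".encode(), hashlib.sha512).digest()
    if not hmac.compare_digest(expected, unb64url(sig_b64)):
        raise ValueError("invalid signature")
    claims = json.loads(unb64url(body_b64))
    if claims.get("iss") != "Otoroshi":
        raise ValueError("unexpected issuer")
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

The same signing primitive also covers the V2 challenge response: sign a payload carrying the extracted `state` value in a `state-resp` claim with the shared key, and return it in `Otoroshi-State-Resp`.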
\n\nFor example, for a service named `my-service` with a signing key `secret` and the `HMAC512` signing algorithm, the basic JWT token that will be sent should look like the following\n\n```\neyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJzdWIiOiItLSIsImF1ZCI6Im15LXNlcnZpY2UiLCJpc3MiOiJPdG9yb3NoaSIsImV4cCI6MTUyMTQ0OTkwNiwiaWF0IjoxNTIxNDQ5ODc2LCJqdGkiOiI3MTAyNWNjMTktMmFjNy00Yjk3LTljYzctMWM0ODEzYmM1OTI0In0.mRcfuFVFPLUV1FWHyL6rLHIJIu0KEpBkKQCk5xh-_cBt9cb6uD6enynDU0H1X2VpW5-bFxWCy4U4V78CbAQv4g\n```\n\nif you decode it, the payload will look something like\n\n```json\n{\n \"sub\": \"apikey_client_id\",\n \"aud\": \"my-service\",\n \"iss\": \"Otoroshi\",\n \"exp\": 1521449906,\n \"iat\": 1521449876,\n \"jti\": \"71025cc19-2ac7-4b97-9cc7-1c4813bc5924\"\n}\n```\n\nIf you want to validate the `Otoroshi-Claim` on the target app side to ensure that the input requests only come from `Otoroshi`, you will have to write an HTTP filter to do the job. For instance, if you want to write a filter to make sure that requests only come from Otoroshi, you can write something like the following (using playframework 2.6).\n\nScala\n: @@snip [filter.scala](../snippets/filter.scala)\n\nJava\n: @@snip [filter.java](../snippets/filter.java)\n"},{"name":"pki.md","id":"/topics/pki.md","url":"/topics/pki.html","title":"Otoroshi's PKI","content":"# Otoroshi's PKI\n\nWith Otoroshi, you can add your own certificates, your own CA and even create self signed certificates or certificates from CAs. You can enable auto renewal of those self signed or generated certificates. Certificates have to be created with the certificate chain and the private key in PEM format.\n\nAn Otoroshi instance always starts with 5 auto-generated certificates. \n\nThe highest certificate is the **Otoroshi Default Root CA Certificate**. 
This certificate is used by Otoroshi to sign the intermediate CA.\n\n**Otoroshi Default Intermediate CA Certificate**: the first intermediate CA that must be used to issue new certificates in Otoroshi. Creating certificates directly from the CA root certificate increases the risk of root certificate compromise, and if the CA root certificate is compromised, the entire trust infrastructure built by the SSL provider will fail.\n\nThis intermediate CA signs three certificates:\n\n* **Otoroshi Default Client certificate**: \n* **Otoroshi Default Jwt Signing Keypair**: the default keypair (composed of a public and a private key), exposed on `https://xxxxxx/.well-known/jwks.json`, that can be used to sign and verify JWT tokens\n* **Otoroshi Default Wildcard Certificate**: this certificate has `*.oto.tools` as common name. It can be very useful during the development phase\n\n## The PKI API\n\nOtoroshi's PKI can be managed using the admin api of otoroshi (by default the admin api is exposed on https://otoroshi-api.xxxxx)\n\nLink to the complete swagger section about the PKI: https://maif.github.io/otoroshi/swagger-ui/index.html#/pki\n\n* `POST` [/api/pki/certs/_letencrypt](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genLetsEncryptCert): generates a certificate using Let's Encrypt or any ACME compatible system\n* `POST` [/api/pki/certs/_p12](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.importCertFromP12): imports a .p12 file as client certificates\n* `POST` [/api/pki/certs/_valid](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.certificateIsValid): checks if a certificate is valid (based on its own data)\n* `POST` [/api/pki/certs/_data](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.certificateData): extracts data from a certificate\n* `POST` 
[/api/pki/certs](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genSelfSignedCert): generates a self-signed certificate\n* `POST` [/api/pki/csrs](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genCsr): generates a CSR\n* `POST` [/api/pki/keys](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genKeyPair): generates a keypair\n* `POST` [/api/pki/cas](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genSelfSignedCA): generates a self-signed CA\n* `POST` [/api/pki/cas/:ca/certs/_sign](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.signCert): signs a certificate based on a CSR\n* `POST` [/api/pki/cas/:ca/certs](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genCert): generates a certificate\n* `POST` [/api/pki/cas/:ca/cas](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genSubCA): generates a sub-CA\n\n## The PKI UI\n\nAll generated certificates are listed in the `https://xxxxxx/bo/dashboard/certificates` page. All those certificates can be used to serve traffic with TLS, perform mTLS calls, and sign and verify JWT tokens.\n\nThe PKI UI is composed of the following actions:\n\n* **Add item**: redirects the user to the certificate creation page. It’s useful when you already have a certificate (like a pem file) and want to load it into Otoroshi.\n* **Let's Encrypt certificate**: requests a certificate matching a given host from Let’s Encrypt\n* **Create certificate**: issues a certificate with an existing Otoroshi certificate as CA. 
You can create a client certificate, a server certificate or a keypair certificate that will be used to verify and sign JWT tokens.\n* **Import .p12 file**: loads a p12 file as a certificate\n\nUnder these buttons, you have the list of current certificates, imported or generated, revoked or not. For each certificate, you will find: \n\n* a **name** \n* a **description** \n* the **subject** \n* the **type** of certificate (CA / client / keypair / certificate)\n* the **revoked reason** (empty if not) \n* the **creation date** followed by its **expiration date**.\n\n## Exposed public keys\n\nAn Otoroshi certificate can be turned into and used as a keypair (a simple action that can be executed by editing a certificate or during its creation, or using the admin api). An Otoroshi keypair can be used to sign and verify JWT tokens with an asymmetric signature. Once a jwt token is signed with a keypair, it can be necessary to provide a way for services to verify the tokens received from Otoroshi. This use case is covered by the `Public key exposed` flag, available on each certificate.\n\nOtoroshi exposes each keypair with this flag enabled on the following routes:\n\n* `https://xxxxxxxxx.xxxxxxx.xx/.well-known/otoroshi/security/jwks.json`\n* `https://otoroshi-api.xxxxxxx.xx/.well-known/jwks.json`\n\nOn these routes, you will find the list of public keys exposed using [the JWK standard](https://datatracker.ietf.org/doc/html/rfc7517)\n\n\n## OCSP Responder\n\nOtoroshi is able to revoke a certificate, directly from the UI, and to add a revocation status to specify the reason. 
The revocation reason can be:\n\n* `VALID`: The certificate is not revoked.\n* `UNSPECIFIED`: Can be used to revoke certificates for reasons other than the specific codes.\n* `KEY_COMPROMISE`: It is known or suspected that the subject's private key or other aspects have been compromised.\n* `CA_COMPROMISE`: It is known or suspected that the CA's private key or other aspects of the CA have been compromised.\n* `AFFILIATION_CHANGED`: The subject's name or other information in the certificate has been modified but there is no cause to suspect that the private key has been compromised.\n* `SUPERSEDED`: The certificate has been superseded but there is no cause to suspect that the private key has been compromised.\n* `CESSATION_OF_OPERATION`: The certificate is no longer needed for the purpose for which it was issued but there is no cause to suspect that the private key has been compromised.\n* `CERTIFICATE_HOLD`: The certificate is temporarily revoked but there is no cause to suspect that the private key has been compromised.\n* `REMOVE_FROM_CRL`: The certificate has been unrevoked.\n* `PRIVILEGE_WITH_DRAWN`: The certificate was revoked because a privilege contained within that certificate has been withdrawn.\n* `AA_COMPROMISE`: It is known or suspected that aspects of the AA validated in the attribute certificate have been compromised.\n\nOtoroshi supports the Online Certificate Status Protocol for obtaining the revocation status of its certificates. The OCSP endpoint is also added to any generated certificate. This endpoint is available at `https://otoroshi-api.xxxxxx/.well-known/otoroshi/security/ocsp`\n\n## A.I.A: Authority Information Access\n\nOtoroshi provides a way to add the A.I.A in the certificate. 
This certificate extension contains:\n\n* Information about how to get the issuer of this certificate (CA issuer access method)\n* Address of the OCSP responder from where revocation of this certificate can be checked (OCSP access method)\n\n`https://xxxxxxxxxx/.well-known/otoroshi/security/certificates/:cert-id`"},{"name":"relay-routing.md","id":"/topics/relay-routing.md","url":"/topics/relay-routing.html","title":"Relay Routing","content":"# Relay Routing\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nRelay routing is the capability to forward traffic between otoroshi leader nodes based on the network location of the target. Let's say we have an otoroshi cluster split across 3 network zones. Each zone has \n\n- one or more datastore instances\n- one or more otoroshi leader instances\n- one or more otoroshi worker instances\n\nthe datastores are replicated across network zones in an active-active fashion. Each network zone also has applications, apis, etc. deployed. Sometimes the same application is deployed in multiple zones, sometimes not. \n\nit can quickly become a nightmare when you want to access an application deployed in one network zone from another network zone. You'll have to publicly expose this application to be able to access it from the other zone. This pattern is fine, but sometimes it's not enough. 
With `relay routing`, you will be able to flag your routes as being deployed in one zone or another, and let otoroshi handle all the heavy lifting to route the traffic to the right network zone for you.\n\n@@@ div { .centered-img }\n\n@@@\n\n\n@@@ warning { .margin-top-20 }\nthis feature may introduce additional latency as the call passes through relay nodes\n@@@\n\n## Otoroshi instance setup\n\nfirst of all, for every otoroshi instance deployed, you have to flag where the instance is deployed and, for leaders, how this instance can be contacted from other zones (this is a **MAJOR** requirement, without that, you won't be able to make relay routing work). Also, you'll have to enable the @ref:[new proxy engine](./engine.md).\n\nIn the otoroshi configuration file, for each instance, enable relay routing and configure where the instance is located and how the leader can be contacted\n\n```conf\notoroshi {\n ...\n cluster {\n mode = \"leader\" # or \"worker\" depending on the instance kind\n ...\n relay {\n enabled = true # enable relay routing\n leaderOnly = true # use leaders as the only kind of relay node\n location { # you can use all those parameters at the same time. 
There are no actual network concepts bound here, just some kind of tagging system, so you can use it as you wish\n provider = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_PROVIDER}\n zone = \"zone-1\"\n region = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_REGION}\n datacenter = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_DATACENTER}\n rack = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_RACK}\n }\n exposition {\n urls = [\"https://otoroshi-api-zone-1.my.domain:443\"]\n hostname = \"otoroshi-api-zone-1.my.domain\"\n clientId = \"apkid_relay-routing-apikey\"\n }\n }\n }\n}\n```\n\nalso, to make your leaders exposed by zone, do not hesitate to add domain names to the `otoroshi-admin-api` service and set up your DNS to bind those domains to the right place\n\n@@@ div { .centered-img }\n\n@@@\n\n## Route setup for an application deployed in only one zone\n\nNow, for any route/service deployed in only one zone, you will be able to flag it using its metadata as being deployed in one zone or another. The possible metadata keys are the following\n\n- `otoroshi-deployment-providers`\n- `otoroshi-deployment-regions`\n- `otoroshi-deployment-zones`\n- `otoroshi-deployment-dcs`\n- `otoroshi-deployment-racks`\n\nlet's say we set `otoroshi-deployment-zones=zone-1` on a route. If we call this route from an otoroshi instance where `otoroshi.cluster.relay.location.zone` is not `zone-1`, otoroshi will automatically forward the requests to an otoroshi leader node where `otoroshi.cluster.relay.location.zone` is `zone-1`\n\n## Route setup for an application deployed in multiple zones at the same time\n\nNow, for any route/service deployed in multiple zones at the same time, you will be able to flag it using its metadata as being deployed in some zones. 
The possible metadata keys are the following\n\n- `otoroshi-deployment-providers`\n- `otoroshi-deployment-regions`\n- `otoroshi-deployment-zones`\n- `otoroshi-deployment-dcs`\n- `otoroshi-deployment-racks`\n\nlet's say we set `otoroshi-deployment-zones=zone-1, zone-2` on a route. If we call this route from an otoroshi instance where `otoroshi.cluster.relay.location.zone` is neither `zone-1` nor `zone-2`, otoroshi will automatically forward the requests to an otoroshi leader node where `otoroshi.cluster.relay.location.zone` is `zone-1` or `zone-2` and load balance between them.\n\nalso, you will have to set up your targets to avoid trying to contact targets that are not actually in the current zone. To do that, you'll have to set the target predicate to `NetworkLocationMatch` and fill the possible locations according to the actual location of your target\n\n@@@ div { .centered-img }\n\n@@@\n\n## Demo\n\nyou can find a demo of this setup [here](https://github.com/MAIF/otoroshi/tree/master/demos/relay). This is a `docker-compose` setup with multiple networks to simulate network zones. You also have an otoroshi export to understand how to set up your routes/services\n"},{"name":"secrets.md","id":"/topics/secrets.md","url":"/topics/secrets.html","title":"Secrets management","content":"# Secrets management\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nSecrets are generally confidential values that should not appear in plain text in the application. There are several products that help you store, retrieve, and rotate these secrets securely. Otoroshi offers a mechanism to set up references to these secrets in its entities to benefit from the perks of your existing secrets management infrastructure. 
This feature only works with the @ref:[new proxy engine](./engine.md).\n\nA secret can be anything you want like an apikey secret, a certificate private key or password, a jwt verifier signing key, a password to a proxy, a value for a header, etc.\n\n## Enable secrets management in otoroshi\n\nBy default secrets management is disabled. You can enable it by setting `otoroshi.vaults.enabled` or `${OTOROSHI_VAULTS_ENABLED}` to `true`.\n\n## Global configuration\n\nSecrets management can only be configured using the otoroshi static configuration file (also using the jvm args mechanism). \nThe configuration is located at `otoroshi.vaults` where you can find the global configuration of the secrets management system and the configurations for each enabled secrets management backend. Basically it looks like\n\n```conf\nvaults {\n enabled = false\n enabled = ${?OTOROSHI_VAULTS_ENABLED}\n secrets-ttl = 300000 # 5 minutes\n secrets-ttl = ${?OTOROSHI_VAULTS_SECRETS_TTL}\n cached-secrets = 10000\n cached-secrets = ${?OTOROSHI_VAULTS_CACHED_SECRETS}\n read-timeout = 10000 # 10 seconds\n read-timeout = ${?OTOROSHI_VAULTS_READ_TIMEOUT}\n # if enabled, only leader nodes fetch the secrets.\n # entities with secret values filled are then sent to workers when they poll the cluster state.\n # only works if `otoroshi.cluster.autoUpdateState=true`\n leader-fetch-only = false\n leader-fetch-only = ${?OTOROSHI_VAULTS_LEADER_FETCH_ONLY}\n env {\n type = \"env\"\n prefix = ${?OTOROSHI_VAULTS_ENV_PREFIX}\n }\n}\n```\n\nyou can see here the global configuration and a default backend configured that can retrieve secrets from environment variables. 
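To picture how reference resolution behaves with the `secrets-ttl` cache described above, here is a minimal sketch in Python. This is an illustration only, not Otoroshi's actual implementation (Otoroshi is written in Scala); the `SecretResolver` class and its backend lookup functions are hypothetical names. It substitutes `${vault://...}` references using pluggable backends and caches resolved values for the configured TTL:

```python
import re
import time

# matches references like ${vault://my_env/apikey_secret}
VAULT_REF = re.compile(r"\$\{vault://([^/}]+)/([^}]+)\}")

class SecretResolver:
    def __init__(self, backends, secrets_ttl_ms=300000):
        self.backends = backends          # vault name -> lookup function
        self.ttl = secrets_ttl_ms / 1000.0
        self.cache = {}                   # (vault, path) -> (value, fetched_at)

    def _fetch(self, vault, path):
        now = time.time()
        hit = self.cache.get((vault, path))
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]                 # still fresh, no backend call
        value = self.backends[vault](path)
        self.cache[(vault, path)] = (value, now)
        return value

    def resolve(self, text):
        # replace every vault reference found in an entity field by its secret value
        return VAULT_REF.sub(lambda m: self._fetch(m.group(1), m.group(2)), text)

# hypothetical "env"-style backend reading from a dict instead of real env variables
env = {"APIKEY_SECRET": "verysecret"}
resolver = SecretResolver({"my_env": lambda path: env[path.upper()]})
print(resolver.resolve("client_secret=${vault://my_env/apikey_secret}"))  # client_secret=verysecret
```

The TTL cache is what lets otoroshi avoid hitting the secrets backend on every request while still picking up rotated values after `secrets-ttl` elapses.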
\n\nThe configuration keys can be used for \n\n- `secrets-ttl`: the number of milliseconds before the secret value is read again from the backend\n- `cached-secrets`: the number of secrets that will be cached on an otoroshi instance\n- `read-timeout`: the timeout (in milliseconds) to read a secret from a backend\n\n## Entities with secrets management\n\nthe entities that support secrets management are the following \n\n- `routes`\n- `services`\n- `service_descriptors`\n- `apikeys`\n- `certificates`\n- `jwt_verifiers`\n- `authentication_modules`\n- `targets`\n- `backends`\n- `tcp_services`\n- `data_exporters`\n\n## Define a reference to a secret\n\nin the previously listed entities, you can define, almost everywhere, references to a secret using the following syntax:\n\n`${vault://name_of_the_vault/secret/of/the/path}`\n\nlet's say I define a new apikey with the following value as secret `${vault://my_env/apikey_secret}` with the following secrets management configuration\n\n```conf\nvaults {\n enabled = true\n secrets-ttl = 300000\n cached-secrets = 10000\n read-timeout = 10000\n my_env {\n type = \"env\"\n }\n}\n```\n\nif the machine running otoroshi has an environment variable named `APIKEY_SECRET` with the value `verysecret`, then you will be able to call an api with the defined apikey `client_id` and a `client_secret` value of `verysecret`\n\n```sh\ncurl 'http://my-awesome-api.oto.tools:8080/api/stuff' -u awesome_apikey:verysecret\n```\n\n## Possible backends\n\nOtoroshi comes with the support of several secrets management backends.\n\n### Environment variables\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"env\"\n prefix = \"the_prefix_added_to_the_name_of_the_env_variable\"\n }\n}\n```\n\n### Hashicorp Vault\n\na backend for [Hashicorp Vault](https://www.vaultproject.io/). 
Right now we only support KV engines.\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"hashicorp-vault\"\n url = \"http://127.0.0.1:8200\"\n mount = \"kv\" # the name of the secret store in vault\n kv = \"v2\" # the version of the kv store (v1 or v2)\n token = \"root\" # the token that can access to your secrets\n }\n}\n```\n\nyou should define your references like `${vault://hashicorp_vault/secret/path/key_name}`.\n\n\n### Azure Key Vault\n\na backend for [Azure Key Vault](https://azure.microsoft.com/en-en/services/key-vault/). Right now we only support secrets and not keys and certificates.\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"azure\"\n url = \"https://keyvaultname.vault.azure.net\"\n api-version = \"7.2\" # the api version of the vault\n tenant = \"xxxx-xxx-xxx\" # your azure tenant id, optional\n client_id = \"xxxxx\" # your azure client_id\n client_secret = \"xxxxx\" # your azure client_secret\n # token = \"xxx\" possible if you have a long lived existing token. will take over tenant / client_id / client_secret\n }\n}\n```\n\nyou should define your references like `${vault://azure_vault/secret_name/secret_version}`. 
`secret_version` is mandatory\n\nIf you want to use certificates and keys objects from the azure key vault, you will have to specify an option in the reference named `azure_secret_kind` with possible value `certificate`, `privkey`, `pubkey` like the following :\n\n```\n${vault://azure_vault/myprivatekey/secret_version?azure_secret_kind=privkey}\n```\n\n### AWS Secrets Manager\n\na backend for [AWS Secrets Manager](https://aws.amazon.com/en/secrets-manager/)\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"aws\"\n access-key = \"key\"\n access-key-secret = \"secret\"\n region = \"eu-west-3\" # the aws region of your secrets management\n }\n}\n```\n\nyou should define your references like `${vault://aws_vault/secret_name/secret_version}`. `secret_version` is optional\n\n### Google Cloud Secrets Manager\n\na backend for [Google Cloud Secrets Manager](https://cloud.google.com/secret-manager)\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"gcloud\"\n url = \"https://secretmanager.googleapis.com\"\n apikey = \"secret\"\n }\n}\n```\n\nyou should define your references like `${vault://gcloud_vault/projects/foo/secrets/bar/versions/the_version}`. 
`the_version` can be `latest`\n\n### AlibabaCloud Secrets Manager\n\na backend for [AlibabaCloud Secrets Manager](https://www.alibabacloud.com/help/en/doc-detail/152001.html)\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"alibaba-cloud\"\n url = \"https://kms.eu-central-1.aliyuncs.com\"\n access-key-id = \"access-key\"\n access-key-secret = \"secret\"\n }\n}\n```\n\nyou should define your references like `${vault://alibaba_vault/secret_name}`\n\n\n### Kubernetes Secrets\n\na backend for [Kubernetes secrets](https://kubernetes.io/en/docs/concepts/configuration/secret/)\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"kubernetes\"\n # see the configuration of the kubernetes plugin, \n # by default, if the pod is well configured, \n # you don't have to setup anything\n }\n}\n```\n\nyou should define your references like `${vault://k8s_vault/namespace/secret_name/key_name}`. `key_name` is optional. If present, otoroshi will try to look up `key_name` in the secret's `stringData`; if not defined, the secret's `data` will be base64 decoded and used.\n\n\n### Izanami config.\n\na backend for [Izanami config.](https://maif.github.io/izanami/manual/)\n\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"izanami\"\n url = \"http://127.0.0.1:8200\"\n client-id = \"client\"\n client-secret = \"secret\"\n }\n}\n```\n\nyou should define your references like `${vault://izanami_vault/the:secret:id/key_name}`. 
`key_name` is optional if the secret value is not a json object\n\n### Spring Cloud Config\n\na backend for [Spring Cloud Config.](https://docs.spring.io/spring-cloud-config/docs/current/reference/html/)\n\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"spring-cloud\"\n url = \"http://127.0.0.1:8000\"\n root = \"myapp/prod\"\n headers {\n authorization = \"Basic xxxx\"\n }\n }\n}\n```\n\nyou should define your references like `${vault://spring_vault/the/path/of/the/value}` where `/the/path/of/the/value` is the path of the value.\n\n### Http backend\n\na backend that uses the result of an http endpoint\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"http\"\n url = \"http://127.0.0.1:8000/endpoint/for/config\"\n headers {\n authorization = \"Basic xxxx\"\n }\n }\n}\n```\n\nyou should define your references like `${vault://http_vault/the/path/of/the/value}` where `/the/path/of/the/value` is the path of the value.\n"},{"name":"sessions-mgmt.md","id":"/topics/sessions-mgmt.md","url":"/topics/sessions-mgmt.html","title":"Sessions management","content":"# Sessions management\n\n## Admins\n\nAll users logged in to an Otoroshi instance are administrators. A user session is created for each successful connection to the UI. \n\nThese sessions are listed in the `Admin users sessions` page (available in the cog icon menu or at this location of your instance `/bo/dashboard/sessions/admin`).\n\nAn admin user session is composed of: \n\n* `name`: the name of the connected user\n* `email`: the unique email\n* `Created at`: the creation date of the user session\n* `Expires at`: the date when the user session is dropped\n* `Profile`: the user profile, in JSON format, containing the name, email and other linked metadata\n* `Rights`: the list of rules authorizing the connected user on each tenant and team.\n* `Discard session`: action to kill a session. 
On click, a modal will appear with the session ID\n\nIn the `Admin users sessions` page, you have two more actions:\n\n* `Discard all sessions`: kills all current sessions (including the session of the owner of this action)\n* `Discard old sessions`: kills all outdated sessions\n\n## Private apps\n\nEvery user logged in to a protected application has a private user session.\n\nThese sessions are listed in the `Private apps users sessions` page (available in the cog icon menu or at this location of your instance `/bo/dashboard/sessions/private`).\n\nA private user session is composed of: \n\n* `name`: the name of the connected user\n* `email`: the unique email\n* `Created at`: the creation date of the user session\n* `Expires at`: the date when the user session is dropped\n* `Profile`: the user profile, in JSON format, containing the name, email and other linked metadata\n* `Meta.`: the list of metadata added by the authentication module.\n* `Tokens`: the list of tokens received from the identity provider used. In the case of in-memory authentication, this part will remain empty.\n* `Discard session`: action to kill a session. On click, a modal will appear with the session ID\n"},{"name":"tls.md","id":"/topics/tls.md","url":"/topics/tls.html","title":"TLS","content":"# TLS\n\nas you might have understood, otoroshi can store TLS certificates and use them dynamically. It means that once a certificate is imported or created in otoroshi, you can immediately use it to serve http requests over TLS, to call https backends that require mTLS or that do not have certificates signed by a globally known authority.\n\n## TLS termination\n\nany certificate added to otoroshi with a valid `CN` and `SANs` can be used in the following seconds to serve https requests. If you do not provide a private key with a certificate chain, the certificate will only be trusted like a CA. 
If you want to perform mTLS calls on you otoroshi instance, do not forget to enabled it (it is disabled by default for performance reasons as the TLS handshake is bigger with mTLS enabled)\n\n```sh\notoroshi.ssl.fromOutside.clientAuth=None|Want|Need\n```\n\nor using env. variables\n\n```sh\nSSL_OUTSIDE_CLIENT_AUTH=None|Want|Need\n```\n\n### TLS termination configuration\n\nYou can configure TLS termination statically using config. file or env. variables. Everything is available at `otoroshi.tls`\n\n```conf\notoroshi {\n tls {\n # the cipher suites used by otoroshi TLS termination\n cipherSuitesJDK11 = [\"TLS_AES_128_GCM_SHA256\", \"TLS_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_DHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_DHE_DSS_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_DHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_DHE_DSS_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_RSA_WITH_AES_256_CBC_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_DHE_RSA_WITH_AES_256_CBC_SHA256\", \"TLS_DHE_DSS_WITH_AES_256_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_RSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDH_RSA_WITH_AES_256_CBC_SHA\", \"TLS_DHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_DHE_DSS_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256\", 
\"TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_DHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_RSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDH_RSA_WITH_AES_128_CBC_SHA\", \"TLS_DHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA\", \"TLS_EMPTY_RENEGOTIATION_INFO_SCSV\"]\n cipherSuitesJDK8 = [\"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_RSA_WITH_AES_256_CBC_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_DHE_RSA_WITH_AES_256_CBC_SHA256\", \"TLS_DHE_DSS_WITH_AES_256_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_RSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDH_RSA_WITH_AES_256_CBC_SHA\", \"TLS_DHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_DHE_DSS_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_DHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_RSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDH_RSA_WITH_AES_128_CBC_SHA\", \"TLS_DHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_DHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_DHE_DSS_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", 
\"TLS_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_DHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_DHE_DSS_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA\", \"SSL_RSA_WITH_3DES_EDE_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA\", \"TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA\", \"SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA\", \"SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA\", \"TLS_EMPTY_RENEGOTIATION_INFO_SCSV\"]\n cipherSuites = []\n # the protocols used by otoroshi TLS termination\n protocolsJDK11 = [\"TLSv1.3\", \"TLSv1.2\", \"TLSv1.1\", \"TLSv1\"]\n protocolsJDK8 = [\"SSLv2Hello\", \"TLSv1\", \"TLSv1.1\", \"TLSv1.2\"]\n protocols = []\n # the JDK cacert access\n cacert {\n path = \"$JAVA_HOME/lib/security/cacerts\"\n password = \"changeit\"\n }\n # the mtls mode\n fromOutside {\n clientAuth = \"None\"\n clientAuth = ${?SSL_OUTSIDE_CLIENT_AUTH}\n }\n # the default trust mode\n trust {\n all = false\n all = ${?OTOROSHI_SSL_TRUST_ALL}\n }\n # some initial cacert access, useful to include non standard CA when starting (file paths)\n initialCacert = ${?CLUSTER_WORKER_INITIAL_CACERT}\n initialCacert = ${?INITIAL_CACERT}\n initialCert = ${?CLUSTER_WORKER_INITIAL_CERT}\n initialCert = ${?INITIAL_CERT}\n initialCertKey = ${?CLUSTER_WORKER_INITIAL_CERT_KEY}\n initialCertKey = ${?INITIAL_CERT_KEY}\n # initialCerts = [] \n }\n}\n```\n\n\n### TLS termination settings\n\nIt is possible to adjust the behavior of the TLS termination from the `danger zone` at the `Tls Settings` section. Here you can either define that a non-matching SNI call will use a random TLS certtificate to reply or will use a default domain (the TLS certificate associated to this domain) to reply. Here you can also choose if you want to trust all the CAs trusted by your JDK when performing TLS calls `Trust JDK CAs (client)` or when receiving mTLS calls `Trust JDK CAs (server)`. 
If you disable the latter, it is possible to select the list of CAs presented to the client during the mTLS handshake.\n\n### Certificates auto generation\n\nit is also possible to generate non-existing certificates on the fly without losing the request. If you are interested in this feature, you can enable it in the `danger zone` at the `Auto Generate Certificates` section. Here you'll have to enable it and select the CA that will generate the certificates. Of course, the client will have to trust the selected CA. You can also add filters to choose which domains are allowed to generate certificates or not. The `Reply Nicely` flag is used to reply with a nice error message (i.e. human readable) telling that it's not possible to have an auto-generated certificate for the current domain. \n\n## Backends TLS and mTLS calls\n\nFor any call to a backend, it is possible to customize the TLS behavior \n\n@@@ div { .centered-img }\n\n@@@\n\nhere you can define your level of trust (trust all, loose verification) or even select one or more CAs you will trust for the following backend calls. You can also select the client certificate that will be used for the following backend calls\n\n## Keypair for signing and verification\n\nIt is also possible to use the keypair contained in a certificate to sign JWT tokens and verify their signatures. You can mark an existing certificate in otoroshi as a keypair using the `keypair` flag on the certificate page.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"tunnels.md","id":"/topics/tunnels.md","url":"/topics/tunnels.html","title":"Otoroshi tunnels","content":"# Otoroshi tunnels\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nSometimes, exposing apis that live in our private network can be a nightmare, especially from a networking point of view. 
\nWith otoroshi tunnels, this is now trivial, as long as your internal otoroshi (that lives inside your private network) is able to contact an external otoroshi (exposed on the internet).\n\n@@@ warning { .margin-top-20 }\nYou have to enable cluster mode (Leader or Worker) to make this feature work. As this feature is experimental, we only support simple http requests right now. Server Sent Events and Websocket requests are not supported at the moment.\n@@@\n\n## How Otoroshi tunnels work\n\nthe main idea behind otoroshi tunnels is that the connection between your private network and the public network is initiated by the private network side. You don't have to expose a part of your private network, create a DMZ or whatever, you just have to authorize your private network otoroshi instance to contact your public network otoroshi instance.\n\n@@@ div { .centered-img }\n\n@@@\n\nonce the persistent tunnel has been created, you can create routes on the public otoroshi instance that use the otoroshi `Remote tunnel calls` plugin to target your remote routes through the designated tunnel instance \n\n\n@@@ div { .centered-img }\n\n@@@\n\n@@@ warning { .margin-top-20 }\nthis feature may introduce additional latency as the call passes through otoroshi tunnels\n@@@\n\n## Otoroshi tunnel example\n\nfirst you have to enable the tunnels feature in your otoroshi configuration (on both public and private instances)\n\n```conf\notoroshi {\n ...\n tunnels {\n enabled = true\n enabled = ${?OTOROSHI_TUNNELS_ENABLED}\n ...\n }\n}\n```\n\nthen you can set up a tunnel instance on your private instance to contact your public instance\n\n```conf\notoroshi {\n ...\n tunnels {\n enabled = true\n ...\n public-apis {\n id = \"public-apis\"\n name = \"public apis tunnel\"\n url = \"https://otoroshi-api.company.com:443\"\n host = \"otoroshi-api.company.com\"\n clientId = \"xxx\"\n clientSecret = \"xxxxxx\"\n # ipAddress = \"127.0.0.1\" # optional: ip address of the public instance admin api\n # tls { # 
optional: TLS settings to access the public instance admin api\n # ... \n # }\n # export-routes = true # optional: send routes information to the remote otoroshi instance to facilitate remote route exposition\n # export-routes-tag = \"tunnel-exposed\" # optional: only send routes information if the route has this tag\n }\n }\n}\n```\n\nNow when your private otoroshi instance boots, a persistent tunnel will be established between the private and public instances. \nNow let's say you have a private api exposed on `api-a.company.local` on your private otoroshi instance and you want to expose it on your public otoroshi instance. \n\nFirst create a new route exposed on `api-a.company.com` that targets `https://api-a.company.local:443`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen add the `Remote tunnel calls` plugin to your route and set the tunnel id to `public-apis` to match the id you set in the otoroshi config file\n\n@@@ div { .centered-img }\n\n@@@\n\nadd all the plugins you need to secure this brand new public api and call it\n\n```sh\ncurl \"https://api-a.company.com/users\" | jq\n```\n\n## Easily expose your remote services\n\nyou can see all the connected tunnel instances of an otoroshi instance on the `Connected tunnels` page (`Cog icon` / `Connected tunnels`). For each tunnel instance you will be able to check the tunnel health and also to easily expose all the routes available on the other end of the tunnel. Just click on the `expose` button of the route you want to expose, and a new route will be created with the `Remote tunnel calls` plugin already set up.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"user-rights.md","id":"/topics/user-rights.md","url":"/topics/user-rights.html","title":"Otoroshi user rights","content":"# Otoroshi user rights\n\nIn Otoroshi, all users are considered **Administrators**. This choice is reinforced by the fact that Otoroshi is designed to be an administrator user interface and not an interface for users who simply want to view information. 
For this type of use, we encourage using the admin API rather than giving access to the user interface.\n\nThe Otoroshi rights are expressed as a list of authorizations on **organizations** and **teams**. \n\nLet's take an example where we want to authorize an administrator user on all organizations and teams.\n\nThe list of rights will be:\n\n```json\n[\n {\n \"tenant\": \"*:rw\", # (1)\n \"teams\": [\"*:rw\"] # (2)\n }\n]\n```\n\n* (1): this field, separated by a colon, indicates the name of the tenant and the associated rights. In our case, we set `*` to apply the rights to all tenants, and `rw` to get read and write access on them.\n* (2): the `teams` array field represents the list of rights, applied by team. The behaviour is the same as for the tenant field: we define the team or the wildcard, followed by the rights\n\nif you want a user that is an administrator for only one organization, the rights will be:\n\n```json\n[\n {\n \"tenant\": \"orga-1:rw\",\n \"teams\": [\"*:rw\"]\n }\n]\n```\n\nif you want a user that is an administrator for only two organizations, the rights will be:\n\n```json\n[\n {\n \"tenant\": \"orga-1:rw\",\n \"teams\": [\"*:rw\"]\n },\n {\n \"tenant\": \"orga-2:rw\",\n \"teams\": [\"*:rw\"]\n }\n]\n```\n\nif you want a user that can only see 3 teams of one organization and one team in the other, the rights will be:\n\n```json\n[\n {\n \"tenant\": \"orga-1:rw\",\n \"teams\": [\n \"team-1:rw\",\n \"team-2:rw\",\n \"team-3:rw\"\n ]\n },\n {\n \"tenant\": \"orga-2:rw\",\n \"teams\": [\n \"team-4:rw\"\n ]\n }\n]\n```\n\nThe list of possible rights for an organization or a team is:\n\n* **r**: read access\n* **w**: write access\n* **not**: no access to the resource\n\nThe possible tenants and teams are the tenants and teams you created, plus the wildcard `*` to define rights on all resources at once.\n\nThe user rights are defined by the @ref:[authentication 
modules](../entities/auth-modules.md).\n"},{"name":"wasm-usage.md","id":"/topics/wasm-usage.md","url":"/topics/wasm-usage.html","title":"Otoroshi and WASM","content":"# Otoroshi and WASM\n\nWebAssembly (WASM) is a simple machine model and executable format with an extensive specification. It is designed to be portable, compact, and execute at or near native speeds. Otoroshi already supports the execution of WASM files by providing different plugins that can be applied on routes. These plugins are:\n\n- `WasmRouteMatcher`: useful to define if a route can handle a request\n- `WasmPreRoute`: useful to check requests and extract useful data for the other plugins\n- `WasmAccessValidator`: useful to control access to a route (jump to the next section to learn more about it)\n- `WasmRequestTransformer`: transform the content of an incoming request (body, headers, etc ...)\n- `WasmBackend`: execute a WASM file as an Otoroshi target. Useful to implement user defined logic and functions at the edge\n- `WasmResponseTransformer`: transform the content of the response produced by the target\n- `WasmSink`: create a sink plugin to handle unmatched requests\n- `WasmRequestHandler`: create a plugin that can handle the whole request lifecycle\n- `WasmJob`: create a job backed by a wasm function\n\nTo simplify the process of WASM creation and usage, Otoroshi provides:\n\n- otoroshi ui integration: a full set of plugins that let you pick which WASM function to run at any point in a route\n- otoroshi `wasm-manager`: a code editor in the browser that lets you write your plugin in `Rust`, `TinyGo`, `Javascript` or `Assembly Script` without having to think about compiling it to WASM (you can find a complete tutorial about it @ref:[here](../how-to-s/wasm-manager-installation.md))\n\n@@@ div { .centered-img }\n\n@@@\n\n## Available tutorials\n\nhere is the list of available tutorials about wasm in Otoroshi\n\n1. @ref:[install a wasm manager](../how-to-s/wasm-manager-installation.md)\n2. 
@ref:[use a wasm plugin](../how-to-s/wasm-usage.md)\n\n## Wasm plugins entities\n\nOtoroshi provides a dedicated entity for wasm plugins. Those entities make it easy to declare a wasm plugin with a specific configuration only once and use it in multiple places. \n\nYou can find wasm plugin entities at `/bo/dashboard/wasm-plugins`\n\nIn a wasm plugin entity, you can define the source of your wasm plugin. You can choose between\n\n- `base64`: a base64 encoded wasm script\n- `file`: the path to a wasm script file\n- `http`: the url to a wasm script file\n- `wasm-manager`: the name of a wasm script compiled by a wasm manager instance\n\nthen you can define the number of memory pages available for each plugin instantiation, the name of the function you want to invoke, the config map of the VM and whether you want to keep a wasm vm alive during the request lifecycle to be able to reuse it in different plugin steps\n\n@@@ div { .centered-img }\n\n@@@\n\n## Otoroshi plugins api\n\nthe following parts illustrate the apis for the different plugins. Otoroshi uses [Extism](https://extism.org/) to handle content sharing between the JVM and the wasm VM. All structures are sent to/from the plugins as json strings. 
\n\nfor instance, if we want to write a `WasmBackendCall` plugin using javascript, we could write something like\n\n```js\nfunction backend_call() {\n const input_str = Host.inputString(); // here we get the context passed by otoroshi as a json string\n const backend_call_context = JSON.parse(input_str); // and parse it\n if (backend_call_context.path === '/hello') {\n Host.outputString(JSON.stringify({ // now we return a json string to otoroshi with the \"backend\" call result\n headers: { \n 'content-type': 'application/json' \n },\n body_json: { \n message: `Hello ${backend_call_context.request.query.name[0]}!` \n },\n status: 200,\n }));\n } else {\n Host.outputString(JSON.stringify({ // now we return a json string to otoroshi with the \"backend\" call result\n headers: { \n 'content-type': 'application/json' \n },\n body_json: { \n error: \"not found\"\n },\n status: 404,\n }));\n }\n return 0; // we return 0 to tell otoroshi that everything went fine\n}\n```\n\nthe following examples are written in rust. the rust macros provided by extism make the use of `Host.inputString` and `Host.outputString` unnecessary. Remember that they are still used under the hood and that the structures are passed as json strings.\n\ndo not forget to add the extism pdk library to your project to make it compile\n\nCargo.toml\n: @@snip [Cargo.toml](../../../../../tools/otoroshi-wasm-manager/server/templates/rust/Cargo.toml) \n\ngo.mod\n: @@snip [go.mod](../../../../../tools/otoroshi-wasm-manager/server/templates/go/go.mod) \n\npackage.json\n: @@snip [package.json](../../../../../tools/otoroshi-wasm-manager/server/templates/js/package.json) \n\n### WasmRouteMatcher\n\nA route matcher is a plugin that can help the otoroshi router to select a route instance based on your own custom predicate. 
Basically it's a function that returns a boolean answer.\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn matches_route(Json(_context): Json<WasmMatchRouteContext>) -> FnResult<Json<WasmMatchRouteResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmMatchRouteContext {\n pub snowflake: Option<String>,\n pub route: Route,\n pub request: RawRequest,\n pub config: Value,\n pub attrs: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmMatchRouteResponse {\n pub result: bool,\n}\n```\n\n### WasmPreRoute\n\nA pre-route plugin can be used to short-circuit a request or enrich it (maybe extracting your own kind of auth. token, etc) at the very beginning of the request handling process, just after the routing part, when a route has been chosen by the otoroshi router.\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn pre_route(Json(_context): Json<WasmPreRouteContext>) -> FnResult<Json<WasmPreRouteResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmPreRouteContext {\n pub snowflake: Option<String>,\n pub route: Route,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmPreRouteResponse {\n pub error: bool,\n pub attrs: Option<Value>,\n pub status: Option<u32>,\n pub headers: Option<HashMap<String, String>>,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n}\n```\n\n### WasmAccessValidator\n\nAn access validator plugin is typically used to verify if the request can continue or must be cancelled. 
For instance, the otoroshi apikey plugin is an access validator that checks if the apikey provided by the client is legit and authorized on the current route.\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn can_access(Json(_context): Json<WasmAccessValidatorContext>) -> FnResult<Json<WasmAccessValidatorResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmAccessValidatorContext {\n pub snowflake: Option<String>,\n pub apikey: Option<Apikey>,\n pub user: Option<User>,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub route: Route,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmAccessValidatorError {\n pub message: String,\n pub status: u32,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmAccessValidatorResponse {\n pub result: bool,\n pub error: Option<WasmAccessValidatorError>,\n}\n```\n\n### WasmRequestTransformer\n\nA request transformer plugin can be used to compose or transform the request that will be sent to the backend\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn transform_request(Json(_context): Json<WasmRequestTransformerContext>) -> FnResult<Json<WasmTransformerResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmRequestTransformerContext {\n pub snowflake: Option<String>,\n pub raw_request: OtoroshiRequest,\n pub otoroshi_request: OtoroshiRequest,\n pub backend: Backend,\n pub apikey: Option<Apikey>,\n pub user: Option<User>,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub route: Route,\n pub request_body_bytes: Option<Vec<u8>>,\n}\n```\n\n### WasmBackendCall\n\nA backend call plugin can be used to simulate a backend behavior in otoroshi. 
For instance the static backend of otoroshi returns the content of a file\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn call_backend(Json(_context): Json<WasmBackendContext>) -> FnResult<Json<WasmBackendResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmBackendContext {\n pub snowflake: Option<String>,\n pub backend: Backend,\n pub apikey: Option<Apikey>,\n pub user: Option<User>,\n pub raw_request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub route: Route,\n pub request_body_bytes: Option<Vec<u8>>,\n pub request: OtoroshiRequest,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmBackendResponse {\n pub headers: Option<HashMap<String, String>>,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n pub status: u32,\n}\n```\n\n### WasmResponseTransformer\n\nA response transformer plugin can be used to compose or transform the response that will be sent back to the client\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn transform_response(Json(_context): Json<WasmResponseTransformerContext>) -> FnResult<Json<WasmTransformerResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmResponseTransformerContext {\n pub snowflake: Option<String>,\n pub raw_response: OtoroshiResponse,\n pub otoroshi_response: OtoroshiResponse,\n pub apikey: Option<Apikey>,\n pub user: Option<User>,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub route: Route,\n pub response_body_bytes: Option<Vec<u8>>,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmTransformerResponse {\n pub headers: HashMap<String, String>,\n pub cookies: Value,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n}\n```\n\n### WasmSink\n\nA sink is a kind of plugin that can be used to respond to any unmatched request before otoroshi sends back a 404 response\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn sink_matches(Json(_context): Json<WasmSinkContext>) -> FnResult<Json<WasmSinkMatchesResponse>> {\n ///\n}\n\n#[plugin_fn]\npub fn sink_handle(Json(_context): Json<WasmSinkContext>) -> FnResult<Json<WasmSinkHandleResponse>> {\n 
///\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmSinkContext {\n pub snowflake: Option<String>,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub origin: String,\n pub status: u32,\n pub message: String,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmSinkMatchesResponse {\n pub result: bool,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmSinkHandleResponse {\n pub status: u32,\n pub headers: HashMap<String, String>,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n}\n```\n\n### WasmRequestHandler\n\nA request handler is a very special kind of plugin that can bypass the otoroshi proxy engine on specific domains and completely handle the request/response lifecycle on its own.\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn can_handle_request(Json(_context): Json<WasmRequestHandlerContext>) -> FnResult<Json<Value>> {\n ///\n}\n\n#[plugin_fn]\npub fn handle_request(Json(_context): Json<WasmRequestHandlerContext>) -> FnResult<Json<WasmRequestHandlerResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmRequestHandlerContext {\n pub request: RawRequest\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmRequestHandlerResponse {\n pub status: u32,\n pub headers: HashMap<String, String>,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n}\n```\n\n### WasmJob\n\nA job is a plugin that can run periodically and do whatever you want. 
Typically, the kubernetes plugins of otoroshi are jobs that periodically sync stuff between otoroshi and kubernetes using the kube-api\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn job_run(Json(_context): Json<WasmJobContext>) -> FnResult<Json<WasmJobResult>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmJobContext {\n pub attrs: Value,\n pub global_config: Value,\n pub snowflake: Option<String>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmJobResult {\n\n}\n```\n\n### Common types\n\n```rs\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value;\nuse std::collections::HashMap;\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct Backend {\n pub id: String,\n pub hostname: String,\n pub port: u32,\n pub tls: bool,\n pub weight: u32,\n pub protocol: String,\n pub ip_address: Option<String>,\n pub predicate: Value,\n pub tls_config: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct Apikey {\n #[serde(alias = \"clientId\")]\n pub client_id: String,\n #[serde(alias = \"clientName\")]\n pub client_name: String,\n pub metadata: HashMap<String, String>,\n pub tags: Vec<String>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct User {\n pub name: String,\n pub email: String,\n pub profile: Value,\n pub metadata: HashMap<String, String>,\n pub tags: Vec<String>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct RawRequest {\n pub id: u32,\n pub method: String,\n pub headers: HashMap<String, String>,\n pub cookies: Value,\n pub tls: bool,\n pub uri: String,\n pub path: String,\n pub version: String,\n pub has_body: bool,\n pub remote: String,\n pub client_cert_chain: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct Frontend {\n pub domains: Vec<String>,\n pub strict_path: Option<String>,\n pub exact: bool,\n pub headers: HashMap<String, String>,\n pub query: HashMap<String, String>,\n pub methods: Vec<String>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct HealthCheck {\n pub enabled: bool,\n pub url: String,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct RouteBackend {\n pub targets: Vec<Backend>,\n pub root: String,\n 
pub rewrite: bool,\n pub load_balancing: Value,\n pub client: Value,\n pub health_check: Option<HealthCheck>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct Route {\n pub id: String,\n pub name: String,\n pub description: String,\n pub tags: Vec<String>,\n pub metadata: HashMap<String, String>,\n pub enabled: bool,\n pub debug_flow: bool,\n pub export_reporting: bool,\n pub capture: bool,\n pub groups: Vec<String>,\n pub frontend: Frontend,\n pub backend: RouteBackend,\n pub backend_ref: Option<String>,\n pub plugins: Value,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct OtoroshiResponse {\n pub status: u32,\n pub headers: HashMap<String, String>,\n pub cookies: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct OtoroshiRequest {\n pub url: String,\n pub method: String,\n pub headers: HashMap<String, String>,\n pub version: String,\n pub client_cert_chain: Value,\n pub backend: Option<Backend>,\n pub cookies: Value,\n}\n```\n\n## Otoroshi interop. with host functions\n\notoroshi provides some host functions in order to make wasm interact with otoroshi internals. You can\n\n- access wasi resources\n- access http resources\n- access otoroshi internal state\n- access otoroshi internal configuration\n- access otoroshi static configuration\n- access plugin scoped in-memory key/value storage\n- access global in-memory key/value storage\n- access plugin scoped persistent key/value storage\n- access global persistent key/value storage\n\n### authorizations\n\nall the previously listed host functions are guarded by specific authorizations to avoid security issues with third party plugins. You can enable/disable the host functions from the wasm plugin entity\n\n@@@ div { .centered-img }\n\n@@@\n\n\n### host functions abi\n\nyou'll find here the raw signatures for the otoroshi host functions. 
we are currently in the process of writing higher level functions to hide the complexity.\n\nevery time you see the following signature: `(context: u64, size: u64) -> u64`, it means that otoroshi expects a pointer to the call context (which is a json string) and its size. The return value is a pointer to the response (which is also a json string).\n\nthe signature `(unused: u64) -> u64` means that no parameter is actually needed, but as the ABI technically requires one (hopefully not in the future), you have to pass something like `0` as parameter.\n\n```rust\nextern \"C\" {\n // log messages in otoroshi (log levels are 0 to 6 for trace, debug, info, warn, error, critical, max)\n fn proxy_log(logLevel: i32, message: u64, size: u64) -> i32;\n // trigger an otoroshi wasm event that can be exported through data exporters\n fn proxy_log_event(context: u64, size: u64) -> u64;\n // an http client\n fn proxy_http_call(context: u64, size: u64) -> u64;\n // access the current otoroshi state containing a snapshot of all otoroshi entities\n fn proxy_state(context: u64) -> u64;\n fn proxy_state_value(context: u64, size: u64) -> u64;\n // access the current otoroshi cluster configuration\n fn proxy_cluster_state(context: u64) -> u64;\n fn proxy_cluster_state_value(context: u64, size: u64) -> u64;\n // access the current otoroshi static configuration\n fn proxy_global_config(unused: u64) -> u64;\n // access the current otoroshi dynamic configuration\n fn proxy_config(unused: u64) -> u64;\n // access a persistent key/value store shared by every wasm plugins\n fn proxy_datastore_keys(context: u64, size: u64) -> u64;\n fn proxy_datastore_get(context: u64, size: u64) -> u64;\n fn proxy_datastore_exists(context: u64, size: u64) -> u64;\n fn proxy_datastore_pttl(context: u64, size: u64) -> u64;\n fn proxy_datastore_setnx(context: u64, size: u64) -> u64;\n fn proxy_datastore_del(context: u64, size: u64) -> u64;\n fn proxy_datastore_incrby(context: u64, size: u64) -> u64;\n fn 
proxy_datastore_pexpire(context: u64, size: u64) -> u64;\n fn proxy_datastore_all_matching(context: u64, size: u64) -> u64;\n // access a persistent key/value store for the current plugin instance only\n fn proxy_plugin_datastore_keys(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_get(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_exists(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_pttl(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_setnx(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_del(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_incrby(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_pexpire(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_all_matching(context: u64, size: u64) -> u64;\n // access an in memory key/value store for the current plugin instance only\n fn proxy_plugin_map_set(context: u64, size: u64) -> u64;\n fn proxy_plugin_map_get(context: u64, size: u64) -> u64;\n fn proxy_plugin_map(unused: u64) -> u64;\n // access an in memory key/value store shared by every wasm plugins\n fn proxy_global_map_set(context: u64, size: u64) -> u64;\n fn proxy_global_map_get(context: u64, size: u64) -> u64;\n fn proxy_global_map(unused: u64) -> u64;\n}\n```\n\nright now, when using the wasm manager, a default idiomatic implementation is provided for `TinyGo` and `Rust`\n\nhost.rs\n: @@snip [host.rs](../snippets/wasm-manager/host.rs) \n\nhost.go\n: @@snip [host.go](../snippets/wasm-manager/host.go) \n"}] \ No newline at end of file +[{"name":"about.md","id":"/about.md","url":"/about.html","title":"About Otoroshi","content":"# About Otoroshi\n\nAt the beginning of 2017, we needed to create a new environment to be able to create new \"digital\" products very quickly in an agile fashion at @link:[MAIF](https://www.maif.fr) { open=new }. 
Naturally we turned to PaaS solutions and chose the excellent @link:[Clever Cloud](https://www.clever-cloud.com) { open=new } product to run our apps. \n\nWe also decided that every feature team would have the freedom to choose its own technological stack to build its product. It was a nice move but it also introduced some challenges in terms of homogeneity for traceability, security, logging, ... because we did not want to force library usage in the products. We could have used something like the @link:[Service Mesh Pattern](http://philcalcado.com/2017/08/03/pattern_service_mesh.html) { open=new } but the deployment model of @link:[Clever Cloud](https://www.clever-cloud.com) { open=new } prevented us from doing it.\n\nThe right solution was to use a reverse proxy or some kind of API Gateway able to provide traceability, logging, security with apikeys, quotas, DNS as a service locator, etc. We needed something easy to use, with a human friendly UI, a nice API to extend its features, true hot reconfiguration, able to generate internal events for third party usage. A couple of solutions were available at that time, but none seemed to fit our needs: there was always something missing, too complicated for our needs or not playing well with the @link:[Clever Cloud](https://www.clever-cloud.com) { open=new } deployment model.\n\nAt some point, we tried to write a small prototype to explore what could be our dream reverse proxy. The design was very simple, there were some rough edges but every major feature needed was there waiting to be enhanced.\n\n**Otoroshi** was born and we decided to move ahead with our hairy monster :)\n\n## Philosophy \n\nEvery OSS product built at @link:[MAIF](https://www.maif.fr) { open=new } like the developer portal @link:[Daikoku](https://maif.github.io/daikoku) { open=new } or @link:[Izanami](https://maif.github.io/izanami) { open=new } follows a common philosophy. 
\n\n* the services or API provided should be **technology agnostic**.\n* **http first**: http is the right answer to the previous point \n* **api first**: the UI is just another client of the api. \n* **secured**: the services exposed need authentication for both humans and machines \n* **event based**: the services should expose a way to get notified of what happened inside. \n"},{"name":"api.md","id":"/api.md","url":"/api.html","title":"Admin REST API","content":"# Admin REST API\n\nOtoroshi provides a fully featured REST admin API to perform almost every operation possible in the Otoroshi dashboard. The Otoroshi dashboard is just a regular consumer of the admin API.\n\nUsing the admin API, you can do whatever you want and enhance your Otoroshi instances with a lot of features that will fit your needs.\n\n## Swagger descriptor\n\nThe Otoroshi admin API is described using the OpenAPI format and is available at:\n\nhttps://maif.github.io/otoroshi/manual/code/openapi.json\n\nEvery Otoroshi instance provides its own embedded OpenAPI descriptor at:\n\nhttp://otoroshi.oto.tools:8080/api/openapi.json\n\n## Swagger documentation\n\nYou can read the OpenAPI descriptor in a more human friendly fashion using `Swagger UI`. 
The swagger UI documentation of the Otoroshi admin API is available at:\n\nhttps://maif.github.io/otoroshi/swagger-ui/index.html\n\nEvery Otoroshi instance provides its own embedded swagger UI at:\n\nhttp://otoroshi.oto.tools:8080/api/swagger/ui\n\nYou can also read the swagger UI documentation of the Otoroshi admin API below:\n\n@@@ div { .swagger-frame }\n\n\n@@@\n"},{"name":"architecture.md","id":"/architecture.md","url":"/architecture.html","title":"Architecture","content":"# Architecture\n\nWhen we started the development of Otoroshi, we had several classical patterns in mind like `Service gateway`, `Service locator`, `Circuit breakers`, etc ...\n\nAt first we thought about providing a bunch of libraries that would be included in each microservice or app to perform these tasks. But the more we thought about it, the more it felt weird and unagile, and it also prevented us from using any technical stack we wanted to use. So we decided to change our approach to something more universal.\n\nWe chose to make Otoroshi the central part of our microservices system, something between a reverse-proxy, a service gateway and a service locator where each call to a microservice (even from another microservice) must pass through Otoroshi. There are multiple benefits to doing that: each call can be logged, audited, monitored, integrated with a circuit breaker, etc. without imposing libraries and technical stacks. Any service is exposed through its own domain and we rely only on DNS to handle the service location part. 
Any access to a service is secured by default with an api key and is supervised by a circuit breaker to avoid cascading failures.\n\n@@@ div { .centered-img }\n\n@@@\n\nOtoroshi tries to embrace our @ref:[global philosophy](./about.md#philosophy) by providing a full featured REST admin api and a gorgeous admin dashboard written in @link:[React](https://reactjs.org) { open=new } that uses the api, and by generating traffic events, alert events and audit events that can be consumed by several channels. Otoroshi also supports a bunch of datastores to better match different use cases.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"aws.md","id":"/deploy/aws.md","url":"/deploy/aws.html","title":"AWS - Elastic Beanstalk","content":"# AWS - Elastic Beanstalk\n\nNow you want to use Otoroshi on AWS. There are multiple options to deploy Otoroshi on AWS, \nfor instance:\n\n* You can deploy the @ref:[Docker image](../install/get-otoroshi.md#from-docker) on [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html)\n* You can create a basic [Amazon EC2](https://docs.aws.amazon.com/fr_fr/AWSEC2/latest/UserGuide/concepts.html) instance, access it via SSH, then \ndeploy the @ref:[otoroshi.jar](../install/get-otoroshi.md#from-jar-file) \n* Or you can use [AWS Elastic Beanstalk](https://aws.amazon.com/fr/elasticbeanstalk)\n\nIn this section we are going to cover how to deploy Otoroshi on [AWS Elastic Beanstalk](https://aws.amazon.com/fr/elasticbeanstalk). \n\n## AWS Elastic Beanstalk Overview\nUnlike Clever Cloud, to deploy an application on AWS Elastic Beanstalk, you don't link your app to your VCS repository, push your code and expect it to be built and run.\n\nAWS Elastic Beanstalk only does the run part. So you have to handle your own build pipeline and upload a Zip file containing your runnable, then AWS Elastic Beanstalk will take it from there. 
\n \nEg: for apps running on the JVM (Scala/Java/Kotlin) a Zip with the jar inside would suffice, for apps running in a Docker container, a Zip with the Dockerfile inside would be enough. \n\n\n## Prepare your deployment target\nActually, there are 2 options to build your target. \n\nEither you create a Dockerfile from this @ref:[Docker image](../install/get-otoroshi.md#from-docker), build a zip, and do all the Otoroshi custom configuration using ENVs.\n\nOr you download the @ref:[otoroshi.jar](../install/get-otoroshi.md#from-jar-file), do all the Otoroshi custom configuration using your own otoroshi.conf, and create a Dockerfile that runs the jar using your otoroshi.conf. \n\nFor the second option your Dockerfile would look like this:\n\n```dockerfile\nFROM openjdk:11\nVOLUME /tmp\nEXPOSE 8080\nADD otoroshi.jar otoroshi.jar\nADD otoroshi.conf otoroshi.conf\nRUN sh -c 'touch /otoroshi.jar'\nENV JAVA_OPTS=\"\"\nENTRYPOINT [ \"sh\", \"-c\", \"java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -Dconfig.file=/otoroshi.conf -jar /otoroshi.jar\" ]\n``` \n \nI'd recommend the second option.\n \nNow Zip your target (Jar + Conf + Dockerfile) and get ready for deployment. \n\n## Create an Otoroshi instance on AWS Elastic Beanstalk\nFirst, go to the [AWS Elastic Beanstalk Console](https://eu-west-3.console.aws.amazon.com/elasticbeanstalk/home?region=eu-west-3#/welcome), don't forget to sign in and make sure that you are in the right region (eg: eu-west-3 for Paris).\n\nHit **Get started** \n\n@@@ div { .centered-img }\n\n@@@\n\nSpecify the **Application name** of your application, Otoroshi for example.\n\n@@@ div { .centered-img }\n\n@@@\n \nChoose the **Platform** of the application you want to create, in your case use Docker.\n\nFor **Application code** choose **Upload your code** then hit **Upload**.\n\n@@@ div { .centered-img }\n\n@@@\n\nBrowse the zip created in the [previous section](#prepare-your-deployment-target) from your machine. 
\n\nAs you can see in the image above, you can also choose an S3 location: you can imagine that at the end of your build pipeline you upload your Zip to S3, and then get it from there (I wouldn't recommend that though).\n \nWhen the upload is done, hit **Configure more options**.\n \n@@@ div { .centered-img }\n\n@@@ \n \nRight now an AWS Elastic Beanstalk application has been created, and by default an environment named Otoroshi-env is being created as well.\n\nAWS Elastic Beanstalk can manage multiple environments of the same application, for instance environments can be (prod, preprod, experiments...). \n\nOtoroshi is a bit particular: it doesn't make much sense to have multiple environments, since Otoroshi will handle all the requests from/to backend services regardless of the environment. \n \nAs you can see in the image above, we are now configuring the Otoroshi-env, the one and only environment of Otoroshi.\n \nFor **Configuration presets**, choose custom configuration; now you have a load balancer for your environment with the capacity of at least one instance and at most four.\nI'd recommend at least 2 instances; to change that, on the **Capacity** card hit **Modify**. \n\n@@@ div { .centered-img }\n\n@@@\n\nChange the **Instances** to min 2, max 4 then hit **Save**. For the **Scaling triggers**, I'd keep the default values, but know that you can edit the capacity config any time you want, it only costs a redeploy, which will be done automatically by the way.\n \nThe instance size is t2.micro by default, which is a bit small for running Otoroshi, I'd recommend a t2.medium. 
\nOn the **Instances** card hit **Modify**.\n\n@@@ div { .centered-img }\n\n@@@\n\nFor **Instance type** choose t2.medium, then hit **Save**. There is no need to change the volume size, unless you have a lot of http call faults, which means a lot more logs; in that case the default volume size may not be enough.\n\nThe default environment created for Otoroshi, here Otoroshi-env, is a web server environment, which fits our case. The thing is that on AWS Elastic Beanstalk, by default, a web server environment for a Docker-based application runs behind an Nginx proxy.\nWe have to remove that proxy. So on the **Software** card hit **Modify**.\n \n@@@ div { .centered-img }\n\n@@@ \n \nFor **Proxy server** choose None then hit **Save**.\n\nAlso note that you can set Envs for Otoroshi on the same page (see image below). \n\n@@@ div { .centered-img }\n\n@@@ \n\nTo finalise the creation process, hit **Create app** on the bottom right.\n\nThe Otoroshi app is now created and running, which is cool, but we still have neither a **datastore** nor **https**.\n \n## Create an Otoroshi datastore on AWS ElastiCache\n\nBy default Otoroshi uses non-persistent memory to store its data, but it supports many kinds of datastores. In this section we will be covering the Redis datastore.
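As a preview of where this is heading, pointing Otoroshi at a Redis instance only takes a few settings. The host below is a made-up placeholder; the real value will be your ElastiCache **Primary Endpoint**:

```shell
# Hypothetical values: replace REDIS_HOST with your ElastiCache primary endpoint
export APP_STORAGE=redis
export REDIS_HOST='otoroshi-datastore.abc123.0001.euw3.cache.amazonaws.com'
export REDIS_PORT=6379
```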
\n\nBefore starting, note that using a datastore hosted by AWS is not mandatory; feel free to use your own if you like. But if you want to learn more about ElastiCache, this section may interest you; otherwise you can skip it.\n\nGo to [AWS ElastiCache](https://eu-west-3.console.aws.amazon.com/elasticache/home?region=eu-west-3#) and hit **Get Started Now**.\n\n@@@ div { .centered-img }\n\n@@@ \n\nFor **Cluster engine** keep Redis.\n\nChoose a **Name** for your datastore, for instance otoroshi-datastore.\n\nYou can keep all the other default values and hit **Create** on the bottom right of the page.\n\nOnce your Redis Cluster is created, it will look like the image below.\n\n@@@ div { .centered-img }\n\n@@@ \n\n\nFor applications in the same security group as your cluster, the Redis cluster is accessible via the **Primary Endpoint**. Don't worry, the default security group is fine; you don't need any configuration to access the cluster from Otoroshi.\n\nTo make Otoroshi use the created cluster, you can either use Envs `APP_STORAGE=redis`, `REDIS_HOST` and `REDIS_PORT`, or set `otoroshi.storage=redis`, `otoroshi.redis.host` and `otoroshi.redis.port` in your otoroshi.conf.\n\n## Create SSL certificate and configure your domain\n\nOtoroshi now has a datastore, but it is not yet ready for use.
\n\nIn order to get it ready you need to :\n\n* Configure Otoroshi with your domain \n* Create a wildcard SSL certificate for your domain\n* Configure Otoroshi AWS Elastic Beanstalk instance with the SSL certificate \n* Configure your DNS to redirect all traffic on your domain to Otoroshi \n \n### Configure Otoroshi with your domain\n\nYou can use ENVs or you can use a custom otoroshi.conf in your Docker container.\n\nFor the second option your otoroshi.conf would look like this :\n\n``` \n include \"application.conf\"\n http.port = 8080\n app {\n env = \"prod\"\n domain = \"mysubdomain.oto.tools\"\n rootScheme = \"https\"\n snowflake {\n seed = 0\n }\n events {\n maxSize = 1000\n }\n backoffice {\n subdomain = \"otoroshi\"\n session {\n exp = 86400000\n }\n }\n \n storage = \"redis\"\n redis {\n host=\"myredishost\"\n port=myredisport\n }\n \n privateapps {\n subdomain = \"privateapps\"\n }\n \n adminapi {\n targetSubdomain = \"otoroshi-admin-internal-api\"\n exposedSubdomain = \"otoroshi-api\"\n defaultValues {\n backOfficeGroupId = \"admin-api-group\"\n backOfficeApiKeyClientId = \"admin-client-id\"\n backOfficeApiKeyClientSecret = \"admin-client-secret\"\n backOfficeServiceId = \"admin-api-service\"\n }\n proxy {\n https = true\n local = false\n }\n }\n claim {\n sharedKey = \"myclaimsharedkey\"\n }\n }\n \n play.http {\n session {\n secure = false\n httpOnly = true\n maxAge = 2147483646\n domain = \".mysubdomain.oto.tools\"\n cookieName = \"oto-sess\"\n }\n }\n``` \n\n### Create a wildcard SSL certificate for your domain\n\nGo to [AWS Certificate Manager](https://eu-west-3.console.aws.amazon.com/acm/home?region=eu-west-3#/firstrun).\n\nBelow **Provision certificates** hit **Get started**.\n\n@@@ div { .centered-img }\n\n@@@ \n \nKeep the default selected value **Request a public certificate** and hit **Request a certificate**.\n \n@@@ div { .centered-img }\n\n@@@ \n\nPut your **Domain name**, use *. 
for wildcard, for instance *\\*.mysubdomain.oto.tools*, then hit **Next**.\n\n@@@ div { .centered-img }\n\n@@@ \n\nYou can choose between **Email validation** and **DNS validation**, I'd recommend **DNS validation**, then hit **Review**. \n \n@@@ div { .centered-img }\n\n@@@ \n \nVerify that you did put the right **Domain name** then hit **Confirm and request**. \n\n@@@ div { .centered-img }\n\n@@@\n \nAs you see in the image above, to let Amazon do the validation you have to add the `CNAME` record to your DNS configuration. Normally this operation takes around one day.\n \n### Configure Otoroshi AWS Elastic Beanstalk instance with the SSL certificate \n\nOnce the certificate is validated, you need to modify the configuration of Otoroshi-env to add the SSL certificate for HTTPS. \nFor that you need to go to [AWS Elastic Beanstalk applications](https://eu-west-3.console.aws.amazon.com/elasticbeanstalk/home?region=eu-west-3#/applications),\nhit **Otoroshi-env**, then on the left side hit **Configuration**, then on the **Load balancer** card hit **Modify**.\n\n@@@ div { .centered-img }\n\n@@@\n\nIn the **Application Load Balancer** section hit **Add listener**.\n\n@@@ div { .centered-img }\n\n@@@\n\nFill the popup as the image above, then hit **Add**. 
\n\nYou should now be seeing something like this: \n \n@@@ div { .centered-img }\n\n@@@ \n \n \nMake sure that your listener is enabled, and on the bottom right of the page hit **Apply**.\n\nNow you have **https**, so let's use Otoroshi.\n\n### Configure your DNS to redirect all traffic on your domain to Otoroshi\n \nIt's actually pretty simple, you just need to add a `CNAME` record to your DNS configuration that redirects *\*.mysubdomain.oto.tools* to the DNS name of Otoroshi's load balancer.\n\nTo find the DNS name of Otoroshi's load balancer go to [AWS EC2](https://eu-west-3.console.aws.amazon.com/ec2/v2/home?region=eu-west-3#LoadBalancers:tag:elasticbeanstalk:environment-name=Otoroshi-env;sort=loadBalancerName)\n\nYou should find something like this: \n \n@@@ div { .centered-img }\n\n@@@ \n\nThere is your DNS name, so add your `CNAME` record. \n \nOnce all these steps are done, the AWS Elastic Beanstalk Otoroshi instance will be handling all the requests on your domain. ;) \n"},{"name":"clever-cloud.md","id":"/deploy/clever-cloud.md","url":"/deploy/clever-cloud.html","title":"Clever-Cloud","content":"# Clever-Cloud\n\nNow you want to use Otoroshi on Clever Cloud. Otoroshi has been designed and created to run on Clever Cloud, and a lot of choices were made because of how Clever Cloud works.\n\n## Create an Otoroshi instance on CleverCloud\n\nIf you want to customize the configuration, @ref:[use env. variables](../install/setup-otoroshi.md#configuration-with-env-variables); you can use [the example provided below](#example-of-clevercloud-env-variables)\n\nCreate a new CleverCloud app based on a clevercloud git repo (not empty) or a github project of your own (not empty).\n\n@@@ div { .centered-img }\n\n@@@\n\nThen choose what kind of app you want to create; for Otoroshi, choose `Java + Jar`\n\n@@@ div { .centered-img }\n\n@@@\n\nNext, choose the instance size and auto-scaling.
Otoroshi can run on small instances, especially if you just want to test it.\n\n@@@ div { .centered-img }\n\n@@@\n\nFinally, choose a name for your app.\n\n@@@ div { .centered-img }\n\n@@@\n\nNow you just need to customize the environment variables.\n\nAt this point, you can also add other env. variables to configure Otoroshi, like in [the example provided below](#example-of-clevercloud-env-variables)\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can also use expert mode:\n\n@@@ div { .centered-img }\n\n@@@\n\nNow your app is ready; don't forget to add a custom domain name on the CleverCloud app matching the Otoroshi app domain. \n\n## Example of CleverCloud env. variables\n\nYou can add more env variables to customize your Otoroshi instance like the following. Use the expert mode to copy/paste all the values in one shot. If you want a real datastore, create a Redis addon on Clever Cloud, link it to your otoroshi app and change the `APP_STORAGE` variable to `redis`.\n\n
\n\n
\n```\nADMIN_API_CLIENT_ID=xxxx\nADMIN_API_CLIENT_SECRET=xxxxx\nADMIN_API_GROUP=xxxxxx\nADMIN_API_SERVICE_ID=xxxxxxx\nCLAIM_SHAREDKEY=xxxxxxx\nOTOROSHI_INITIAL_ADMIN_LOGIN=youremailaddress\nOTOROSHI_INITIAL_ADMIN_PASSWORD=yourpassword\nPLAY_CRYPTO_SECRET=xxxxxx\nSESSION_NAME=oto-session\nAPP_DOMAIN=yourdomain.tech\nAPP_ENV=prod\nAPP_STORAGE=inmemory\nAPP_ROOT_SCHEME=https\nCC_PRE_BUILD_HOOK=curl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/${latest_otoroshi_version}/otoroshi.jar'\nCC_JAR_PATH=./otoroshi.jar\nCC_JAVA_VERSION=11\nPORT=8080\nSESSION_DOMAIN=.yourdomain.tech\nSESSION_MAX_AGE=604800000\nSESSION_SECURE_ONLY=true\nUSER_AGENT=otoroshi\nMAX_EVENTS_SIZE=1\nWEBHOOK_SIZE=100\nAPP_BACKOFFICE_SESSION_EXP=86400000\nAPP_PRIVATEAPPS_SESSION_EXP=86400000\nENABLE_METRICS=true\nOTOROSHI_ANALYTICS_PRESSURE_ENABLED=true\nUSE_CACHE=true\n```\n
"},{"name":"clustering.md","id":"/deploy/clustering.md","url":"/deploy/clustering.html","title":"Otoroshi clustering","content":"# Otoroshi clustering\n\nOtoroshi can work as a cluster by default as you can spin many Otoroshi servers using the same datastore or datastore cluster. In that case any instance is capable of serving services, Otoroshi admin UI, Otoroshi admin API, etc.\n\nBut sometimes, this is not enough. So Otoroshi provides an additional clustering model named `Leader / Workers` where there is a leader cluster ([control plane](https://en.wikipedia.org/wiki/Control_plane)), composed of Otoroshi instances backed by a datastore like Redis, PostgreSQL or Cassandra, that is in charge of all `writes` to the datastore through Otoroshi admin UI and API, and a worker cluster ([data plane](https://en.wikipedia.org/wiki/Forwarding_plane)) composed of horizontally scalable Otoroshi instances, backed by a super fast in memory datastore, with the sole purpose of routing traffic to your services based on data synced from the leader cluster. With this distributed Otoroshi version, you can reach your goals of high availability, scalability and security.\n\nOtoroshi clustering only uses http internally (right now) to make communications between leaders and workers instances so it is fully compatible with PaaS providers like [Clever-Cloud](https://www.clever-cloud.com/en/) that only provide one external port for http traffic.\n\n@@@ div { .centered-img }\n\n\n*Fig. 1: Simplified view*\n@@@\n\n@@@ div { .centered-img }\n\n\n*Fig. 2: Deployment view*\n@@@\n\n## Cluster configuration\n\n```hocon\notoroshi {\n cluster {\n mode = \"leader\" # can be \"off\", \"leader\", \"worker\"\n compression = 4 # compression of the data sent between leader cluster and worker cluster. 
From -1 (disabled) to 9\n leader {\n name = ${?CLUSTER_LEADER_NAME} # name of the instance, if none, it will be generated\n urls = [\"http://127.0.0.1:8080\"] # urls to contact the leader cluster\n host = \"otoroshi-api.oto.tools\" # host of the otoroshi api in the leader cluster\n clientId = \"apikey-id\" # otoroshi api client id\n clientSecret = \"secret\" # otoroshi api client secret\n cacheStateFor = 4000 # state is cached during (ms)\n }\n worker {\n name = ${?CLUSTER_WORKER_NAME} # name of the instance, if none, it will be generated\n retries = 3 # number of retries when calling leader cluster\n timeout = 2000 # timeout when calling leader cluster\n state {\n retries = ${otoroshi.cluster.worker.retries} # number of retries when calling leader cluster on state sync\n pollEvery = 10000 # interval of time (ms) between 2 state sync\n timeout = ${otoroshi.cluster.worker.timeout} # timeout when calling leader cluster on state sync\n }\n quotas {\n retries = ${otoroshi.cluster.worker.retries} # number of retries when calling leader cluster on quotas sync\n pushEvery = 2000 # interval of time (ms) between 2 quotas sync\n timeout = ${otoroshi.cluster.worker.timeout} # timeout when calling leader cluster on quotas sync\n }\n }\n }\n}\n```\n\nyou can also use many env. 
variables to configure the Otoroshi cluster\n\n```hocon\notoroshi {\n cluster {\n mode = ${?CLUSTER_MODE}\n compression = ${?CLUSTER_COMPRESSION}\n leader {\n name = ${?CLUSTER_LEADER_NAME}\n host = ${?CLUSTER_LEADER_HOST}\n url = ${?CLUSTER_LEADER_URL}\n clientId = ${?CLUSTER_LEADER_CLIENT_ID}\n clientSecret = ${?CLUSTER_LEADER_CLIENT_SECRET}\n groupingBy = ${?CLUSTER_LEADER_GROUP_BY}\n cacheStateFor = ${?CLUSTER_LEADER_CACHE_STATE_FOR}\n stateDumpPath = ${?CLUSTER_LEADER_DUMP_PATH}\n }\n worker {\n name = ${?CLUSTER_WORKER_NAME}\n retries = ${?CLUSTER_WORKER_RETRIES}\n timeout = ${?CLUSTER_WORKER_TIMEOUT}\n state {\n retries = ${?CLUSTER_WORKER_STATE_RETRIES}\n pollEvery = ${?CLUSTER_WORKER_POLL_EVERY}\n timeout = ${?CLUSTER_WORKER_POLL_TIMEOUT}\n }\n quotas {\n retries = ${?CLUSTER_WORKER_QUOTAS_RETRIES}\n pushEvery = ${?CLUSTER_WORKER_PUSH_EVERY}\n timeout = ${?CLUSTER_WORKER_PUSH_TIMEOUT}\n }\n }\n }\n}\n```\n\n@@@ warning\nYou **should** expose the Otoroshi API used for data sync over HTTPS, as sensitive information is exchanged between the control plane and the data plane.\n@@@\n\n@@@ warning\nYou **must** have the same cluster configuration on every Otoroshi instance (worker/leader), with only the names and mode changed for each instance.
Some things in leader/worker are computed using the configuration of their counterpart worker/leader.\n@@@\n\n## Cluster UI\n\nOnce an Otoroshi instance is launched as a cluster leader, a new row of live metrics tiles will be available on the home page of the Otoroshi admin UI.\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can also access a more detailed view of the cluster at `Settings (cog icon) / Cluster View`\n\n@@@ div { .centered-img }\n\n@@@\n\n## Run examples\n\nfor the leader \n\n```sh\njava -Dhttp.port=8091 -Dhttps.port=9091 -Dotoroshi.cluster.mode=leader -jar otoroshi.jar\n```\n\nfor a worker\n\n```sh\njava -Dhttp.port=8092 -Dhttps.port=9092 -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0=http://127.0.0.1:8091 -jar otoroshi.jar\n```\n\n## Setup a cluster by example\n\nIf you want to see how to set up an otoroshi cluster, just check @ref:[the clustering tutorial](../how-to-s/setup-otoroshi-cluster.md)"},{"name":"index.md","id":"/deploy/index.md","url":"/deploy/index.html","title":"Deploy to production","content":"# Deploy to production\n\nNow it's time to deploy Otoroshi in production; in this chapter we will see what kind of things you can do.\n\nOtoroshi can run wherever you want, even on a raspberry pi (Cluster^^) ;)\n\n@@@div { .plugin .platform }\n\n## Clever Cloud\n\nOtoroshi provides an integration to easily create services based on applications deployed on your Clever Cloud account.\n\n\n@ref:[Documentation](./clever-cloud.md)\n@@@\n\n@@@div { .plugin .platform } \n## Kubernetes\nStarting at version 1.5.0, Otoroshi provides native Kubernetes support.\n\n\n\n@ref:[Documentation](./kubernetes.md)\n@@@\n\n@@@div { .plugin .platform } \n## AWS Elastic Beanstalk\n\nRun Otoroshi on AWS Elastic Beanstalk\n\n\n\n@ref:[Tutorial](./aws.md)\n@@@\n\n@@@div { .plugin .platform } \n## Amazon ECS\n\nDeploy the Otoroshi Docker image using Amazon Elastic Container
Service\n\n\n\n@link:[Tutorial](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker)\n\n@@@\n\n@@@div { .plugin .platform }\n## GCE\n\nDeploy the Docker image using Google Compute Engine container integration\n\n\n\n@link:[Documentation](https://cloud.google.com/compute/docs/containers/deploying-containers)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker)\n\n@@@\n\n@@@div { .plugin .platform } \n## Azure\n\nDeploy the Docker image using Azure Container Service\n\n\n\n@link:[Documentation](https://azure.microsoft.com/en-us/services/container-service/)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker) \n@@@\n\n@@@div { .plugin .platform } \n## Heroku\n\nDeploy the Docker image using Docker integration\n\n\n\n@link:[Documentation](https://devcenter.heroku.com/articles/container-registry-and-runtime)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker)\n@@@\n\n@@@div { .plugin .platform } \n## CloudFoundry\n\nDeploy the Docker image using -Docker integration\n\n\n\n@link:[Documentation](https://docs.cloudfoundry.org/adminguide/docker.html)\n@ref:[Docker image](../install/get-otoroshi.md#from-docker)\n@@@\n\n@@@div { .plugin .platform .platform-actions-column } \n## Your own infrastructure\n\nAs Otoroshi is a Play Framework application, you can read the doc about putting a `Play` app in production.\n\nDownload the latest Otoroshi distribution, unzip it, customize it and run it.\n\n@link:[Play Framework](https://www.playframework.com)\n@link:[Production Configuration](https://www.playframework.com/documentation/2.6.x/ProductionConfiguration)\n@ref:[Otoroshi distribution](../install/get-otoroshi.md#from-zip)\n@@@\n\n@@@div { .break }\n## Scaling and clustering in production\n@@@\n\n\n@@@div { .plugin .platform .dark-platform } \n## Clustering\n\nDeploy Otoroshi as a cluster of leaders and workers.\n\n\n@ref:[Documentation](./clustering.md)\n@@@\n\n@@@div 
{ .plugin .platform .dark-platform } \n## Scaling Otoroshi\n\nOtoroshi is designed to be reasonably easy to scale and be highly available.\n\n\n@ref:[Documentation](./scaling.md) \n@@@\n\n@@@ index\n\n* [Clustering](./clustering.md)\n* [Kubernetes](./kubernetes.md)\n* [Clever Cloud](./clever-cloud.md)\n* [AWS - Elastic Beanstalk](./aws.md)\n* [Scaling](./scaling.md) \n\n@@@\n"},{"name":"kubernetes.md","id":"/deploy/kubernetes.md","url":"/deploy/kubernetes.html","title":"Kubernetes","content":"# Kubernetes\n\nStarting at version 1.5.0, Otoroshi provides a native Kubernetes support. Multiple otoroshi jobs (that are actually kubernetes controllers) are provided in order to\n\n- sync kubernetes secrets of type `kubernetes.io/tls` to otoroshi certificates\n- act as a standard ingress controller (supporting `Ingress` objects)\n- provide Custom Resource Definitions (CRDs) to manage Otoroshi entities from Kubernetes and act as an ingress controller with its own resources\n\n## Installing otoroshi on your kubernetes cluster\n\n@@@ warning\nYou need to have cluster admin privileges to install otoroshi and its service account, role mapping and CRDs on a kubernetes cluster. We also advise you to create a dedicated namespace (you can name it `otoroshi` for example) to install otoroshi\n@@@\n\nIf you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.\n\nYou can also create a `kustomization.yaml` file with a remote base\n\n```yaml\nbases:\n- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v16.5.0-dev\n```\n\nThen deploy it with `kubectl apply -k ./overlays/myoverlay`. 
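If you keep your own customizations next to the remote base, a minimal overlay layout could look like the following sketch (the patch file name `deployment-patch.yaml` is an assumption; name it after whatever you override):

```yaml
# ./overlays/myoverlay/kustomization.yaml (hypothetical layout)
bases:
- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v16.5.0-dev
patchesStrategicMerge:
- deployment-patch.yaml
```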
\n\nYou can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster\n\n```sh\nhelm repo add otoroshi https://maif.github.io/otoroshi/helm\nhelm install my-otoroshi otoroshi/otoroshi\n```\n\nBelow, you will find examples of deployments. Do not hesitate to adapt them to your needs. Those descriptors have value placeholders that you will need to replace with actual values, like \n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: ${domain}\n```\n\nYou will have to edit it to make it look like\n\n```yaml\n env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'apis.my.domain'\n```\n\nIf you don't want to use placeholders and environment variables, you can create a secret containing the configuration file of otoroshi\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: otoroshi-config\ntype: Opaque\nstringData:\n oto.conf: >\n include \"application.conf\"\n app {\n storage = \"redis\"\n domain = \"apis.my.domain\"\n }\n```\n\nand mount it in the otoroshi container\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: otoroshi-deployment\nspec:\n selector:\n matchLabels:\n run: otoroshi-deployment\n template:\n metadata:\n labels:\n run: otoroshi-deployment\n spec:\n serviceAccountName: otoroshi-admin-user\n terminationGracePeriodSeconds: 60\n hostNetwork: false\n containers:\n - image: maif/otoroshi:16.5.0-dev\n imagePullPolicy: IfNotPresent\n name: otoroshi\n args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']\n ports:\n - containerPort: 8080\n name: \"http\"\n protocol: TCP\n - containerPort: 8443\n name: \"https\"\n protocol: TCP\n volumeMounts:\n - name: otoroshi-config\n mountPath: \"/usr/app/otoroshi/conf\"\n readOnly: true\n volumes:\n - name: otoroshi-config\n secret:\n secretName: otoroshi-config\n ...\n```\n\nYou can also create several secrets for each placeholder, mount them to the otoroshi container then use their file path as value\n\n```yaml\n 
env:\n - name: APP_STORAGE_ROOT\n value: otoroshi\n - name: APP_DOMAIN\n value: 'file:///the/path/of/the/secret/file'\n```\n\nYou can use the same trick in the configuration file itself.\n\n### Note on bare metal kubernetes cluster installation\n\n@@@ note\nBare metal kubernetes clusters don't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters, or you can use and customize the examples below.\n@@@\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or anything similar), as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@\n\n### Common manifests\n\nThe following manifests are always needed. They create otoroshi CRDs, tokens, roles, etc. The Redis deployment is not mandatory, it's just an example. You can use your own existing setup.\n\nrbac.yaml\n: @@snip [rbac.yaml](../snippets/kubernetes/kustomize/base/rbac.yaml) \n\ncrds.yaml\n: @@snip [crds.yaml](../snippets/kubernetes/kustomize/base/crds.yaml) \n\nredis.yaml\n: @@snip [redis.yaml](../snippets/kubernetes/kustomize/base/redis.yaml) \n\n\n### Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type `LoadBalancer` to expose otoroshi to the rest of the world.
You have to set up your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple/dns.example) \n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster\n\nHere we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below). \n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal/dns.example) \n\n\n### Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have one otoroshi instance on each kubernetes node (with the `otoroshi-kind: instance` label) with redis persistence. The otoroshi instances are exposed as `hostPort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below).
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/deployment.yaml) \n\nhaproxy.example\n: @@snip [haproxy.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/haproxy.example) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/simple-baremetal-daemonset/dns.example) \n\n### Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster\n\nHere we have 2 replicas of an otoroshi leader connected to a redis instance and 2 replicas of an otoroshi worker connected to the leader. We use a service of type `LoadBalancer` to expose otoroshi leader/worker to the rest of the world. You have to setup your DNS to bind otoroshi domain names to the `LoadBalancer` external `CNAME` (see the example below)\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster/deployment.yaml) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster\n\nHere we have 2 replicas of otoroshi leader connected to the same redis instance and 2 replicas for otoroshi worker. The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to setup your DNS to bind otoroshi domain names to your loadbalancer (see the example below). 
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal/dns.example) \n\n### Deploy an otoroshi cluster on a bare metal kubernetes cluster using a DaemonSet\n\nHere we have 1 otoroshi leader instance on each kubernetes node (with the `otoroshi-kind: leader` label) connected to the same redis instance and 1 otoroshi worker instance on each kubernetes node (with the `otoroshi-kind: worker` label). The otoroshi instances are exposed as `nodePort` so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below).
\n\ndeployment.yaml\n: @@snip [deployment.yaml](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/deployment.yaml) \n\nnginx.example\n: @@snip [nginx.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/nginx.example) \n\ndns.example\n: @@snip [dns.example](../snippets/kubernetes/kustomize/overlays/cluster-baremetal-daemonset/dns.example) \n\n## Using Otoroshi as an Ingress Controller\n\nIf you want to use Otoroshi as an [Ingress Controller](https://kubernetes.io/fr/docs/concepts/services-networking/ingress/), just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Ingress Controller`.\n\nThen add the following configuration for the job (with your own tweaks of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nThe configuration can have the following values:\n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": true, // sync ingresses\n \"crds\": false, // sync crds\n 
\"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities. can be \"default\" to use otoroshi default templates\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {},\n \"data-exporter\": {},\n \"routes\": {},\n \"route-compositions\": {},\n \"backends\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nNow you can deploy your first service ;)\n\n### Deploy an ingress route\n\nnow let's say you want to deploy an http service and route to the outside world through otoroshi\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: kennethreitz/httpbin\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 80\n name: \"http\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8080\n targetPort: http\n name: http\n selector:\n run: 
http-app-deployment\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nonce deployed, otoroshi will sync with kubernetes and create the corresponding service to route your app. You will be able to access your app with\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get\n```\n\n### Support for Ingress Classes\n\nSince Kubernetes 1.18, you can use `IngressClass` type of manifest to specify which ingress controller you want to use for a deployment (https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#extended-configuration-with-ingress-classes). Otoroshi is fully compatible with this new manifest `kind`. To use it, configure the Ingress job to match your controller\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClasses\": [\"otoroshi.io/ingress-controller\"],\n ...\n }\n}\n```\n\nthen you have to deploy an `IngressClass` to declare Otoroshi as an ingress controller\n\n```yaml\napiVersion: \"networking.k8s.io/v1beta1\"\nkind: \"IngressClass\"\nmetadata:\n name: \"otoroshi-ingress-controller\"\nspec:\n controller: \"otoroshi.io/ingress-controller\"\n parameters:\n apiGroup: \"proxy.otoroshi.io/v1alpha\"\n kind: \"IngressParameters\"\n name: \"otoroshi-ingress-controller\"\n```\n\nand use it in your `Ingress`\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\nspec:\n ingressClassName: otoroshi-ingress-controller\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\n### Use multiple ingress controllers\n\nIt is of course 
possible to use multiple ingress controllers at the same time (https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers) using the annotation `kubernetes.io/ingress.class`. By default, otoroshi reacts to the class `otoroshi`, but you can make it the default ingress controller with the following config\n\n```json\n{\n \"KubernetesConfig\": {\n ...\n \"ingressClass\": \"*\",\n ...\n }\n}\n```\n\n### Supported annotations\n\nif you need to customize the service descriptor behind an ingress rule, you can use some annotations. If you need better customisation, just go to the CRDs part. The following annotations are supported:\n\n- `ingress.otoroshi.io/groups`\n- `ingress.otoroshi.io/group`\n- `ingress.otoroshi.io/groupId`\n- `ingress.otoroshi.io/name`\n- `ingress.otoroshi.io/targetsLoadBalancing`\n- `ingress.otoroshi.io/stripPath`\n- `ingress.otoroshi.io/enabled`\n- `ingress.otoroshi.io/userFacing`\n- `ingress.otoroshi.io/privateApp`\n- `ingress.otoroshi.io/forceHttps`\n- `ingress.otoroshi.io/maintenanceMode`\n- `ingress.otoroshi.io/buildMode`\n- `ingress.otoroshi.io/strictlyPrivate`\n- `ingress.otoroshi.io/sendOtoroshiHeadersBack`\n- `ingress.otoroshi.io/readOnly`\n- `ingress.otoroshi.io/xForwardedHeaders`\n- `ingress.otoroshi.io/overrideHost`\n- `ingress.otoroshi.io/allowHttp10`\n- `ingress.otoroshi.io/logAnalyticsOnServer`\n- `ingress.otoroshi.io/useAkkaHttpClient`\n- `ingress.otoroshi.io/useNewWSClient`\n- `ingress.otoroshi.io/tcpUdpTunneling`\n- `ingress.otoroshi.io/detectApiKeySooner`\n- `ingress.otoroshi.io/letsEncrypt`\n- `ingress.otoroshi.io/publicPatterns`\n- `ingress.otoroshi.io/privatePatterns`\n- `ingress.otoroshi.io/additionalHeaders`\n- `ingress.otoroshi.io/additionalHeadersOut`\n- `ingress.otoroshi.io/missingOnlyHeadersIn`\n- `ingress.otoroshi.io/missingOnlyHeadersOut`\n- `ingress.otoroshi.io/removeHeadersIn`\n- `ingress.otoroshi.io/removeHeadersOut`\n- `ingress.otoroshi.io/headersVerification`\n- 
`ingress.otoroshi.io/matchingHeaders`\n- `ingress.otoroshi.io/ipFiltering.whitelist`\n- `ingress.otoroshi.io/ipFiltering.blacklist`\n- `ingress.otoroshi.io/api.exposeApi`\n- `ingress.otoroshi.io/api.openApiDescriptorUrl`\n- `ingress.otoroshi.io/healthCheck.enabled`\n- `ingress.otoroshi.io/healthCheck.url`\n- `ingress.otoroshi.io/jwtVerifier.ids`\n- `ingress.otoroshi.io/jwtVerifier.enabled`\n- `ingress.otoroshi.io/jwtVerifier.excludedPatterns`\n- `ingress.otoroshi.io/authConfigRef`\n- `ingress.otoroshi.io/redirection.enabled`\n- `ingress.otoroshi.io/redirection.code`\n- `ingress.otoroshi.io/redirection.to`\n- `ingress.otoroshi.io/clientValidatorRef`\n- `ingress.otoroshi.io/transformerRefs`\n- `ingress.otoroshi.io/transformerConfig`\n- `ingress.otoroshi.io/accessValidator.enabled`\n- `ingress.otoroshi.io/accessValidator.excludedPatterns`\n- `ingress.otoroshi.io/accessValidator.refs`\n- `ingress.otoroshi.io/accessValidator.config`\n- `ingress.otoroshi.io/preRouting.enabled`\n- `ingress.otoroshi.io/preRouting.excludedPatterns`\n- `ingress.otoroshi.io/preRouting.refs`\n- `ingress.otoroshi.io/preRouting.config`\n- `ingress.otoroshi.io/issueCert`\n- `ingress.otoroshi.io/issueCertCA`\n- `ingress.otoroshi.io/gzip.enabled`\n- `ingress.otoroshi.io/gzip.excludedPatterns`\n- `ingress.otoroshi.io/gzip.whiteList`\n- `ingress.otoroshi.io/gzip.blackList`\n- `ingress.otoroshi.io/gzip.bufferSize`\n- `ingress.otoroshi.io/gzip.chunkedThreshold`\n- `ingress.otoroshi.io/gzip.compressionLevel`\n- `ingress.otoroshi.io/cors.enabled`\n- `ingress.otoroshi.io/cors.allowOrigin`\n- `ingress.otoroshi.io/cors.exposeHeaders`\n- `ingress.otoroshi.io/cors.allowHeaders`\n- `ingress.otoroshi.io/cors.allowMethods`\n- `ingress.otoroshi.io/cors.excludedPatterns`\n- `ingress.otoroshi.io/cors.maxAge`\n- `ingress.otoroshi.io/cors.allowCredentials`\n- `ingress.otoroshi.io/clientConfig.useCircuitBreaker`\n- `ingress.otoroshi.io/clientConfig.retries`\n- `ingress.otoroshi.io/clientConfig.maxErrors`\n- 
`ingress.otoroshi.io/clientConfig.retryInitialDelay`\n- `ingress.otoroshi.io/clientConfig.backoffFactor`\n- `ingress.otoroshi.io/clientConfig.connectionTimeout`\n- `ingress.otoroshi.io/clientConfig.idleTimeout`\n- `ingress.otoroshi.io/clientConfig.callAndStreamTimeout`\n- `ingress.otoroshi.io/clientConfig.callTimeout`\n- `ingress.otoroshi.io/clientConfig.globalTimeout`\n- `ingress.otoroshi.io/clientConfig.sampleInterval`\n- `ingress.otoroshi.io/enforceSecureCommunication`\n- `ingress.otoroshi.io/sendInfoToken`\n- `ingress.otoroshi.io/sendStateChallenge`\n- `ingress.otoroshi.io/secComHeaders.claimRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateRequestName`\n- `ingress.otoroshi.io/secComHeaders.stateResponseName`\n- `ingress.otoroshi.io/secComTtl`\n- `ingress.otoroshi.io/secComVersion`\n- `ingress.otoroshi.io/secComInfoTokenVersion`\n- `ingress.otoroshi.io/secComExcludedPatterns`\n- `ingress.otoroshi.io/secComSettings.size`\n- `ingress.otoroshi.io/secComSettings.secret`\n- `ingress.otoroshi.io/secComSettings.base64`\n- `ingress.otoroshi.io/secComUseSameAlgo`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.size`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeOtoToBack.base64`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.size`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.secret`\n- `ingress.otoroshi.io/secComAlgoChallengeBackToOto.base64`\n- `ingress.otoroshi.io/secComAlgoInfoToken.size`\n- `ingress.otoroshi.io/secComAlgoInfoToken.secret`\n- `ingress.otoroshi.io/secComAlgoInfoToken.base64`\n- `ingress.otoroshi.io/securityExcludedPatterns`\n\nfor more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n\nwith the previous example, the ingress does not define any apikey, so the route is public. 
If you want to enable apikeys on it, you can deploy the following descriptor\n\n```yaml\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: http-app-ingress\n annotations:\n kubernetes.io/ingress.class: otoroshi\n ingress.otoroshi.io/group: http-app-group\n ingress.otoroshi.io/forceHttps: 'true'\n ingress.otoroshi.io/sendOtoroshiHeadersBack: 'true'\n ingress.otoroshi.io/overrideHost: 'true'\n ingress.otoroshi.io/allowHttp10: 'false'\n ingress.otoroshi.io/publicPatterns: ''\nspec:\n tls:\n - hosts:\n - httpapp.foo.bar\n secretName: http-app-cert\n rules:\n - host: httpapp.foo.bar\n http:\n paths:\n - path: /\n backend:\n serviceName: http-app-service\n servicePort: 8080\n```\n\nnow you can use an existing apikey in the `http-app-group` to access your app\n\n```sh\ncurl -X GET https://httpapp.foo.bar/get -u existing-apikey-1:secret-1\n```\n\n## Use Otoroshi CRDs for a better/full integration\n\nOtoroshi provides some Custom Resource Definitions for kubernetes in order to manage Otoroshi related entities in kubernetes\n\n- `service-groups`\n- `service-descriptors`\n- `apikeys`\n- `certificates`\n- `global-configs`\n- `jwt-verifiers`\n- `auth-modules`\n- `scripts`\n- `tcp-services`\n- `data-exporters`\n- `admins`\n- `teams`\n- `organizations`\n\nusing CRDs, you will be able to deploy and manage those entities from kubectl or the kubernetes api like\n\n```sh\nsudo kubectl get apikeys --all-namespaces\nsudo kubectl get service-descriptors --all-namespaces\ncurl -X GET \\\n -H 'Authorization: Bearer eyJhbGciOiJSUzI....F463SrpOehQRaQ' \\\n -H 'Accept: application/json' -k \\\n https://127.0.0.1:6443/apis/proxy.otoroshi.io/v1/apikeys | jq\n```\n\nYou can see this as better `Ingress` resources. Just as any `Ingress` resource can define which controller it uses (using the `kubernetes.io/ingress.class` annotation), you can choose another kind of resource instead of `Ingress`. 
With Otoroshi CRDs you can even define resources like `Certificate`, `Apikey`, `AuthModules`, `JwtVerifier`, etc. It will help you to use all the power of Otoroshi while using the deployment model of kubernetes.\n \n@@@ warning\nwhen using Otoroshi CRDs, Kubernetes becomes the single source of truth for the synced entities. It means that any value in the descriptors deployed will override the one in the Otoroshi datastore each time it's synced. So be careful if you use the Otoroshi UI or the API, some changes in configuration may be overridden by the CRDs sync job.\n@@@\n\n### Resources examples\n\ngroup.yaml\n: @@snip [group.yaml](../snippets/crds/group.yaml) \n\napikey.yaml\n: @@snip [apikey.yaml](../snippets/crds/apikey.yaml) \n\nservice-descriptor.yaml\n: @@snip [service.yaml](../snippets/crds/service-descriptor.yaml) \n\ncertificate.yaml\n: @@snip [cert.yaml](../snippets/crds/certificate.yaml) \n\njwt.yaml\n: @@snip [jwt.yaml](../snippets/crds/jwt.yaml) \n\nauth.yaml\n: @@snip [auth.yaml](../snippets/crds/auth.yaml) \n\norganization.yaml\n: @@snip [orga.yaml](../snippets/crds/organization.yaml) \n\nteam.yaml\n: @@snip [team.yaml](../snippets/crds/team.yaml) \n\n\n### Configuration\n\nTo configure it, just go to the danger zone, and in `Global scripts` add the job named `Kubernetes Otoroshi CRDs Controller`. 
Then add the following configuration for the job (with your own tweak of course)\n\n```json\n{\n \"KubernetesConfig\": {\n \"enabled\": true,\n \"crds\": true,\n \"endpoint\": \"https://127.0.0.1:6443\",\n \"token\": \"eyJhbGciOiJSUzI....F463SrpOehQRaQ\",\n \"namespaces\": [\n \"*\"\n ]\n }\n}\n```\n\nthe configuration can have the following values \n\n```javascript\n{\n \"KubernetesConfig\": {\n \"endpoint\": \"https://127.0.0.1:6443\", // the endpoint to talk to the kubernetes api, optional\n \"token\": \"xxxx\", // the bearer token to talk to the kubernetes api, optional\n \"userPassword\": \"user:password\", // the user password tuple to talk to the kubernetes api, optional\n \"caCert\": \"/etc/ca.cert\", // the ca cert file path to talk to the kubernetes api, optional\n \"trust\": false, // trust any cert to talk to the kubernetes api, optional\n \"namespaces\": [\"*\"], // the watched namespaces\n \"labels\": [\"label\"], // the watched labels\n \"ingressClasses\": [\"otoroshi\"], // the watched kubernetes.io/ingress.class annotations, can be *\n \"defaultGroup\": \"default\", // the group to put services in otoroshi\n \"ingresses\": false, // sync ingresses\n \"crds\": true, // sync crds\n \"kubeLeader\": false, // delegate leader election to kubernetes, to know where the sync job should run\n \"restartDependantDeployments\": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments\n \"templates\": { // template for entities that will be merged with kubernetes entities. 
can be \"default\" to use otoroshi default templates\n \"service-group\": {},\n \"service-descriptor\": {},\n \"apikeys\": {},\n \"global-config\": {},\n \"jwt-verifier\": {},\n \"tcp-service\": {},\n \"certificate\": {},\n \"auth-module\": {},\n \"data-exporter\": {},\n \"script\": {},\n \"organization\": {},\n \"team\": {}\n }\n }\n}\n```\n\nIf `endpoint` is not defined, Otoroshi will try to get it from `$KUBERNETES_SERVICE_HOST` and `$KUBERNETES_SERVICE_PORT`.\nIf `token` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/token`.\nIf `caCert` is not defined, Otoroshi will try to get it from the file at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.\nIf `$KUBECONFIG` is defined, `endpoint`, `token` and `caCert` will be read from the current context of the file referenced by it.\n\nyou can find a more complete example of the configuration object [here](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/plugins/jobs/kubernetes/config.scala#L134-L163)\n\n### Note about `apikeys` and `certificates` resources\n\nApikeys and Certificates are a little bit different than the other resources. They have the ability to be defined without their secret part, but with an export setting so otoroshi will generate the secret parts and export the apikey or the certificate to a kubernetes secret. Then any app will be able to mount them as volumes (see the full example below)\n\nIn those resources you can define \n\n```yaml\nexportSecret: true \nsecretName: the-secret-name\n```\n\nand omit `clientSecret` for apikey or `publicKey`, `privateKey` for certificates. 
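As an illustration, a minimal `ApiKey` manifest relying on this export mechanism could look like the sketch below. Names here are hypothetical, and the `group_my-app-group` authorized entity is assumed to reference an existing service group:

```yaml
apiVersion: proxy.otoroshi.io/v1
kind: ApiKey
metadata:
  name: my-app-apikey
spec:
  # no clientSecret here: otoroshi generates the secret part
  # and exports it to the kubernetes secret named below
  exportSecret: true
  secretName: my-app-apikey-secret
  authorizedEntities:
    - group_my-app-group
```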
For certificates, you will have to provide a `csr` in order to generate them\n\n```yaml\ncsr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n - httpapps.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n```\n\nwhen apikeys are exported as kubernetes secrets, they will have the type `otoroshi.io/apikey-secret` with values `clientId` and `clientSecret`\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: apikey-1\ntype: otoroshi.io/apikey-secret\ndata:\n clientId: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n clientSecret: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n```\n\nwhen certificates are exported as kubernetes secrets, they will have the type `kubernetes.io/tls` with the standard values `tls.crt` (the full cert chain) and `tls.key` (the private key). 
For more convenience, they will also have a `cert.crt` value containing the actual certificate without the ca chain and `ca-chain.crt` containing the ca chain without the certificate.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: certificate-1\ntype: kubernetes.io/tls\ndata:\n tls.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n tls.key: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n cert.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA==\n ca-chain.crt: TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdA== \n```\n\n## Full CRD example\n\nthen you can deploy the previous example with a better configuration level, using mtls, apikeys, etc\n\nLet's say the app looks like:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\n// here we read the apikey to access http-app-2 from files mounted from secrets\nconst clientId = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientId').toString('utf8')\nconst clientSecret = fs.readFileSync('/var/run/secrets/kubernetes.io/apikeys/clientSecret').toString('utf8')\n\nconst backendKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/tls.key').toString('utf8')\nconst backendCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/cert.crt').toString('utf8')\nconst backendCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/backend/ca-chain.crt').toString('utf8')\n\nconst clientKey = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/tls.key').toString('utf8')\nconst clientCert = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/cert.crt').toString('utf8')\nconst clientCa = fs.readFileSync('/var/run/secrets/kubernetes.io/certs/client/ca-chain.crt').toString('utf8')\n\nfunction callApi2() {\n return new Promise((success, failure) => {\n const options = { \n // using the implicit internal name (*.global.otoroshi.mesh) 
of the other service descriptor passing through otoroshi\n hostname: 'http-app-service-descriptor-2.global.otoroshi.mesh', \n port: 443, \n path: '/', \n method: 'GET',\n headers: {\n 'Accept': 'application/json',\n 'Otoroshi-Client-Id': clientId,\n 'Otoroshi-Client-Secret': clientSecret,\n },\n cert: clientCert,\n key: clientKey,\n ca: clientCa\n }; \n let data = '';\n const req = https.request(options, (res) => { \n res.on('data', (d) => { \n data = data + d.toString('utf8');\n }); \n res.on('end', () => { \n success({ body: JSON.parse(data), res });\n }); \n res.on('error', (e) => { \n failure(e);\n }); \n }); \n req.end();\n })\n}\n\nconst options = { \n key: backendKey, \n cert: backendCert, \n ca: backendCa, \n // we want mtls behavior\n requestCert: true, \n rejectUnauthorized: true\n}; \nhttps.createServer(options, (req, res) => { \n res.writeHead(200, {'Content-Type': 'application/json'});\n callApi2().then(resp => {\n res.end(JSON.stringify({ message: `Hello to ${req.socket.getPeerCertificate().subject.CN}`, api2: resp.body })); \n });\n}).listen(443);\n```\n\nthen, the descriptors will be:\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: http-app-deployment\nspec:\n selector:\n matchLabels:\n run: http-app-deployment\n replicas: 1\n template:\n metadata:\n labels:\n run: http-app-deployment\n spec:\n containers:\n - image: foo/http-app\n imagePullPolicy: IfNotPresent\n name: otoroshi\n ports:\n - containerPort: 443\n name: \"https\"\n volumeMounts:\n - name: apikey-volume\n # here you will be able to read apikey from files \n # - /var/run/secrets/kubernetes.io/apikeys/clientId\n # - /var/run/secrets/kubernetes.io/apikeys/clientSecret\n mountPath: \"/var/run/secrets/kubernetes.io/apikeys\"\n readOnly: true\n - name: backend-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/backend/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/backend/tls.key\n 
mountPath: \"/var/run/secrets/kubernetes.io/certs/backend\"\n readOnly: true\n - name: client-cert-volume\n # here you will be able to read app cert from files \n # - /var/run/secrets/kubernetes.io/certs/client/tls.crt\n # - /var/run/secrets/kubernetes.io/certs/client/tls.key\n mountPath: \"/var/run/secrets/kubernetes.io/certs/client\"\n readOnly: true\n volumes:\n - name: apikey-volume\n secret:\n # here we reference the secret name from apikey http-app-2-apikey-1\n secretName: secret-2\n - name: backend-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-backend\n secretName: http-app-certificate-backend-secret\n - name: client-cert-volume\n secret:\n # here we reference the secret name from cert http-app-certificate-client\n secretName: http-app-certificate-client-secret\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: http-app-service\nspec:\n ports:\n - port: 8443\n targetPort: https\n name: https\n selector:\n run: http-app-deployment\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ServiceGroup\nmetadata:\n name: http-app-group\n annotations:\n otoroshi.io/id: http-app-group\nspec:\n description: a group to hold services about the http-app\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-apikey-1\n# this apikey can be used to access the app\nspec:\n # a secret name secret-1 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-1\n authorizedEntities: \n - group_http-app-group\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-1\n# this apikey can be used to access another app in a different group\nspec:\n # a secret name secret-2 will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: secret-2\n authorizedEntities: \n - group_http-app-2-group\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-frontend\nspec:\n 
description: certificate for the http-app on otoroshi frontend\n autoRenew: true\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - httpapp.foo.bar\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-front, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-backend\nspec:\n description: certificate for the http-app deployed on pods\n autoRenew: true\n # a secret name http-app-certificate-backend-secret will be created by otoroshi and can be used by containers\n exportSecret: true \n secretName: http-app-certificate-backend-secret\n csr:\n issuer: CN=Otoroshi Root\n hosts: \n - http-app-service \n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-back, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: Certificate\nmetadata:\n name: http-app-certificate-client\nspec:\n description: certificate for the http-app\n autoRenew: true\n secretName: http-app-certificate-client-secret\n csr:\n issuer: CN=Otoroshi Root\n key:\n algo: rsa\n size: 2048\n subject: UID=httpapp-client, O=OtoroshiApps\n client: false\n ca: false\n duration: 31536000000\n signatureAlg: SHA256WithRSAEncryption\n digestAlg: SHA-256\n---\napiVersion: proxy.otoroshi.io/v1\nkind: ServiceDescriptor\nmetadata:\n name: http-app-service-descriptor\nspec:\n description: the service descriptor for the http app\n groups: \n - http-app-group\n forceHttps: true\n hosts:\n - httpapp.foo.bar # hostname exposed outside of the kubernetes cluster\n # - http-app-service-descriptor.global.otoroshi.mesh # implicit internal name inside the kubernetes cluster \n matchingRoot: /\n targets:\n - url: https://http-app-service:8443\n # alternatively, you can use serviceName and servicePort to use pods ip addresses\n # serviceName: 
http-app-service\n # servicePort: https\n mtlsConfig:\n # use mtls to contact the backend\n mtls: true\n certs: \n # reference the DN for the client cert\n - UID=httpapp-client, O=OtoroshiApps\n trustedCerts: \n # reference the DN for the CA cert \n - CN=Otoroshi Root\n sendOtoroshiHeadersBack: true\n xForwardedHeaders: true\n overrideHost: true\n allowHttp10: false\n publicPatterns:\n - /health\n additionalHeaders:\n x-foo: bar\n# here you can specify everything supported by otoroshi like jwt-verifiers, auth config, etc ... for more information about it, just go to https://maif.github.io/otoroshi/swagger-ui/index.html\n```\n\nnow with this descriptor deployed, you can access your app with a command like \n\n```sh\nCLIENT_ID=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientId}\" | base64 --decode`\nCLIENT_SECRET=`kubectl get secret secret-1 -o jsonpath=\"{.data.clientSecret}\" | base64 --decode`\ncurl -X GET https://httpapp.foo.bar/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n## Expose Otoroshi to the outside world\n\nIf you deploy Otoroshi on a kubernetes cluster, the Otoroshi service is deployed as a loadbalancer (service type: `LoadBalancer`). You'll need to declare in your DNS settings any name that can be routed by otoroshi going to the loadbalancer endpoint (CNAME or ip addresses) of your kubernetes distribution. If you use a managed kubernetes cluster from a cloud provider, it will work seamlessly as they will provide external loadbalancers out of the box. However, if you use a bare metal kubernetes cluster, it doesn't come with support for external loadbalancers (service of type `LoadBalancer`). So you will have to provide this feature in order to route external TCP traffic to Otoroshi containers running inside the kubernetes cluster. 
You can use projects like [MetalLB](https://metallb.universe.tf/) that provide software `LoadBalancer` services to bare metal clusters or you can use and customize examples in the installation section.\n\n@@@ warning\nWe don't recommend running Otoroshi behind an existing ingress controller (or something like that) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.\n@@@ \n\n## Access a service from inside the k8s cluster\n\n### Using host header overriding\n\nYou can access any service referenced in otoroshi, through otoroshi from inside the kubernetes cluster by using the otoroshi service name (if you use a template based on https://github.com/MAIF/otoroshi/tree/master/kubernetes/base deployed in the otoroshi namespace) and the host header with the service domain like:\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET -H 'Host: httpapp.foo.bar' https://otoroshi-service.otoroshi.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using dedicated services\n\nit's also possible to define services that target the otoroshi deployment (or otoroshi workers deployment) and use them as valid hosts in otoroshi services \n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: my-awesome-service\nspec:\n selector:\n # run: otoroshi-deployment\n # or in cluster mode\n run: otoroshi-worker-deployment\n ports:\n - port: 8080\n name: \"http\"\n targetPort: \"http\"\n - port: 8443\n name: \"https\"\n targetPort: \"https\"\n```\n\nand access it like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-namespace.svc.cluster.local:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using coredns integration\n\nYou can also enable the coredns integration to simplify the flow. 
You can use the following keys in the plugin config:\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"coreDnsIntegration\": true, // enable coredns integration for intra cluster calls\n \"kubeSystemNamespace\": \"kube-system\", // the namespace where coredns is deployed\n \"corednsConfigMap\": \"coredns\", // the name of the coredns configmap\n \"otoroshiServiceName\": \"otoroshi-service\", // the name of the otoroshi service, could be otoroshi-workers-service\n \"otoroshiNamespace\": \"otoroshi\", // the namespace where otoroshi is deployed\n \"clusterDomain\": \"cluster.local\", // the domain for cluster services\n ...\n }\n}\n```\n\notoroshi will patch the coredns config at startup, then you can call your services like\n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh` or `${service-name}.${service-namespace}.svc.otoroshi.local`\n\n### Using coredns with manual patching\n\nyou can also patch the coredns config manually\n\n```sh\nkubectl edit configmaps coredns -n kube-system # or your own custom config map\n```\n\nand change the `Corefile` data to add the following snippet at the end of the file\n\n```yaml\notoroshi.mesh:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n rewrite name regex (.*)\\.otoroshi\\.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n```\n\nyou can also define a simpler rewrite if it suits your use case better\n\n```\nrewrite name my-service.otoroshi.mesh otoroshi-worker-service.otoroshi.svc.cluster.local\n```\n\ndo not hesitate to change `otoroshi-worker-service.otoroshi` according to your own setup. If otoroshi is not in cluster mode, change it to `otoroshi-service.otoroshi`. If otoroshi is not deployed in the `otoroshi` namespace, change it to `otoroshi-service.the-namespace`, etc.\n\nBy default, all services created from CRDs service descriptors are exposed as `${service-name}.${service-namespace}.otoroshi.mesh`\n\nthen you can call your service like \n\n```sh\nCLIENT_ID=\"xxx\"\nCLIENT_SECRET=\"xxx\"\n\ncurl -X GET https://my-awesome-service.my-awesome-service-namespace.otoroshi.mesh:8443/get -u \"$CLIENT_ID:$CLIENT_SECRET\"\n```\n\n### Using old kube-dns system\n\nif you're stuck with an old version of kubernetes, it uses kube-dns that is not supported by otoroshi, so you will have to provide your own coredns deployment and declare it as a stubDomain in the old kube-dns system. 
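As a sketch, declaring that stubDomain means patching the kube-dns ConfigMap so that every `otoroshi.mesh` query is forwarded to your dedicated coredns deployment. The ClusterIP below (`10.3.0.200`) is hypothetical; replace it with the actual ClusterIP of your coredns service:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # route every *.otoroshi.mesh query to the dedicated coredns instance
  stubDomains: |
    {"otoroshi.mesh": ["10.3.0.200"]}
```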
\n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the kube-dns integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"kubeDnsOperatorIntegration\": true, // enable kube-dns integration for intra cluster calls\n \"kubeDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"kubeDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"kubeDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\n### Using Openshift DNS operator\n\nThe Openshift DNS operator does not allow much customization of the DNS configuration, so you will have to provide your own coredns deployment and declare it as a stub in the Openshift DNS operator. \n\nHere is an example of coredns deployment with otoroshi domain config\n\ncoredns.yaml\n: @@snip [coredns.yaml](../snippets/kubernetes/kustomize/base/coredns.yaml)\n\nthen you can enable the Openshift DNS operator integration in the otoroshi kubernetes job\n\n```javascript\n{\n \"KubernetesConfig\": {\n ...\n \"openshiftDnsOperatorIntegration\": true, // enable openshift dns operator integration for intra cluster calls\n \"openshiftDnsOperatorCoreDnsNamespace\": \"otoroshi\", // namespace where coredns is installed\n \"openshiftDnsOperatorCoreDnsName\": \"otoroshi-dns\", // name of the coredns service\n \"openshiftDnsOperatorCoreDnsPort\": 5353, // port of the coredns service\n ...\n }\n}\n```\n\ndon't forget to update the otoroshi `ClusterRole`\n\n```yaml\n- apiGroups:\n - operator.openshift.io\n resources:\n - dnses\n verbs:\n - get\n - list\n - watch\n - update\n```\n\n## CRD validation in kubectl\n\nIn order to get CRD validation before manifest deployments right inside kubectl, you can deploy a validation webhook that will do the trick. 
Also check that you have the `otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator` request sink enabled.\n\nvalidation-webhook.yaml\n: @@snip [validation-webhook.yaml](../snippets/kubernetes/kustomize/base/validation-webhook.yaml)\n\n## Easier integration with otoroshi-sidecar\n\nOtoroshi can help you to easily use existing services without modifications while getting all the perks of otoroshi like apikeys, mTLS, exchange protocol, etc. To do so, otoroshi will inject a sidecar container in the pod of your deployment that will handle calls coming from otoroshi and going to otoroshi. To enable otoroshi-sidecar, you need to deploy the following admission webhook. Also check that you have the `otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector` request sink enabled.\n\nsidecar-webhook.yaml\n: @@snip [sidecar-webhook.yaml](../snippets/kubernetes/kustomize/base/sidecar-webhook.yaml)\n\nthen it's quite easy to add the sidecar, just add the following label to your pod `otoroshi.io/sidecar: inject` and some annotations to tell otoroshi what certificates and apikeys to use.\n\n```yaml\nannotations:\n otoroshi.io/sidecar-apikey: backend-apikey\n otoroshi.io/sidecar-backend-cert: backend-cert\n otoroshi.io/sidecar-client-cert: oto-client-cert\n otoroshi.io/token-secret: secret\n otoroshi.io/expected-dn: UID=oto-client-cert, O=OtoroshiApps\n```\n\nnow you can just call your otoroshi-handled apis from inside your pod like `curl http://my-service.namespace.otoroshi.mesh/api` without passing any apikey or client certificate and the sidecar will handle everything for you. 
Same thing for calls from otoroshi to your pod, everything will be done in mTLS fashion with apikeys and otoroshi exchange protocol\n\nhere is a full example\n\nsidecar.yaml\n: @@snip [sidecar.yaml](../snippets/kubernetes/kustomize/base/sidecar.yaml)\n\n@@@ warning\nPlease avoid using port `80` for your pod as it's the default port to access otoroshi from your pod and the call will be redirected to the sidecar via an iptables rule\n@@@\n\n## Daikoku integration\n\nIt is possible to easily integrate daikoku generated apikeys without any human interaction with the actual apikey secret. To do that, create a plan in Daikoku and setup the integration mode to `Automatic`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen when a user subscribes to an apikey, they will only see an integration token\n\n@@@ div { .centered-img }\n\n@@@\n\nthen just create an ApiKey manifest with this token and you're good to go \n\n```yaml\napiVersion: proxy.otoroshi.io/v1\nkind: ApiKey\nmetadata:\n name: http-app-2-apikey-3\nspec:\n exportSecret: true \n secretName: secret-3\n daikokuToken: RShQrvINByiuieiaCBwIZfGFgdPu7tIJEN5gdV8N8YeH4RI9ErPYJzkuFyAkZ2xy\n```\n\n"},{"name":"scaling.md","id":"/deploy/scaling.md","url":"/deploy/scaling.html","title":"Scaling Otoroshi","content":"# Scaling Otoroshi\n\n## Using multiple instances with a front load balancer\n\nOtoroshi has been designed to work with multiple instances. If you already have an infrastructure using frontal load balancing, you just have to declare Otoroshi instances as the target of all domain names handled by Otoroshi\n\n## Using master / workers mode of Otoroshi\n\nYou can read everything about it in @ref:[the clustering section](../deploy/clustering.md) of the documentation.\n\n## Using IPVS\n\nYou can use [IPVS](https://en.wikipedia.org/wiki/IP_Virtual_Server) to load balance layer 4 traffic directly from the Linux Kernel to multiple instances of Otoroshi. 
You can find an example configuration [here](http://www.linuxvirtualserver.org/VS-DRouting.html) \n\n## Using DNS Round Robin\n\nYou can use the [DNS round robin technique](https://en.wikipedia.org/wiki/Round-robin_DNS) to declare multiple A records under the domain names handled by Otoroshi.\n\n## Using software L4/L7 load balancers\n\nYou can use software load balancers like NGINX or HAProxy to load balance layer 4 or layer 7 traffic to multiple instances of Otoroshi.\n\nNGINX L7\n: @@snip [nginx-http.conf](../snippets/nginx-http.conf) \n\nNGINX L4\n: @@snip [nginx-tcp.conf](../snippets/nginx-tcp.conf) \n\nHA Proxy L7\n: @@snip [haproxy-http.conf](../snippets/haproxy-http.conf) \n\nHA Proxy L4\n: @@snip [haproxy-tcp.conf](../snippets/haproxy-tcp.conf) \n\n## Using a custom TCP load balancer\n\nYou can also use any other TCP load balancer, from a hardware box to a small js file like\n\ntcp-proxy.js\n: @@snip [tcp-proxy.js](../snippets/tcp-proxy.js) \n\ntcp-proxy.rs\n: @@snip [tcp-proxy.rs](../snippets/proxy.rs) \n\n"},{"name":"dev.md","id":"/dev.md","url":"/dev.html","title":"Developing Otoroshi","content":"# Developing Otoroshi\n\nIf you want to play with Otoroshi's code, here are some tips\n\n## The tools\n\nYou will need\n\n* git\n* JDK >= 11\n* SBT >= 1.3.x\n* Node 13 + yarn 1.x\n\n## Clone the repository\n\n```sh\ngit clone https://github.com/MAIF/otoroshi.git\n```\n\nor fork otoroshi and clone your own repository.\n\n## Run otoroshi in dev mode\n\nto run otoroshi in dev mode, you'll need to run two separate processes to serve the javascript UI and the server part.\n\n### Javascript side\n\njust go to `/otoroshi/javascript` and install the dependencies with\n\n```sh\nyarn install\n# or\nnpm install\n```\n\nthen run the dev server with\n\n```sh\nyarn start\n# or\nnpm run start\n```\n\n### Server side\n\nsetup SBT opts with\n\n```sh\nexport SBT_OPTS=\"-Xmx2G -Xss6M\"\n```\n\nthen just go to `/otoroshi` and run the sbt console with 
\n\n```sh\nsbt\n```\n\nthen in the sbt console run the following command\n\n```sh\n~reStart\n# to pass jvm args, you can use: ~reStart --- -Dotoroshi.storage=memory ...\n```\n\nyou can now access your otoroshi instance at `http://otoroshi.oto.tools:9999`\n\n## Test otoroshi\n\nto run the otoroshi tests, just go to `/otoroshi` and run the main test suite with\n\n```sh\nsbt 'testOnly OtoroshiTests'\n```\n\n## Create a release\n\njust go to `/otoroshi/javascript` and then build the UI\n\n```sh\nyarn install\nyarn build\n```\n\nthen go to `/otoroshi` and build the otoroshi distribution\n\n```sh\nsbt ';clean;compile;dist;assembly'\n```\n\nthe otoroshi build is waiting for you in `/otoroshi/target/scala-2.12/otoroshi.jar` or `/otoroshi/target/universal/otoroshi-1.x.x.zip`\n\n## Build the documentation\n\nfrom the root of your repository run\n\n```sh\nsh ./scripts/doc.sh all\n```\n\nThe documentation is located at `manual/target/paradox/site/main/`\n\n## Format the sources\n\nfrom the root of your repository run\n\n```sh\nsh ./scripts/fmt.sh\n```\n"},{"name":"apikeys.md","id":"/entities/apikeys.md","url":"/entities/apikeys.html","title":"Apikeys","content":"# Apikeys\n\nAn API key is a unique identifier used to connect to, and call, an Otoroshi route. 
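As a quick sketch of how an apikey can be passed on a call (the credentials here are illustrative placeholders, and the `Otoroshi-Client-Id`/`Otoroshi-Client-Secret` header names and hosts are assumptions for the example), the Basic auth. variant encodes `clientId:clientSecret` in base64:

```shell
# hypothetical apikey credentials -- replace with your own
CLIENT_ID="apikey-id"
CLIENT_SECRET="apikey-secret"

# the Basic auth. variant expects base64("clientId:clientSecret")
AUTH=$(printf '%s:%s' "$CLIENT_ID" "$CLIENT_SECRET" | base64)
echo "$AUTH"

# the same credentials could then be passed either way, e.g.:
#   curl -H "Otoroshi-Client-Id: $CLIENT_ID" -H "Otoroshi-Client-Secret: $CLIENT_SECRET" http://api.oto.tools:8080/
#   curl -H "Authorization: Basic $AUTH" http://api.oto.tools:8080/
```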
\n\n@@@ div { .centered-img }\n\n@@@\n\nYou can find a concrete example @ref:[here](../how-to-s/secure-with-apikey.md)\n\n* `ApiKey Id`: the id is a unique random key that will represent this API key\n* `ApiKey Secret`: the secret is a random key used to validate the API key\n* `ApiKey Name`: a name for the API key, used for debug purposes\n* `ApiKey description`: a useful description for this apikey\n* `Valid until`: auto disable the apikey after this date\n* `Enabled`: if the API key is disabled, then any call using this API key will fail\n* `Read only`: if the API key is in read only mode, every request done with this api key will only work for GET, HEAD, OPTIONS verbs\n* `Allow pass by clientid only`: here you allow clients to pass only the client id in a specific header in order to grant access to the underlying api\n* `Constrained services only`: this apikey can only be used on services using apikey routing constraints\n* `Authorized on`: the groups/services linked to this api key\n\n### Metadata and tags\n\n* `Tags`: tags attached to the api key\n* `Metadata`: metadata attached to the api key\n\n### Automatic secret rotation\n\nApikeys can handle automatic secret rotation by themselves. When enabled, the rotation changes the secret every `Rotation every` duration. During the `Grace period` both secrets will be usable.\n \n* `Enabled`: enable automatic apikey secret rotation\n* `Rotation every`: rotate secrets every\n* `Grace period`: period when both secrets can be used\n* `Next client secret`: display the next generated client secret\n\n### Restrictions\n\n* `Enabled`: enable restrictions\n* `Allow last`: Otoroshi will test forbidden and notFound paths before testing allowed paths\n* `Allowed`: allowed paths\n* `Forbidden`: forbidden paths\n* `Not Found`: not found paths\n\n### Call examples\n\n* `Curl Command`: simple request with the api key passed by header\n* `Basic Auth. 
Header`: authorization header with the api key in base64 encoded format\n* `Curl Command with Basic Auth. Header`: simple request with the api key passed in the Authorization header in base64 format\n\n### Quotas\n\n* `Throttling quota`: the authorized number of calls per second\n* `Daily quota`: the authorized number of calls per day\n* `Monthly quota`: the authorized number of calls per month\n\n@@@ warning\n\nDaily and monthly quotas are based on the following rules:\n\n* daily quota is computed between 00h00:00.000 and 23h59:59.999 of the current day\n* monthly quota is computed between the first day of the month at 00h00:00.000 and the last day of the month at 23h59:59.999\n@@@\n\n### Quotas consumption\n\n* `Consumed daily calls`: the number of calls consumed today\n* `Remaining daily calls`: the remaining number of calls for today\n* `Consumed monthly calls`: the number of calls consumed this month\n* `Remaining monthly calls`: the remaining number of calls for this month\n\n"},{"name":"auth-modules.md","id":"/entities/auth-modules.md","url":"/entities/auth-modules.html","title":"Authentication modules","content":"# Authentication modules\n\nThe authentication modules manage the access to the Otoroshi UI and can protect a route.\n\nA `private Otoroshi app` is an Otoroshi route with the Authentication plugin enabled.\n\nThe list of supported authentication modules is:\n\n* `OAuth 2.0/2.1` : an authorization standard that allows a user to grant limited access to their resources on one site to another site, without having to expose their credentials\n* `OAuth 1.0a` : the original standard for access delegation\n* `In memory` : create users directly in Otoroshi with rights and metadata\n* `LDAP : Lightweight Directory Access Protocol` : connect users using a set of LDAP servers\n* `SAML V2 - Security Assertion Markup Language` : an open-standard, XML-based data format that allows businesses to communicate user authentication and authorization information to partner 
companies and enterprise applications their employees may use.\n\nAll authentication modules have a unique `id`, a `name` and a `description`.\n\nEach module also has the following fields : \n\n* `Tags`: list of tags associated to the module\n* `Metadata`: list of metadata associated to the module\n* `HttpOnly`: if enabled, the cookie cannot be accessed through client side script, preventing cross-site scripting (XSS) by not revealing the cookie to a third party\n* `Secure`: if enabled, the cookie will only be sent over a secure channel, typically HTTPS.\n* `Session max. age`: duration until the session expires\n* `User validators`: a list of validators that will check if a user who successfully logged in actually has the right to pass otoroshi, based on the content of their profile. A validator is composed of a [JSONPath](https://goessner.net/articles/JsonPath/) that will tell what to check and a value that is the expected value. The JSONPath will be applied on a document that will look like\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"randomId\": \"xxxxx\",\n \"name\": \"john.doe@otoroshi.io\",\n \"email\": \"john.doe@otoroshi.io\",\n \"authConfigId\": \"xxxxxxxx\",\n \"profile\": { // the profile shape depends heavily on the identity provider\n \"sub\": \"xxxxxx\",\n \"nickname\": \"john.doe\",\n \"name\": \"john.doe@otoroshi.io\",\n \"picture\": \"https://foo.bar/avatar.png\",\n \"updated_at\": \"2022-04-20T12:57:39.723Z\",\n \"email\": \"john.doe@otoroshi.io\",\n \"email_verified\": true,\n \"rights\": [\"one\", \"two\"]\n },\n \"token\": { // the token shape depends heavily on the identity provider\n \"access_token\": \"xxxxxx\",\n \"refresh_token\": \"yyyyyy\",\n \"id_token\": \"zzzzzz\",\n \"scope\": \"openid profile email address phone offline_access\",\n \"expires_in\": 86400,\n \"token_type\": \"Bearer\"\n },\n \"realm\": \"global-oauth-xxxxxxx\",\n \"otoroshiData\": {\n ...\n },\n 
\"createdAt\": 1650459462650,\n \"expiredAt\": 1650545862652,\n \"lastRefresh\": 1650459462650,\n \"metadata\": {},\n \"tags\": []\n}\n```\n\nthe expected value support some syntax tricks like \n\n* `Not(value)` on a string to check if the current value does not equals another value\n* `Regex(regex)` on a string to check if the current value matches the regex\n* `RegexNot(regex)` on a string to check if the current value does not matches the regex\n* `Wildcard(*value*)` on a string to check if the current value matches the value with wildcards\n* `WildcardNot(*value*)` on a string to check if the current value does not matches the value with wildcards\n* `Contains(value)` on a string to check if the current value contains a value\n* `ContainsNot(value)` on a string to check if the current value does not contains a value\n* `Contains(Regex(regex))` on an array to check if one of the item of the array matches the regex\n* `ContainsNot(Regex(regex))` on an array to check if one of the item of the array does not matches the regex\n* `Contains(Wildcard(*value*))` on an array to check if one of the item of the array matches the wildcard value\n* `ContainsNot(Wildcard(*value*))` on an array to check if one of the item of the array does not matches the wildcard value\n* `Contains(value)` on an array to check if the array contains a value\n* `ContainsNot(value)` on an array to check if the array does not contains a value\n\nfor instance to check if the current user has the right `two`, you can write the following validator\n\n```js\n{\n \"path\": \"$.profile.rights\",\n \"value\": \"Contains(two)\"\n}\n```\n\n## OAuth 2.0 / OIDC provider\n\nIf you want to secure an app or your Otoroshi UI with this provider, you can check these tutorials : @ref[Secure an app with keycloak](../how-to-s/secure-app-with-keycloak.md) or @ref[Secure an app with auth0](../how-to-s/secure-app-with-auth0.md)\n\n* `Use cookie`: If your OAuth2 provider does not support query param in redirect uri, 
you can use cookies instead\n* `Use json payloads`: the access token, sent to retrieve the user info, will be passed in the body as JSON. If disabled, it will be sent as a map.\n* `Enabled PKCE flow`: with PKCE enabled, a malicious attacker can only intercept the authorization code, and they cannot exchange it for a token without the code verifier.\n* `Disable wildcard on redirect URIs`: As of OAuth 2.1, query parameters on redirect URIs are no longer allowed\n* `Refresh tokens`: Automatically refresh access token using the refresh token if available\n* `Read profile from token`: if enabled, the user profile will be read from the access token, otherwise the user profile will be retrieved from the user information url\n* `Super admins only`: all logged in users will have super admin rights\n* `Client ID`: a public identifier of your app\n* `Client Secret`: a secret known only to the application and the authorization server\n* `Authorize URL`: used to interact with the resource owner and get the authorization to access the protected resource\n* `Token URL`: used by the application in order to get an access token or a refresh token\n* `Introspection URL`: used to validate access tokens\n* `Userinfo URL`: used to retrieve the profile of the user\n* `Login URL`: used to redirect the user to the login provider page\n* `Logout URL`: redirect uri used by the identity provider to redirect the user after logging out\n* `Callback URL`: redirect uri sent to the identity provider to redirect the user after successfully connecting\n* `Access token field name`: field used to search access token in the response body of the token URL call\n* `Scope`: scopes presented to the user in the consent screen. 
Scopes are space-separated lists of identifiers used to specify what access privileges are being requested\n* `Claims`: requested name/value pairs that contain information about a user.\n* `Name field name`: Retrieve name from token field\n* `Email field name`: Retrieve email from token field\n* `Otoroshi metadata field name`: Retrieve metadata from token field\n* `Otoroshi rights field name`: Retrieve user rights from user profile\n* `Extra metadata`: merged with the user metadata\n* `Data override`: merged with extra metadata when a user connects to a `private app`\n* `Rights override`: useful when you want to erase the rights of a user and keep only specific rights. This field is the last to be applied on the user rights.\n* `Api key metadata field name`: used to extract api key metadata from the OIDC access token \n* `Api key tags field name`: used to extract api key tags from the OIDC access token \n* `Proxy host`: host of the proxy used to reach the identity provider\n* `Proxy port`: port of the proxy used to reach the identity provider\n* `Proxy principal`: user of the proxy \n* `Proxy password`: password of the proxy\n* `OIDC config url`: URI of the openid-configuration used to discover documents. 
By convention, this URI ends with `.well-known/openid-configuration`\n* `Token verification`: What kind of algorithm you want to use to verify/sign your JWT token with\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Hmac secret`: The Hmac secret\n* `Base64 encoded secret`: Is the secret encoded with base64\n* `Custom TLS Settings`: TLS settings for JWKS fetching\n* `TLS loose`: if enabled, loosens the TLS checks and accepts untrustful ssl configs\n* `Trust all`: allows any server certificates even the self-signed ones\n* `Client certificates`: list of client certificates used to communicate with JWKS server\n* `Trusted certificates`: list of trusted certificates received from JWKS server\n\n## OAuth 1.0a provider\n\nIf you want to secure an app or your Otoroshi UI with this provider, you can check this tutorial : @ref[Secure an app with OAuth 1.0a](../how-to-s/secure-with-oauth1-client.md)\n\n* `Http Method`: method used to get the request token and the access token \n* `Consumer key`: the identifier portion of the client credentials (equivalent to a username)\n* `Consumer secret`: the secret portion of the client credentials (equivalent to a password)\n* `Request Token URL`: url to retrieve the request token\n* `Authorize URL`: used to redirect the user to the login page\n* `Access token URL`: used to retrieve the access token from the server\n* `Profile URL`: used to get the user profile\n* `Callback URL`: used to redirect the user after successfully connecting\n* `Rights override`: override the rights of the connected user. With JSON format, each authenticated user, using email, can be associated to a list of rights on tenants and Otoroshi teams.\n\n## LDAP Authentication provider\n\nIf you want to secure an app or your Otoroshi UI with this provider, you can check this tutorial : @ref[Secure an app with LDAP](../how-to-s/secure-app-with-ldap.md)\n\n* `Basic auth.`: if enabled, user and password will be extracted from the `Authorization` header as a Basic authentication. 
It will skip the Otoroshi login page \n* `Allow empty password`: LDAP servers configured by default with the possibility to connect without password can be secured by this module to ensure that the user provides a password\n* `Super admins only`: All logged in users will have super admin rights\n* `Extract profile`: extract LDAP profile in the Otoroshi user\n* `LDAP Server URL`: list of LDAP servers to join. Otoroshi uses this list in sequence and swaps to the next server each time a server times out\n* `Search Base`: used as the global filter base\n* `Users search base`: concatenated with the search base to search users in LDAP\n* `Mapping group filter`: map LDAP groups with Otoroshi rights\n* `Search Filter`: used to filter users. *\${username}* is replaced by the email of the user and compared to the given field\n* `Admin username (bind DN)`: holds the name of the environment property for specifying the identity of the principal for authenticating the caller to the service\n* `Admin password`: holds the name of the environment property for specifying the credentials of the principal for authenticating the caller to the service\n* `Extract profile filters attributes in`: keep only attributes which match the regex\n* `Extract profile filters attributes not in`: keep only attributes which do not match the regex\n* `Name field name`: Retrieve name from LDAP field\n* `Email field name`: Retrieve email from LDAP field\n* `Otoroshi metadata field name`: Retrieve metadata from LDAP field\n* `Extra metadata`: merged with the user metadata\n* `Data override`: merged with extra metadata when a user connects to a `private app`\n* `Additional rights group`: list of virtual groups. A virtual group is composed of a list of users and a list of rights for each team/organization.\n* `Rights override`: useful when you want to erase the rights of a user and keep only specific rights. 
This field is the last to be applied on the user rights.\n\n## In memory provider\n\n* `Basic auth.`: if enabled, user and password will be extracted from the `Authorization` header as a Basic authentication. It will skip the Otoroshi login page \n* `Login with WebAuthn` : enable login via WebAuthn\n* `Users`: list of users with *name*, *email* and *metadata*. The default password is *password*. The edit button is useful when you want to change the password of the user. The reset button reinitializes the password. \n* `Users raw`: show the registered users with their profile and their rights. You can directly edit each field, especially the rights of the user.\n\n## SAML v2 provider\n\n* `Single sign on URL`: the Identity Provider Single Sign-On URL\n* `The protocol binding for the login request`: the protocol binding for the login request\n* `Single Logout URL`: a SAML flow that allows the end-user to logout from a single session and be automatically logged out of all related sessions that were established during SSO\n* `The protocol binding for the logout request`: the protocol binding for the logout request\n* `Sign documents`: should SAML requests be signed by Otoroshi?\n* `Validate Assertions Signature`: Enable/disable signature validation of SAML assertions\n* `Validate assertions with Otoroshi certificate`: validate assertions with Otoroshi certificate. 
If disabled, the `Encryption Certificate` and `Encryption Private Key` fields can be used to pass a certificate and a private key to validate assertions.\n* `Encryption Certificate`: certificate used to verify assertions\n* `Encryption Private Key`: private key used to verify assertions\n* `Signing Certificate`: certificate used to sign documents\n* `Signing Private Key`: private key to sign documents\n* `Signature algorithm`: the signature algorithm to use to sign documents\n* `Canonicalization Method`: canonicalization method for XML signatures \n* `Encryption KeyPair`: the keypair used to sign/verify assertions\n* `Name ID Format`: SP and IdP usually communicate with each other about a subject. That subject should be identified through a NAME-IDentifier, which should be in some format so that it is easy for the other party to identify it based on the Format\n* `Use NameID format as email`: use NameID format as email. If disabled, the email will be searched in the attributes\n* `URL issuer`: the URL of the IdP that will issue the security token\n* `Validate Signature`: enable/disable signature validation of SAML responses\n* `Validate Assertions Signature`: should SAML assertions be decrypted?\n* `Validating Certificates`: the certificate in PEM format that must be used to check for signatures.\n\n## Special routes\n\nwhen using private apps with auth. modules, you can access special routes that can help you \n\n```sh \nGET 'http://xxxxxxxx.xxxx.xx/.well-known/otoroshi/logout' # trigger logout for the current auth. 
module\nGET 'http://xxxxxxxx.xxxx.xx/.well-known/otoroshi/me' # get the currently logged in user profile (do not forget to pass cookies)\n```\n\n## Related pages\n* @ref[Secure an app with auth0](../how-to-s/secure-app-with-auth0.md)\n* @ref[Secure an app with keycloak](../how-to-s/secure-app-with-keycloak.md)\n* @ref[Secure an app with LDAP](../how-to-s/secure-app-with-ldap.md)\n* @ref[Secure an app with OAuth 1.0a](../how-to-s/secure-with-oauth1-client.md)"},{"name":"backends.md","id":"/entities/backends.md","url":"/entities/backends.html","title":"Backends","content":"# Backends\n\nA backend represents the list of servers to target in a route, along with its client settings, load balancing, etc.\n\nBackends can be defined directly in the route designer or on their dedicated page in order to be reusable.\n\n## UI page\n\nYou can find all backends [here](http://otoroshi.oto.tools:8080/bo/dashboard/backends)\n\n## Global Properties\n\n* `Targets root path`: the path to add to each request sent to the downstream service \n* `Full path rewrite`: When enabled, the path of the uri will be totally stripped and replaced by the value of `Targets root path`. If this value contains expression language expressions, they will be interpolated before forwarding the request to the backend. When combined with things like named path parameters, it is possible to perform a full URL rewrite on the target path like\n\n* input: `subdomain.domain.tld/api/users/$id<[0-9]+>/bills`\n* output: `target.domain.tld/apis/v1/basic_users/${req.pathparams.id}/all_bills`\n\n## Targets\n\nThe list of targets that Otoroshi will proxy and expose through the subdomain defined before. Otoroshi will do round-robin load balancing between all those targets with a circuit breaker mechanism to avoid cascading failures.\n\n* `id`: unique id of the target\n* `Hostname`: the hostname of the target without scheme\n* `Port`: the port of the target\n* `TLS`: call the target via https\n* `Weight`: the weight of the target. 
This value is used by the load balancing strategy to dispatch the traffic between all targets\n* `Predicate`: a function to filter targets from the target list based on a predefined predicate\n* `Protocol`: protocol used to call the target, can only be `HTTP/1.0`, `HTTP/1.1`, `HTTP/2.0` or `HTTP/3.0`\n* `IP address`: the ip address of the target\n* `TLS Settings`:\n * `Enabled`: enable this section\n * `TLS loose`: if enabled, loosens the TLS checks and accepts untrustful ssl configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with the downstream service\n * `Trusted certificates`: list of trusted certificates received from the downstream service\n\n\n## Health check\n\n* `Enabled`: if enabled, the health check URL will be called at regular intervals\n* `URL`: the URL to call to run the health check\n\n## Load balancing\n\n* `Type`: the load balancing algorithm used\n\n## Client settings\n\n* `backoff factor`: specify the factor to multiply the delay for each retry (default value 2)\n* `retries`: specify how many times the client will retry to fetch the result of the request after an error before giving up. (default value 1)\n* `max errors`: specify how many errors can pass before opening the circuit breaker (default value 20)\n* `global timeout`: specify how long the global call (with retries) should last at most in milliseconds. (default value 30000)\n* `connection timeout`: specify how long each connection should last at most in milliseconds. (default value 10000)\n* `idle timeout`: specify how long each connection can stay in idle state at most in milliseconds (default value 60000)\n* `call timeout`: Specify how long each call should last at most in milliseconds. (default value 30000)\n* `call and stream timeout`: specify how long each call should last at most in milliseconds for handling the request and streaming the response. 
(default value 120000)\n* `initial delay`: delay after which the first retry will happen if needed (default value 50)\n* `sample interval`: specify the delay between two retries. On each retry, the delay is multiplied by the backoff factor (default value 2000)\n* `cache connection`: try to keep the tcp connection alive between requests (default value false)\n* `cache connection queue size`: queue size for an open tcp connection (default value 2048)\n* `custom timeouts` (list): \n * `Path`: the path on which the timeout will be active\n * `Client connection timeout`: specify how long each connection should last at most in milliseconds.\n * `Client idle timeout`: specify how long each connection can stay in idle state at most in milliseconds.\n * `Client call and stream timeout`: specify how long each call should last at most in milliseconds for handling the request and streaming the response.\n * `Call timeout`: Specify how long each call should last at most in milliseconds.\n * `Client global timeout`: specify how long the global call (with retries) should last at most in milliseconds.\n\n## Proxy\n\n* `host`: host of the proxy used to call the target\n* `port`: port of the proxy used to call the target\n* `protocol`: protocol of the proxy used to call the target\n* `principal`: user of the proxy \n* `password`: password of the proxy\n"},{"name":"certificates.md","id":"/entities/certificates.md","url":"/entities/certificates.html","title":"Certificates","content":"# Certificates\n\nAll generated and imported certificates are listed in the `https://otoroshi.xxxx/bo/dashboard/certificates` page. All those certificates can be used to serve traffic with TLS, perform mTLS calls, sign and verify JWT tokens.\n\nThe list of available actions is:\n\n* `Add item`: redirects the user to the certificate creation page. 
It's useful when you already have a certificate (like a pem file) and want to load it in Otoroshi.\n* `Let's Encrypt certificate`: asks Let's Encrypt for a certificate matching a given host \n* `Create certificate`: issues a certificate with an existing Otoroshi certificate as CA.\n* `Import .p12 file`: loads a p12 file as certificate\n\n## Add item\n\n* `Id`: the generated unique id of the certificate\n* `Name`: the name of the certificate\n* `Description`: the description of the certificate\n* `Auto renew cert.`: the certificate will be renewed when it expires. Only works with a CA from Otoroshi and a known private key\n* `Client cert.`: the certificate generated will be used to identify a client to a server\n* `Keypair`: the certificate entity will be a pair of public key and private key.\n* `Public key exposed`: if true, the public key will be exposed on `http://otoroshi-api.your-domain/.well-known/jwks.json`\n* `Certificate status`: the current status of the certificate. It can be valid if the certificate is not revoked and not expired, or equal to the reason of the revocation\n* `Certificate full chain`: list of certificates used to authenticate a client or a server\n* `Certificate private key`: the private key of the certificate, or nothing if wanted. You can omit it if you just want to add a certificate full chain to trust.\n* `Private key password`: the password to protect the private key\n* `Certificate tags`: the tags attached to the certificate\n* `Certificate metadata`: the metadata attached to the certificate\n\n## Let's Encrypt certificate\n\n* `Let's encrypt`: if enabled, the certificate will be generated by Let's Encrypt. If disabled, the user will be redirected to the `Create certificate` page\n* `Host`: the host sent to Let's Encrypt to issue the certificate\n\n## Create certificate view\n\n* `Issuer`: the CA used to sign your certificate\n* `CA certificate`: if enabled, the certificate will be used as an authority certificate. 
Once generated, it will be used as a CA to sign new certificates\n* `Let's Encrypt`: redirects to the Let's Encrypt page to request a certificate\n* `Client certificate`: the certificate generated will be used to identify a client to a server\n* `Include A.I.A`: include authority information access urls in the certificate\n* `Key Type`: the type of the private key\n* `Key Size`: the size of the private key\n* `Signature Algorithm`: the signature algorithm used to sign the certificate\n* `Digest Algorithm`: the digest algorithm used\n* `Validity`: how long your certificate will be valid\n* `Subject DN`: the subject DN of your certificate\n* `Hosts`: the hosts of your certificate\n\n"},{"name":"data-exporters.md","id":"/entities/data-exporters.md","url":"/entities/data-exporters.html","title":"Data exporters","content":"# Data exporters\n\nThe data exporters are the way to export alerts and events from Otoroshi to an external storage.\n\nTo try them, you can follow @ref[this tutorial](../how-to-s/export-alerts-using-mailgun.md).\n\n## Common fields\n\n* `Type`: the type of event exporter\n* `Enabled`: enable or disable the exporter\n* `Name`: the name given to the exporter\n* `Description`: the data exporter description\n* `Tags`: list of tags associated to the module\n* `Metadata`: list of metadata associated to the module\n\nAll exporters are split in three parts. The first and second parts are common and the last is specific to each exporter.\n\n* `Filtering and projection` : section to filter the list of sent events and alerts. The projection field allows you to export only certain event fields and reduce the size of exported data. It's composed of `Filtering` and `Projection` fields. To get the full usage of these elements, read @ref:[this section](#matching-and-projections)\n* `Queue details`: set of fields to adjust the workers of the exporter. 
\n * `Buffer size`: if elements are pushed onto the queue faster than the source is consumed the overflow will be handled with a strategy specified by the user. Keep in memory the number of events.\n * `JSON conversion workers`: number of workers used to transform events to JSON format in parallel\n * `Send workers`: number of workers used to send transformed events\n * `Group size`: chunk up this stream into groups of elements received within a time window (the time window is the next field)\n * `Group duration`: waiting time before sending the group of events. If the group size is reached before the group duration, the events will be instantly sent\n \nFor the last part, the `Exporter configuration` will be detailed individually.\n\n## Matching and projections\n\n**Filtering** is used to **include** or **exclude** some kinds of events and alerts. For each include and exclude field, you can add a list of key-value pairs. \n\nLet's say we only want to keep Otoroshi alerts\n```json\n{ \"include\": [{ \"@type\": \"AlertEvent\" }] }\n```\n\nOtoroshi provides a list of rules to keep only events with specific values. We will use the following event to illustrate.\n\n```json\n{\n \"foo\": \"bar\",\n \"type\": \"AlertEvent\",\n \"alert\": \"big-alert\",\n \"status\": 200,\n \"codes\": [\"a\", \"b\"],\n \"inner\": {\n \"foo\": \"bar\",\n \"bar\": \"foo\"\n }\n}\n```\n\nThe rules apply with the previous example as the event.\n\n@@@div { #filtering }\n \n@@@\n\n\n\n**Projection** is a list of fields to export. In the case of an empty list, all the fields of an event will be exported. Otherwise, **only** the listed fields will be exported.\n\nLet's say we only want to keep Otoroshi alerts and only the type, timestamp and id of each exported event\n```json\n{\n \"@type\": true,\n \"@timestamp\": true,\n \"@id\": true\n}\n```\n\nAnother possibility is to **rename** the exported field. 
The value stays the same but the exported field will have a different name.\n\nLet's say we want to rename every `@id` field to `unique-id`\n\n```json\n{ \"@id\": \"unique-id\" }\n```\n\nThe last possibility is to retrieve a sub-object of an event. Let's say we want to export only the name of the user of each event.\n\n```json\n{ \"user\": { \"name\": true } }\n```\n\nYou can also expand the entire source object with \n\n```json\n{\n \"$spread\": true\n}\n```\n\nand then remove the fields you don't want with \n\n```json\n{\n \"fieldthatidontwant\": false\n}\n```\n\n## Elastic\n\nWith this kind of exporter, every matching event will be sent to an elastic cluster (in batch). It is quite useful and can be used in combination with [elastic read in global config](./global-config.html#analytics-elastic-dashboard-datasource-read-)\n\n* `Cluster URI`: Elastic cluster URI\n* `Index`: Elastic index \n* `Type`: Event type (not needed for elasticsearch above 6.x)\n* `User`: Elastic User (optional)\n* `Password`: Elastic password (optional)\n* `Version`: Elastic version (optional, if none provided it will be fetched from cluster)\n* `Apply template`: Automatically apply index template\n* `Check Connection`: Button to test the configuration. It will display a modal with a connection checklist; if the connection is successful, it will display the detected Elasticsearch version and the index used\n* `Manually apply index template`: pushes the index template by calling the Elasticsearch API\n* `Show index template`: retrieves the current index template present in elasticsearch\n* `Client side temporal indexes handling`: When enabled, Otoroshi will manage the creation of indexes. 
When it's disabled, Otoroshi will always push to the same index\n* `One index per`: When the previous field is enabled, you can choose the interval of time between the creation of a new index in elasticsearch \n* `Custom TLS Settings`: Enable the TLS configuration for the communication with Elasticsearch\n * `TLS loose`: if enabled, will block all untrustful SSL configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with elasticsearch\n * `Trusted certificates`: list of trusted certificates received from elasticsearch\n\n## Webhook \n\nWith this kind of exporter, every matching event will be sent to a URL (in batch) using a POST method and a JSON array body.\n\n* `Alerts hook URL`: the URL events are POSTed to\n* `Hook Headers`: headers added to the POST request\n* `Custom TLS Settings`: Enable the TLS configuration for the communication with the webhook endpoint\n * `TLS loose`: if enabled, will block all untrustful SSL configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with the webhook endpoint\n * `Trusted certificates`: list of trusted certificates received from the webhook endpoint\n\n\n## Pulsar \n\nWith this kind of exporter, every matching event will be sent to an [Apache Pulsar topic](https://pulsar.apache.org/)\n\n\n* `Pulsar URI`: URI of the pulsar server\n* `Custom TLS Settings`: Enable the TLS configuration for the communication with the Pulsar server\n * `TLS loose`: if enabled, will block all untrustful SSL configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with the Pulsar server\n * `Trusted certificates`: list of trusted certificates received from the Pulsar server\n* `Pulsar tenant`: tenant on the pulsar server\n* `Pulsar namespace`: namespace on the pulsar server\n* `Pulsar topic`: 
topic on the pulsar server\n\n## Kafka \n\nWith this kind of exporter, every matching event will be sent to an [Apache Kafka topic](https://kafka.apache.org/). You can find a few @ref[tutorials](../how-to-s/communicate-with-kafka.md) about the connection between Otoroshi and Kafka based on docker images.\n\n* `Kafka Servers`: the list of servers to contact to connect the Kafka client with the Kafka cluster\n* `Kafka topic`: the topic on which Otoroshi alerts will be sent\n\nBy default, Kafka is installed with no authentication. Otoroshi supports the following authentication mechanisms and protocols for Kafka brokers.\n\n### SASL\n\nThe Simple Authentication and Security Layer (SASL) [RFC4422] is a method for adding authentication support to connection-based protocols.\n\n* `SASL username`: the client username \n* `SASL password`: the client password \n* `SASL Mechanism`: \n * `PLAIN`: SASL/PLAIN uses a simple username and password for authentication.\n * `SCRAM-SHA-256` and `SCRAM-SHA-512`: SASL/SCRAM uses usernames and passwords stored in ZooKeeper. 
Credentials are created during installation.\n\n### SSL \n\n* `Kafka keypass`: the keystore password if you use a keystore/truststore to connect to the Kafka cluster\n* `Kafka keystore path`: the keystore path on the server if you use a keystore/truststore to connect to the Kafka cluster\n* `Kafka truststore path`: the truststore path on the server if you use a keystore/truststore to connect to the Kafka cluster\n* `Custom TLS Settings`: enable the TLS configuration for the communication with the Kafka cluster\n * `TLS loose`: if enabled, will block all untrustful SSL configs\n * `TrustAll`: allows any server certificates even the self-signed ones\n * `Client certificates`: list of client certificates used to communicate with the Kafka cluster\n * `Trusted certificates`: list of trusted certificates received from the Kafka cluster\n\n### SASL + SSL\n\nThis mechanism combines the SSL configuration and the SASL configuration.\n\n## Mailer \n\nWith this kind of exporter, every matching event will be sent in batch as an email (using one of the following email providers)\n\nOtoroshi supports 5 exporters of email type.\n\n### Console\n\nNothing to add. The events will be written to the standard output.\n\n### Generic\n\n* `Mailer url`: URL used to push events\n* `Headers`: headers added to the push requests\n* `Email addresses`: recipients of the emails\n\n### Mailgun\n\n* `EU`: is this the EU server? 
If enabled, *https://api.eu.mailgun.net/* will be used; otherwise, the US URL will be used: *https://api.mailgun.net/*\n* `Mailgun api key`: API key of the mailgun account\n* `Mailgun domain`: domain name of the mailgun account\n* `Email addresses`: recipients of the emails\n\n### Mailjet\n\n* `Public api key`: public key of the mailjet account\n* `Private api key`: private key of the mailjet account\n* `Email addresses`: recipients of the emails\n\n### Sendgrid\n\n* `Sendgrid api key`: api key of the sendgrid account\n* `Email addresses`: recipients of the emails\n\n## File \n\n* `File path`: path where the logs will be written \n* `Max file size`: when this size is reached, Otoroshi will create a new file postfixed by the current timestamp\n\n## GoReplay file\n\nWith this kind of exporter, every matching event will be sent to a `.gor` file compatible with [GoReplay](https://goreplay.org/). \n\n@@@ warning\nThis exporter will only be able to catch `TrafficCaptureEvent`. Those events are created when a route (or the global config) of the @ref:[new proxy engine](../topics/engine.md) is set up to capture traffic using the `capture` flag.\n@@@\n\n* `File path`: path where the logs will be written \n* `Max file size`: when this size is reached, Otoroshi will create a new file postfixed by the current timestamp\n* `Capture requests`: capture http requests in the `.gor` file\n* `Capture responses`: capture http responses in the `.gor` file\n\n## Console \n\nNothing to add. The events will be written to the standard output.\n\n## Custom \n\nThis type of exporter lets you write your own exporter with your own rules. 
To create an exporter, navigate to the plugins page and create a new item of type exporter.\n\nOnce done, the exporter will be visible in this list.\n\n* `Exporter config.`: the configuration of the custom exporter.\n\n## Metrics \n\nThis plugin is useful to rewrite the metric labels exposed on the `/metrics` endpoint.\n\n* `Labels`: list of metric labels. Each pair contains an existing field name and the new name."},{"name":"global-config.md","id":"/entities/global-config.md","url":"/entities/global-config.html","title":"Global config","content":"# Global config\n\nThe global config, named `Danger zone` in Otoroshi, is the place to configure Otoroshi globally. \n\n> Warning: In this page, the configuration is really sensitive and affects the global behaviour of Otoroshi.\n\n\n### Misc. Settings\n\n\n* `Maintenance mode` : Puts every single service in maintenance mode. If a user calls a service, the maintenance page will be displayed\n* `No OAuth login for BackOffice` : Forces admins to login only with user/password or user/password/U2F device\n* `API Read Only`: Freezes the Otoroshi datastore in read-only mode. Only people with access to the actual underlying datastore will be able to disable this.\n* `Auto link default` : When no group is specified on a service, it will be assigned to the default one\n* `Use circuit breakers` : Use circuit breakers on all services\n* `Use new http client as the default Http client` : All http calls will use the new http client by default\n* `Enable live metrics` : Enable live metrics in the Otoroshi cluster. Performs a lot of writes in the datastore\n* `Digitus medius` : Use middle finger emoji as a response character for endless HTTP responses (see [IP address filtering settings](#ip-address-filtering-settings)).\n* `Limit conc. req.` : Limits the number of concurrent requests processed by Otoroshi to a certain amount. 
Highly recommended for resilience\n* `Use X-Forwarded-* headers for routing` : When evaluating the routing of a request, X-Forwarded-* headers will be used if present\n* `Max conc. req.` : Maximum number of concurrent requests processed by otoroshi.\n* `Max HTTP/1.0 resp. size` : Maximum size of an HTTP/1.0 response in bytes. After this limit, the response will be cut and sent as is. The best value here should satisfy (maxConcurrentRequests * maxHttp10ResponseSize) < process.memory for the worst-case scenario.\n* `Max local events` : Maximum number of events stored.\n* `Lines` : *deprecated* \n\n### IP address filtering settings\n\n* `IP allowed list`: the only IP addresses that will be able to access Otoroshi exposed services\n* `IP blocklist`: IP addresses that will be refused access to Otoroshi exposed services\n* `Endless HTTP Responses`: IP addresses for which each request will return around 128 Gb of 0s\n\n\n### Quotas settings\n\n* `Global throttling`: The max. number of requests allowed per second globally on Otoroshi\n* `Throttling per IP`: The max. number of requests allowed per second per IP address globally on Otoroshi\n\n### Analytics: Elastic dashboard datasource (read)\n\n* `Cluster URI`: Elastic cluster URI\n* `Index`: Elastic index \n* `Type`: Event type (not needed for elasticsearch above 6.x)\n* `User`: Elastic User (optional)\n* `Password`: Elastic password (optional)\n* `Version`: Elastic version (optional, if none provided it will be fetched from cluster)\n* `Apply template`: Automatically apply index template\n* `Check Connection`: Button to test the configuration. 
It will display a modal with a connection checklist; if the connection is successful, it will display the detected Elasticsearch version and the index used\n* `Manually apply index template`: pushes the index template by calling the Elasticsearch API\n* `Show index template`: retrieves the current index template present in elasticsearch\n* `Client side temporal indexes handling`: When enabled, Otoroshi will manage the creation of indexes over time. When it's disabled, Otoroshi will always push to the same index\n* `One index per`: When the previous field is enabled, you can choose the interval of time between the creation of a new index in elasticsearch \n* `Custom TLS Settings`: Enable the TLS configuration for the communication with Elasticsearch\n* `TLS loose`: if enabled, will block all untrustful SSL configs\n* `TrustAll`: allows any server certificates even the self-signed ones\n* `Client certificates`: list of client certificates used to communicate with elasticsearch\n* `Trusted certificates`: list of trusted certificates received from elasticsearch\n\n\n### Statsd settings\n\n* `Datadog agent`: The StatsD agent is a Datadog agent\n* `StatsD agent host`: The host on which the StatsD agent is listening\n* `StatsD agent port`: The port on which the StatsD agent is listening (default is 8125)\n\n\n### Backoffice auth. settings\n\n* `Backoffice auth. config`: the authentication module used in front of Otoroshi. 
It will be used to connect to Otoroshi on the login page\n\n### Let's encrypt settings\n\n* `Enabled`: when enabled, Otoroshi will be able to request certificates from Let's Encrypt, notably in the SSL/TLS Certificates page \n* `Server URL`: ACME endpoint of Let's Encrypt \n* `Email addresses`: (optional) list of email addresses used to order the certificates \n* `Contact URLs`: (optional) list of contact URLs used to order the certificates \n* `Public Key`: used to request a certificate from Let's Encrypt, generated by Otoroshi \n* `Private Key`: used to request a certificate from Let's Encrypt, generated by Otoroshi \n\n\n### CleverCloud settings\n\nOnce configured, you can register a Clever Cloud app of your organization directly as an Otoroshi service.\n\n* `CleverCloud consumer key`: consumer key of your clever cloud OAuth 1.0 app\n* `CleverCloud consumer secret`: consumer secret of your clever cloud OAuth 1.0 app\n* `OAuth Token`: oauth token of your clever cloud OAuth 1.0 app\n* `OAuth Secret`: oauth token secret of your clever cloud OAuth 1.0 app \n* `CleverCloud orga. Id`: id of your clever cloud organization\n\n### Global scripts\n\nGlobal scripts will be deprecated soon, please use global plugins instead (see the next section)!\n\n### Global plugins\n\n* `Enabled`: enable the use of global plugins\n* `Plugins on new Otoroshi engine`: list of plugins used by the new Otoroshi engine\n* `Plugins on old Otoroshi engine`: list of plugins used by the old Otoroshi engine\n* `Plugin configuration`: the overloaded configuration of plugins\n\n### Proxies\n\nIn this section, you can add a list of proxies for:\n\n* Proxy for alert emails (mailgun)\n* Proxy for alert webhooks\n* Proxy for Clever-Cloud API access\n* Proxy for services access\n* Proxy for auth. 
access (OAuth, OIDC)\n* Proxy for client validators\n* Proxy for JWKS access\n* Proxy for elastic access\n\nEach proxy has the following fields: \n\n* `Proxy host`: host of the proxy\n* `Proxy port`: port of the proxy\n* `Proxy principal`: user of the proxy\n* `Proxy password`: password of the proxy\n* `Non proxy host`: hosts that will be reached directly, without going through the proxy\n\n### Quotas alerting settings\n\n* `Enable quotas exceeding alerts`: When apikey quotas are almost exceeded, an alert will be sent \n* `Daily quotas threshold`: The percentage of daily calls before sending alerts\n* `Monthly quotas threshold`: The percentage of monthly calls before sending alerts\n\n### User-Agent extraction settings\n\n* `User-Agent extraction`: Allow user-agent details extraction. Can have an impact on consumed memory. \n\n### Geolocation extraction settings\n\nExtract a geolocation for each call to Otoroshi.\n\n### Tls Settings\n\n* `Use random cert.`: Use the first available cert when none matches the current domain\n* `Default domain`: When the SNI domain cannot be found, this one will be used to find the matching certificate \n* `Trust JDK CAs (server)`: Trust JDK CAs. The CAs from the JDK CA bundle will be proposed in the certificate request when performing TLS handshake \n* `Trust JDK CAs (trust)`: Trust JDK CAs. The CAs from the JDK CA bundle will be used as trusted CAs when calling HTTPS resources \n* `Trusted CAs (server)`: Select the trusted CAs you want for TLS termination. 
Only those CAs will be proposed in the certificate request when performing the TLS handshake \n\n\n### Auto Generate Certificates\n\n* `Enabled`: Generate certificates on the fly when they don't exist\n* `Reply Nicely`: When receiving a request for a domain name that is not allowed, accept the connection and display a nice error message \n* `CA`: the CA certificate used to generate missing certificates\n* `Allowed domains`: Allowed domains\n* `Not allowed domains`: Not allowed domains\n \n\n### Global metadata\n\n* `Tags`: tags attached to the global config\n* `Metadata`: metadata attached to the global config\n\n### Actions at the bottom of the page\n\n* `Recover from a full export file`: Load the global configuration from a previous export\n* `Full export`: Export with all created entities\n* `Full export (ndjson)`: Export the full state of the database in ndjson format\n* `JSON`: Get the global config in JSON format \n* `YAML`: Get the global config in YAML format \n* `Enable Panic Mode`: Log out all users from the UI and prevent any changes to the database by setting the Otoroshi admin api to read-only. The only way to exit this mode is to disable it directly in the database. "},{"name":"index.md","id":"/entities/index.md","url":"/entities/index.html","title":"","content":"\n# Main entities\n\nIn this section, we will go through all the main Otoroshi entities. Otoroshi entities are the main items stored in the otoroshi datastore that will be used to configure routing, authentication, etc.\n\nAny entity has the following properties\n\n* `location` or `_loc`: the location of the entity (organization and team)\n* `id`: the id of the entity (except for apikeys)\n* `name`: the name of the entity\n* `description`: the description of the entity (optional)\n* `tags`: free tags that you can put on any entity to help you manage it, automate it, etc.\n* `metadata`: free key/value tuples that you can put on any entity to help you manage it, automate it, etc.\n\n@@@div { .plugin .entities }\n\n
\nRoutes\nProxy your applications with routes\n
\n@ref:[View](./routes.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nBackends\nReuse route targets\n
\n@ref:[View](./backends.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nApikeys\nAdd security to your services using apikeys\n
\n@ref:[View](./apikeys.md)\n@@@\n\n\n@@@div { .plugin .entities }\n\n
\nOrganizations\nThis is the highest level for grouping resources.\n
\n@ref:[View](./organizations.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nTeams\nOrganize your resources by teams\n
\n@ref:[View](./teams.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nService groups\nGroup your services\n
\n@ref:[View](./service-groups.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nJWT verifiers\nVerify and forge tokens for your services.\n
\n@ref:[View](./jwt-verifiers.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nGlobal Config\nThe danger zone of Otoroshi\n
\n@ref:[View](./global-config.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nTCP services\n\n
\n@ref:[View](./tcp-services.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nAuth. modules\nSecure the Otoroshi UI and your web apps\n
\n@ref:[View](./auth-modules.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nCertificates\nAdd secure communication between Otoroshi, clients and services\n
\n@ref:[View](./certificates.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nData exporters\nExport alerts, events and logs\n
\n@ref:[View](./data-exporters.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nScripts\n\n
\n@ref:[View](./scripts.md)\n@@@\n\n@@@div { .plugin .entities }\n\n
\nService descriptors\nProxy your applications with service descriptors\n
\n@ref:[View](./service-descriptors.md)\n@@@\n\n@@@ index\n\n* [Routes](./routes.md)\n* [Backends](./backends.md)\n* [Organizations](./organizations.md)\n* [Teams](./teams.md)\n* [Global Config](./global-config.md)\n* [Apikeys](./apikeys.md)\n* [Service groups](./service-groups.md)\n* [Auth. modules](./auth-modules.md)\n* [Certificates](./certificates.md)\n* [JWT verifiers](./jwt-verifiers.md)\n* [Data exporters](./data-exporters.md)\n* [Scripts](./scripts.md)\n* [TCP services](./tcp-services.md)\n* [Service descriptors](./service-descriptors.md)\n\n@@@\n"},{"name":"jwt-verifiers.md","id":"/entities/jwt-verifiers.md","url":"/entities/jwt-verifiers.html","title":"JWT verifiers","content":"# JWT verifiers\n\nSometimes, it can be pretty useful to verify JWT tokens coming from other providers on some services. Otoroshi provides a tool to do that per service.\n\n* `Name`: name of the JWT verifier\n* `Description`: a simple description\n* `Strict`: if not strict, requests without a JWT token will be allowed to pass. 
This option is helpful when you want to force the presence of tokens in each request on a specific service \n* `Tags`: list of tags associated to the module\n* `Metadata`: list of metadata associated to the module\n\nEach JWT verifier is configurable in three steps: the `location` where to find the token in incoming requests, the `validation` step to check the signature and the presence of claims in tokens, and the last step, named `Strategy`.\n\n## Token location\n\nAn incoming token can be found in three places.\n\n#### In query string\n\n* `Source`: JWT token location in query string\n* `Query param name`: the name of the query param where the JWT is located\n\n#### In a header\n\n* `Source`: JWT token location in a header\n* `Header name`: the name of the header where the JWT is located\n* `Remove value`: when the token is read, this value will be removed from the header value (example: if the header value is *Bearer xxxx*, the *remove value* could be *Bearer *; don't forget the space at the end of the string)\n\n#### In a cookie\n\n* `Source`: JWT token location in a cookie\n* `Cookie name`: the name of the cookie where the JWT is located\n\n## Token validation\n\nThis section is used to verify the token extracted from the specified location.\n\n* `Algo.`: the kind of algorithm you want to use to verify/sign your JWT token\n\nAccording to the selected algorithm, the validation form will change.\n\n#### Hmac + SHA\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Hmac secret`: used to verify the token\n* `Base64 encoded secret`: if enabled, the extracted token will be base64 decoded before it is verified\n\n#### RSASSA-PKCS1 + SHA\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Public key`: the RSA public key\n* `Private key`: the RSA private key that can be empty if not used for JWT token signing\n\n#### ECDSA + SHA\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `Public key`: the ECDSA public key\n* `Private key`: the ECDSA private key that 
can be empty if not used for JWT token signing\n\n#### RSASSA-PKCS1 + SHA from KeyPair\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `KeyPair`: used to sign/verify the token. The displayed list represents the key pairs registered in the Certificates page\n \n#### ECDSA + SHA from KeyPair\n* `SHA Size`: Word size for the SHA-2 hash function used\n* `KeyPair`: used to sign/verify the token. The displayed list represents the key pairs registered in the Certificates page\n\n#### Otoroshi KeyPair from token kid (only for verification)\n* `Use only exposed keypairs`: if enabled, Otoroshi will only use the key pairs that are exposed on the well-known endpoint. If disabled, it will search through any registered key pairs.\n\n#### JWK Set (only for verification)\n\n* `URL`: the JWK set URL where the public keys are exposed\n* `HTTP call timeout`: timeout for fetching the keyset\n* `TTL`: cache TTL for the keyset\n* `HTTP Headers`: the HTTP headers passed\n* `Key type`: type of the key searched in the jwks\n\n*TLS settings for JWKS fetching*\n\n* `Custom TLS Settings`: TLS settings for JWKS fetching\n* `TLS loose`: if enabled, will block all untrustful SSL configs\n* `Trust all`: allows any server certificates even the self-signed ones\n* `Client certificates`: list of client certificates used to communicate with the JWKS server\n* `Trusted certificates`: list of trusted certificates received from the JWKS server\n\n*Proxy*\n\n* `Proxy host`: host of the proxy behind the identity provider\n* `Proxy port`: port of the proxy behind the identity provider\n* `Proxy principal`: user of the proxy \n* `Proxy password`: password of the proxy\n\n## Strategy\n\nThe first step is to select the verifier strategy. Otoroshi supports 4 types of JWT verifiers:\n\n* `Default JWT token` will add a token if not present. \n* `Verify JWT token` will only verify the token signature and field values if provided. \n* `Verify and re-sign JWT token` will verify the token and will re-sign the JWT token with the provided algo. settings. 
\n* `Verify, re-sign and transform JWT token` will verify the token, re-sign it and transform it.\n\nAll verifiers have the following properties: \n\n* `Verify token fields`: when the JWT token is checked, each field specified here will be verified with the provided value\n* `Verify token array value`: when the JWT token is checked, each field specified here will be checked to ensure the provided value is contained in the array\n\n\n#### Default JWT token\n\n* `Strict`: if a token is already present, the call will fail\n* `Default value`: list of claims of the generated token. These fields support raw values or language expressions. See the documentation about @ref:[the expression language](../topics/expression-language.md)\n\n#### Verify JWT token\n\nNo specific values needed. This kind of verifier needs only the two fields `Verify token fields` and `Verify token array value`.\n\n#### Verify and re-sign JWT token\n\nWhen `Verify and re-sign JWT token` is chosen, the `Re-sign settings` appear. All fields of `Re-sign settings` are the same as in the `Token validation` section. 
The only difference is that the values are used to sign the new token and not to validate the incoming token.\n\n\n#### Verify, re-sign and transform JWT token\n\nWhen `Verify, re-sign and transform JWT token` is chosen, the `Re-sign settings` and `Transformation settings` appear.\n\nThe `Re-sign settings` are used to sign the new token and have the same fields as the `Token validation` section.\n\nFor the `Transformation settings` section, the fields are:\n\n* `Token location`: the location where to find/set the JWT token\n* `Header name`: the name of the header where the JWT is located\n* `Prepend value`: a value prepended to the token in the header value\n* `Rename token fields`: when the JWT token is transformed, it is possible to change a field name, just specify the origin and target field names\n* `Set token fields`: when the JWT token is transformed, it is possible to add new fields with static values, just specify the field name and value\n* `Remove token fields`: when the JWT token is transformed, it is possible to remove fields"},{"name":"organizations.md","id":"/entities/organizations.md","url":"/entities/organizations.html","title":"Organizations","content":"# Organizations\n\nThe resources of Otoroshi are grouped by `Organization`. This is the highest level for grouping resources.\n\nAn organization has a unique `id`, a `name` and a `description`. 
Like all Otoroshi resources, an organization has a list of associated tags and metadata.\n\nFor example, you can use organizations:\n\n* to separate resources by services or entities in your enterprise\n* to split internal and external usage of the resources (it's useful when you have a list of services deployed in your company and another one deployed by your partners)\n\n@@@ div { .centered-img }\n\n@@@\n\n## Access to the list of organizations\n\nTo visualize and edit the list of organizations, you can navigate to your instance on the `https://otoroshi.xxxxxx/bo/dashboard/organizations` route or click on the cog icon and select the organizations button.\n\nOnce on the page, you can create a new item, edit an existing organization or delete an existing one.\n\n> When an organization is deleted, the associated resources are not deleted. Instead, the organization and team of the associated resources are left empty.\n\n## Entities location\n\nAny otoroshi entity has a location property (`_loc` when serialized to json) explaining where and by whom the entity can be seen. 
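\n\nFor example, a location with both fields filled could look like the following sketch (the ids here are purely illustrative; `teams` lists the teams allowed to see the entity):\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"tenant-1\", // the organization owning the entity\n \"teams\": [\"team-backend\"] // the teams that can see the entity\n }\n}\n```\n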
\n\nAn entity can be part of one organization (`tenant` in the json document)\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"tenant-1\",\n \"teams\": ...\n }\n ...\n}\n```\n\nor all organizations\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"*\",\n \"teams\": ...\n }\n ...\n}\n```\n\n"},{"name":"routes.md","id":"/entities/routes.md","url":"/entities/routes.html","title":"Routes","content":"# Routes\n\nA route is a unique routing rule based on hostname, path, method and headers that will execute a bunch of plugins and eventually forward the request to the backend application.\n\n## UI page\n\nYou can find all routes [here](http://otoroshi.oto.tools:8080/bo/dashboard/routes)\n\n## Global Properties\n\n* `location`: the location of the entity\n* `id`: the id of the route\n* `name`: the name of the route\n* `description`: the description of the route\n* `tags`: the tags of the route. can be useful for api automation\n* `metadata`: the metadata of the route. can be useful for api automation. There are a few reserved metadata used by otoroshi that can be found @ref[below](./routes.md#reserved-metadata)\n* `enabled`: is the route enabled? if not, the router will not consider this route\n* `debugFlow`: the debug flag. If enabled, the execution report for this route will contain all input/output values through steps of the proxy engine. For more information, check the @ref[engine documentation](../topics/engine.md#reporting)\n* `capture`: if enabled, otoroshi will generate events containing the whole content of each request. Use with caution! For more information, check the @ref[engine documentation](../topics/engine.md#http-traffic-capture)\n* `exportReporting`: if enabled, execution reports of the proxy engine will be generated for each request. Those reports are exportable using @ref[data exporters](./data-exporters.md). For more information, check the @ref[engine documentation](../topics/engine.md#reporting)\n* `groups`: each route is attached to a group. 
A group can have one or more services/routes. Each API key is linked to groups/routes/services and allows access to every entity in those groups.\n\n### Reserved metadata\n\nSome metadata are reserved for otoroshi usage. Here is the list of reserved metadata:\n\n* `otoroshi-core-user-facing`: is this a user facing app for the snow monkey\n* `otoroshi-core-use-akka-http-client`: use the pure akka http client\n* `otoroshi-core-use-netty-http-client`: use the pure netty http client\n* `otoroshi-core-use-akka-http-ws-client`: use the modern websocket client\n* `otoroshi-core-issue-lets-encrypt-certificate`: enable let's encrypt certificate issuance for this route. true or false\n* `otoroshi-core-issue-certificate`: enable certificate issuance for this route. true or false\n* `otoroshi-core-issue-certificate-ca`: the id of the CA cert used to generate the certificate for this route\n* `otoroshi-core-openapi-url`: the openapi url for this route\n* `otoroshi-core-env`: the env for this route. here for legacy reasons\n* `otoroshi-deployment-providers`: in the case of relay routing, the providers for this route\n* `otoroshi-deployment-regions`: in the case of relay routing, the network regions for this route\n* `otoroshi-deployment-zones`: in the case of relay routing, the network zones for this route \n* `otoroshi-deployment-dcs`: in the case of relay routing, the datacenters for this route \n* `otoroshi-deployment-racks`: in the case of relay routing, the racks for this route \n\n## Frontend configuration\n\n* `frontend`: the frontend of the route. It defines how the otoroshi router will match this route. A frontend has the following shape. 
\n\n```javascript\n{\n \"domains\": [ // the matched domains and paths\n \"new-route.oto.tools/path\" // here you can use wildcards in domain and path, and you can also use named path params\n ],\n \"strip_path\": true, // is the matched path stripped in the forwarded request\n \"exact\": false, // perform exact matching on the path. if not, it will be matched on /path*\n \"headers\": {}, // the matched http headers. if none provided, any header will be matched\n \"query\": {}, // the matched http query params. if none provided, any query params will be matched\n \"methods\": [] // the matched http methods. if none provided, any method will be matched\n}\n```\n\nFor more information about routing, check the @ref[engine documentation](../topics/engine.md#routing)\n\n## Backend configuration\n\n* `backend`: a backend to forward requests to. For more information, go to the @ref[backend documentation](./backends.md)\n* `backendRef`: a reference to an existing backend id\n\n## Plugins\n\nThe list of plugins used on this route. Each plugin definition has the following shape:\n\n```javascript\n{\n \"enabled\": false, // is the plugin enabled\n \"debug\": false, // is debug enabled for this specific plugin\n \"plugin\": \"cp:otoroshi.next.plugins.Redirection\", // the id of the plugin\n \"include\": [], // included paths. if none, all paths are included\n \"exclude\": [], // excluded paths. if none, no paths are excluded\n \"config\": { // the configuration of the plugin\n \"code\": 303,\n \"to\": \"https://www.otoroshi.io\"\n },\n \"plugin_index\": { // the position of the plugin. 
if none provided, otoroshi will use the order in the plugins array\n \"pre_route\": 0\n }\n}\n```\n\nFor more information about the available plugins, go @ref[here](../plugins/built-in-plugins.md)\n\n\n"},{"name":"scripts.md","id":"/entities/scripts.md","url":"/entities/scripts.html","title":"Scripts","content":"# Scripts\n\nScripts are a way to create plugins for otoroshi without deploying them as jar files. With scripts, you just have to store the scala code of your plugins inside the otoroshi datastore and otoroshi will compile and deploy them at startup. You can find all your scripts in the UI at `cog icon / Plugins`. You can find all the documentation about plugins @ref:[here](../plugins/index.md)\n\n@@@ warning\nThe compilation of your plugins can be pretty long and resource consuming. As the compilation happens during the otoroshi boot sequence, your instance will be blocked until all plugins have compiled. This behavior can be disabled. If so, the plugins will not work until they have been compiled. Any service using a plugin that is not compiled yet will fail\n@@@\n\nLike any entity, a script has the following properties\n\n* `id`\n* `plugin name`\n* `plugin description`\n* `tags`\n* `metadata`\n\nAnd you also have\n\n* `type`: the kind of plugin you are building with this script\n* `plugin code`: the code for your plugin\n\n## Compile\n\nYou can use the compile button to check if the code you wrote in `plugin code` is valid. It will automatically save your script and try to compile it. As mentioned earlier, script compilation is quite resource intensive. It will affect your CPU load and your memory consumption. 
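\n\nAs an illustrative sketch (the heap values here are assumptions, not an official recommendation), you can give the JVM more heap headroom when script compilation is enabled:\n\n```sh\n# illustrative heap sizing for an instance that compiles plugins at boot\njava -Xms2g -Xmx4g -jar otoroshi.jar\n```\n\n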
Don't forget to adjust your VM settings accordingly.\n"},{"name":"service-descriptors.md","id":"/entities/service-descriptors.md","url":"/entities/service-descriptors.html","title":"Service descriptors","content":"# Service descriptors\n\nA service, or service descriptor, lets you declare how to proxy a call from a domain name to another domain name (or multiple domain names). \n\n@@@ div { .centered-img }\n\n@@@\n\nLet’s say you have an API exposed on http://192.168.0.42 and you want to expose it on https://my.api.foo. Otoroshi will proxy all calls to https://my.api.foo and forward them to http://192.168.0.42. While doing that, it will also log everything, control accesses, etc.\n\n\n* `Id`: a unique random string to identify your service\n* `Groups`: each service descriptor is attached to a group. A group can have one or more services. Each API key is linked to a group and allows access to every service in the group.\n* `Create a new group`: you can create a new group to host this descriptor\n* `Create dedicated group`: you can create a new group with an auto generated name to host this descriptor\n* `Name`: the name of your service. Only for debug and human readability purposes.\n* `Description`: the description of your service. Only for debug and human readability purposes.\n* `Service enabled`: activate or deactivate your service. Once disabled, users will get an error page saying the service does not exist.\n* `Read only mode`: authorize only GET, HEAD, OPTIONS calls on this service\n* `Maintenance mode`: display a maintenance page when a user tries to use the service\n* `Construction mode`: display a construction page when a user tries to use the service\n* `Log analytics`: log analytics events for this service on the servers\n* `Use new http client`: will use Akka Http Client for every request\n* `Detect apikey asap`: if the service is public and you provide an apikey, otoroshi will detect it and validate it. 
Of course this setting may impact performance because of useless apikey lookups.\n* `Send Otoroshi headers back`: when enabled, Otoroshi will send headers back to the consumer like request id, client latency, overhead, etc ...\n* `Override Host header`: when enabled, Otoroshi will automatically set the Host header to the corresponding target host\n* `Send X-Forwarded-* headers`: when enabled, Otoroshi will send X-Forwarded-* headers to the target\n* `Force HTTPS`: will force redirection to `https://` if not present\n* `Allow HTTP/1.0 requests`: when disabled, will return an error on any HTTP/1.0 request\n* `Use new WebSocket client`: will use the new websocket client for every websocket request\n* `TCP/UDP tunneling`: with this setting enabled, otoroshi will not proxy http requests anymore but will instead create a secured tunnel between a cli on your machine and otoroshi to proxy any tcp connection with all otoroshi security features enabled\n\n### Service exposition settings\n\n* `Exposed domain`: the domain used to expose your service. Should follow the pattern: `(http|https)://subdomain?.env?.domain.tld?/root?` or the regex `(http|https):\\/\\/(.*?)\\.?(.*?)\\.?(.*?)\\.?(.*)\\/?(.*)`\n* `Legacy domain`: use `domain`, `subdomain`, `env` and `matchingRoot` for routing in addition to hosts, or just use hosts.\n* `Strip path`: when matching, strip the matching prefix from the upstream request URL. Defaults to true\n* `Issue Let's Encrypt cert.`: automatically issue and renew let's encrypt certificates based on the domain name. Only if Let's Encrypt is enabled in the global config.\n* `Issue certificate`: automatically issue and renew a certificate based on the domain name\n* `Possible hostnames`: all the possible hostnames for your service\n* `Possible matching paths`: all the possible matching paths for your service\n\n### Redirection\n\n* `Redirection enabled`: enables the redirection. 
If enabled, a call to that service will redirect to the chosen URL\n* `Http redirection code`: type of redirection used\n* `Redirect to`: URL used to redirect the user when the service is called\n\n### Service targets\n\n* `Redirect to local`: if you work locally with Otoroshi, you may want to use that feature to redirect one specific service to a local host. For example, you can relocate https://foo.preprod.bar.com to http://localhost:8080 to make some tests\n* `Load balancing`: the load balancing algorithm used\n* `Targets`: the list of targets that Otoroshi will proxy and expose through the subdomain defined before. Otoroshi will do round-robin load balancing between all those targets with a circuit breaker mechanism to avoid cascading failures\n* `Targets root`: Otoroshi will append this root to any target chosen. If the specified root is `/api/foo`, then a request to https://yyyyyyy/bar will actually hit https://xxxxxxxxx/api/foo/bar\n\n### URL Patterns\n\n* `Make service a 'public ui'`: add a default pattern as public routes\n* `Make service a 'private api'`: add a default pattern as private routes\n* `Public patterns`: by default, every service is private and you'll need an API key to access it. However, if you want to expose a public UI, you can define one or more public patterns (regex) to allow access to anybody. 
For example if you want to allow anybody on any URL, just use `/.*`\n* `Private patterns`: if you define a public pattern that is a little bit too broad, you can make some public URLs private again\n\n### Restrictions\n\n* `Enabled`: enable restrictions\n* `Allow last`: Otoroshi will test forbidden and notFound paths before testing allowed paths\n* `Allowed`: allowed paths\n* `Forbidden`: forbidden paths\n* `Not Found`: not found paths\n\n### Otoroshi exchange protocol\n\n* `Enabled`: when enabled, Otoroshi will try to exchange headers with the backend service to ensure no one else can use the service from outside.\n* `Send challenge`: when disabled, Otoroshi will not check if the target service responds with the sent random value.\n* `Send info. token`: when enabled, Otoroshi adds an additional header containing information about the current call\n* `Challenge token version`: the version of the otoroshi exchange protocol challenge. This option will be set to V2 in the near future.\n* `Info. token version`: the version of the otoroshi exchange protocol info token. This option will be set to Latest in the near future.\n* `Tokens TTL`: the lifetime in seconds of the tokens (state and info)\n* `State token header name`: the name of the header containing the state token. If not specified, the value will be taken from the configuration (otoroshi.headers.comm.state)\n* `State token response header name`: the name of the header containing the state response token. If not specified, the value will be taken from the configuration (otoroshi.headers.comm.stateresp)\n* `Info token header name`: the name of the header containing the info token. If not specified, the value will be taken from the configuration (otoroshi.headers.comm.claim)\n* `Excluded patterns`: by default, when security is enabled, everything is secured. 
But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n* `Use same algo.`: when enabled, all JWT tokens in this section will use the same signing algorithm. If `use same algo.` is disabled, three more options will be displayed to select an algorithm for each step of the calls:\n * Otoroshi to backend\n * Backend to otoroshi\n * Info. token\n\n* `Algo.`: the kind of algorithm you want to use to verify/sign your JWT token\n* `SHA Size`: word size for the SHA-2 hash function used\n* `Hmac secret`: used to verify the token\n* `Base64 encoded secret`: if enabled, the extracted token will be base64 decoded before it is verified\n\n### Authentication\n\n* `Enforce user authentication`: when enabled, users will be allowed to use the service (UI) only if they are registered users of the chosen authentication module.\n* `Auth. config`: authentication module used to protect the service\n* `Create a new auth config.`: navigate to the authentication module creation page\n* `all auth config.`: navigate to the authentication pages\n\n* `Excluded patterns`: by default, when security is enabled, everything is secured. But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n* `Strict mode`: strict mode enabled\n\n### Api keys constraints\n\n* `From basic auth.`: you can pass the api key in the Authorization header (ie. from the 'Authorization: Basic xxx' header)\n* `Allow client id only usage`: you can pass the api key using the client id only (ie. from the Otoroshi-Token header)\n* `From custom headers`: you can pass the api key using custom headers (ie. the Otoroshi-Client-Id and Otoroshi-Client-Secret headers)\n* `From JWT token`: you can pass the api key using a JWT token (ie. from the 'Authorization: Bearer xxx' header)\n\n#### Basic auth. 
Api Key\n\n* `Custom header name`: the name of the header to get Authorization\n* `Custom query param name`: the name of the query param to get Authorization\n\n#### Client ID only Api Key\n\n* `Custom header name`: the name of the header to get the client id\n* `Custom query param name`: the name of the query param to get the client id\n\n#### Custom headers Api Key\n\n* `Custom client id header name`: the name of the header to get the client id\n* `Custom client secret header name`: the name of the header to get the client secret\n\n#### JWT Token Api Key\n\n* `Secret signed`: JWT can be signed by the apikey secret using an HMAC algo.\n* `Keypair signed`: JWT can be signed by an otoroshi managed keypair using an RSA/EC algo.\n* `Include Http request attrs.`: if enabled, you have to put the following fields in the JWT token corresponding to the current http call (httpPath, httpVerb, httpHost)\n* `Max accepted token lifetime`: the maximum number of seconds accepted as the token lifespan\n* `Custom header name`: the name of the header to get the jwt token\n* `Custom query param name`: the name of the query param to get the jwt token\n* `Custom cookie name`: the name of the cookie to get the jwt token\n\n### Routing constraints\n\n* `All Tags in`: must have all of the following tags\n* `No Tags in`: must not have any of the following tags\n* `One Tag in`: must have at least one of the following tags\n* `All Meta. in`: must have all of the following metadata entries\n* `No Meta. in`: must not have any of the following metadata entries\n* `One Meta. in`: must have at least one of the following metadata entries\n* `One Meta key in`: must have at least one of the following keys in metadata\n* `All Meta key in`: must have all of the following keys in metadata\n* `No Meta key in`: must not have any of the following keys in metadata\n\n### CORS support\n\n* `Enabled`: if enabled, CORS headers will be checked for each incoming request\n* `Allow credentials`: if enabled, the credentials will be sent. 
Credentials are cookies, authorization headers, or TLS client certificates.\n* `Allow origin`: indicates whether the response can be shared with requesting code from the given origin\n* `Max age`: response header that indicates how long the results of a preflight request (that is, the information contained in the Access-Control-Allow-Methods and Access-Control-Allow-Headers headers) can be cached.\n* `Expose headers`: response header that allows a server to indicate which response headers should be made available to scripts running in the browser, in response to a cross-origin request.\n* `Allow headers`: response header used in response to a preflight request which includes the Access-Control-Request-Headers to indicate which HTTP headers can be used during the actual request.\n* `Allow methods`: response header that specifies one or more methods allowed when accessing a resource in response to a preflight request.\n* `Excluded patterns`: by default, when cors is enabled, everything has cors. 
But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n\n#### Related documentation\n\n* @link[Access-Control-Allow-Credentials](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Credentials) { open=new }\n* @link[Access-Control-Allow-Origin](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin) { open=new }\n* @link[Access-Control-Max-Age](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Max-Age) { open=new }\n* @link[Access-Control-Allow-Methods](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Methods) { open=new }\n* @link[Access-Control-Allow-Headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers) { open=new }\n\n### JWT tokens verification\n\n* `Verifiers`: list of selected verifiers to apply on the service\n* `Enabled`: if enabled, Otoroshi will enable each verifier of the previous list\n* `Excluded patterns`: list of routes where the verifiers will not be applied\n\n### Pre Routing\n\nThis part has been deprecated and moved to the plugin section.\n\n### Access validation\n\nThis part has been deprecated and moved to the plugin section.\n\n### Gzip support\n\n* `Mimetypes allowed list`: gzip only the files matching a format in the list\n* `Mimetypes blocklist`: will not gzip files matching a format in the list. A possible way is to allow all formats by default by setting a `*` in the `Mimetypes allowed list` and to add the unwanted formats in this list.\n* `Compression level`: the compression level, where 9 gives maximum compression but at the slowest speed. 
The default compression level is 5 and is a good compromise between speed and compression ratio.\n* `Buffer size`: the buffer size used when chunking up a stream of bytes\n* `Chunk threshold`: if the content length of the response goes over the threshold, the response will be chunked\n* `Excluded patterns`: by default, when gzip is enabled, everything has gzip. But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n\n### Client settings\n\n* `Use circuit breaker`: use a circuit breaker to avoid cascading failures when calling chains of services. Highly recommended!\n* `Cache connections`: use a cache at the host connection level to avoid reconnection time\n* `Client attempts`: specify how many times the client will retry to fetch the result of the request after an error before giving up.\n* `Client call timeout`: specify how long each call should last at most in milliseconds.\n* `Client call and stream timeout`: specify how long each call should last at most in milliseconds for handling the request and streaming the response.\n* `Client connection timeout`: specify how long each connection should last at most in milliseconds.\n* `Client idle timeout`: specify how long each connection can stay in idle state at most in milliseconds.\n* `Client global timeout`: specify how long the global call (with retries) should last at most in milliseconds.\n* `C.breaker max errors`: specify how many errors can pass before opening the circuit breaker\n* `C.breaker retry delay`: specify the delay between two retries. 
On each retry, the delay is multiplied by the backoff factor\n* `C.breaker backoff factor`: specify the factor to multiply the delay by for each retry\n* `C.breaker window`: specify the sliding window time for the circuit breaker in milliseconds, after this time, the error count will be reset\n\n#### Custom timeout settings (list)\n\n* `Path`: the path on which the timeout will be active\n* `Client connection timeout`: specify how long each connection should last at most in milliseconds.\n* `Client idle timeout`: specify how long each connection can stay in idle state at most in milliseconds.\n* `Client call and stream timeout`: specify how long each call should last at most in milliseconds for handling the request and streaming the response.\n* `Call timeout`: specify how long each call should last at most in milliseconds.\n* `Client global timeout`: specify how long the global call (with retries) should last at most in milliseconds.\n\n#### Proxy settings\n\n* `Proxy host`: host of the proxy behind the identity provider\n* `Proxy port`: port of the proxy behind the identity provider\n* `Proxy principal`: user of the proxy\n* `Proxy password`: password of the proxy\n\n### HTTP Headers\n\n* `Additional Headers In`: specify headers that will be added to each client request (from Otoroshi to target). 
Useful to add authentication.\n* `Additional Headers Out`: specify headers that will be added to each client response (from Otoroshi to client).\n* `Missing only Headers In`: specify headers that will be added to each client request (from Otoroshi to target) if not in the original request.\n* `Missing only Headers Out`: specify headers that will be added to each client response (from Otoroshi to client) if not in the original response.\n* `Remove incoming headers`: remove headers in the client request (from client to Otoroshi).\n* `Remove outgoing headers`: remove headers in the client response (from Otoroshi to client).\n* `Security headers`:\n* `Utility headers`:\n* `Matching Headers`: specify headers that MUST be present on the client request to route it (pre routing). Useful to implement versioning.\n* `Headers verification`: verify that some headers have specific values (post routing)\n\n### Additional settings\n\n* `OpenAPI`: specify an open API descriptor. Useful to display the documentation\n* `Tags`: specify tags for the service\n* `Metadata`: specify metadata for the service. Useful for analytics\n* `IP allowed list`: IP addresses that can access the service\n* `IP blocklist`: IP addresses that cannot access the service\n\n### Canary mode\n\n* `Enabled`: Canary mode enabled\n* `Traffic split`: ratio of traffic that will be sent to canary targets. For instance, if traffic is at 0.2, for 10 requests, 2 requests will go to canary targets and 8 will go to regular targets.\n* `Targets`: the list of targets that Otoroshi will proxy and expose through the subdomain defined before. Otoroshi will do round-robin load balancing between all those targets with a circuit breaker mechanism to avoid cascading failures\n * `Target`:\n * `Targets root`: Otoroshi will append this root to any target chosen. 
If the specified root is '/api/foo', then a request to https://yyyyyyy/bar will actually hit https://xxxxxxxxx/api/foo/bar\n* `Campaign stats`:\n* `Use canary targets as standard targets`:\n\n### Healthcheck settings\n\n* `HealthCheck enabled`: to help failing fast, you can activate a healthcheck on a specific URL.\n* `HealthCheck url`: the URL to check. Should return an HTTP 200 response. You can also respond with an 'Opun-Health-Check-Logic-Test-Result' header set to the value of the 'Opun-Health-Check-Logic-Test' request header + 42 to make the healthcheck complete.\n\n### Fault injection\n\n* `User facing app.`: if the service is set as user facing, the Snow Monkey can be configured so it is not allowed to create outages on it.\n* `Chaos enabled`: activate or deactivate the chaos setting on this service descriptor.\n\n### Custom errors template\n\n* `40x template`: html template displayed when a 40x error occurs\n* `50x template`: html template displayed when a 50x error occurs\n* `Build mode template`: html template displayed when the build mode is enabled\n* `Maintenance mode template`: html template displayed when the maintenance mode is enabled\n* `Custom messages`: override error messages one by one\n\n### Request transformation\n\nThis part has been deprecated and moved to the plugin section.\n\n### Plugins\n\n* `Plugins`:\n \n * `Inject default config`: injects, if present, the default configuration of a selected plugin in the configuration object\n * `Documentation`: link to the documentation website of the plugin\n * `show/hide config. panel`: shows and hides the plugin panel which contains the plugin description and configuration\n* `Excluded patterns`: by default, when plugins are enabled, everything passes in. 
But sometimes you need to exclude something, so just add a regex matching the paths you want to exclude.\n* `Configuration`: the configuration of each enabled plugin, split by names and grouped in the same configuration object."},{"name":"service-groups.md","id":"/entities/service-groups.md","url":"/entities/service-groups.html","title":"Service groups","content":"# Service groups\n\nA service group is composed of a unique `id`, a `Group name`, a `Group description`, an `Organization` and a `Team`. As with all Otoroshi resources, a service group has a list of tags and metadata associated.\n\n@@@ div { .centered-img }\n\n@@@\n\nThe first, most intuitive usage of a service group is to group a list of services. \n\nWhen it's done, you can authorize an api key on a specific group. Instead of authorizing an api key for each service, you can group a list of services together and give authorization on the group (read the page on the api keys and the usage of the `Authorized on.` field).\n\n## Access to the list of service groups\n\nTo visualize and edit the list of groups, you can navigate to your instance on the `https://otoroshi.xxxxx/bo/dashboard/groups` route or click on the cog icon and select the Service groups button.\n\nOnce on the page, you can create a new item, edit an existing service group or delete an existing one.\n\n> When a service group is deleted, the associated resources are not deleted. 
On the other hand, the service group of the associated resources is left empty.\n\n"},{"name":"tcp-services.md","id":"/entities/tcp-services.md","url":"/entities/tcp-services.html","title":"TCP services","content":"# TCP services\n\nTCP services are a special kind of otoroshi service meant to proxy pure TCP connections (ssh, database, http, etc)\n\n## Global information\n\n* `Id`: generated unique identifier\n* `TCP service name`: the name of your TCP service\n* `Enabled`: enable and disable the service\n* `TCP service port`: the listening port\n* `TCP service interface`: the network interface the service listens on\n* `Tags`: list of tags associated to the service\n* `Metadata`: list of metadata associated to the service\n\n## TLS\n\nThis section controls the TLS exposition of the service\n\n* `TLS mode`\n * `Disabled`: no TLS\n * `PassThrough`: as the target exposes TLS, the call will pass through otoroshi and use the target TLS\n * `Enabled`: the service will be exposed using TLS and will choose its certificate based on SNI\n* `Client Auth.`\n * `None` no mTLS needed to pass\n * `Want` pass with or without mTLS\n * `Need` need mTLS to pass\n\n## Server Name Indication (SNI)\n\nThis section controls how SNI should be treated\n\n* `SNI routing enabled`: if enabled, the server will use the SNI hostname to determine which certificate to present to the client\n* `Forward to target if no SNI match`: if enabled, a call without any SNI match will be forwarded to the target\n* `Target host`: host of the target called if no SNI\n* `Target ip address`: ip of the target called if no SNI\n* `Target port`: port of the target called if no SNI\n* `TLS call`: encrypt the communication with TLS\n\n## Rules\n\nFor any listening TCP proxy, it is possible to route to multiple targets based on SNI or the extracted http host (if proxying http)\n\n* `Matching domain name`: regex used to filter the list of domains where the rule will be applied\n* `Target host`: host of the target\n* `Target ip address`: ip of the 
target\n* `Target port`: port of the target\n* `TLS call`: enable this flag if the target is exposed using TLS\n"},{"name":"teams.md","id":"/entities/teams.md","url":"/entities/teams.html","title":"Teams","content":"# Teams\n\nIn Otoroshi, all resources are attached to an `Organization` and a `Team`. \n\nA team is composed of a unique `id`, a `name`, a `description` and an `Organization`. As with all Otoroshi resources, a Team has a list of tags and metadata associated.\n\nA team has a unique organization and can be used on multiple resources (services, api keys, etc ...).\n\nA connected user on the Otoroshi UI has a list of teams and organizations associated. It can be helpful when you want to restrict the rights of a connected user.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Access to the list of teams\n\nTo visualize and edit the list of teams, you can navigate to your instance on the `https://otoroshi.xxxxxx/bo/dashboard/teams` route or click on the cog icon and select the teams button.\n\nOnce on the page, you can create a new item, edit an existing team or delete an existing one.\n\n> When a team is deleted, the associated resources are not deleted. On the other hand, the team of the associated resources is left empty.\n\n## Entities location\n\nAny otoroshi entity has a location property (`_loc` when serialized to json) explaining where and by whom the entity can be seen. 
\n\nAn entity can be part of multiple teams in an organization\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"tenant-1\",\n \"teams\": [\n \"team-1\",\n \"team-2\"\n ]\n }\n ...\n}\n```\n\nor all teams\n\n```javascript\n{\n \"_loc\": {\n \"tenant\": \"tenant-1\",\n \"teams\": [\n \"*\"\n ]\n }\n ...\n}\n```"},{"name":"features.md","id":"/features.md","url":"/features.html","title":"Features","content":"# Features\n\n**Traffic Management**\n\n* Can proxy any HTTP(s) service (apis, webapps, websocket, etc)\n* Can proxy any TCP service (app, database, etc)\n* Can proxy any GRPC service\n* Multiple load-balancing options: \n * RoundRobin\n * Random, Sticky\n * Ip address hash\n * Best Response Time\n* Distributed in-flight request limiting\n* Distributed rate limiting\n* End-to-end HTTP/1.1 support\n* End-to-end H2 support\n* End-to-end H3 support\n* Traffic mirroring\n* Traffic capture\n* Canary deployments\n* Relay routing \n* Tunnels for easier network exposition\n* Error templates\n\n**Routing**\n\n* Router can support tens of thousands of concurrent routes\n* Router supports path params extraction (can be regex validated)\n* Routing based on \n * method\n * hostname (exact, wildcard)\n * path (exact, wildcard)\n * header values (exact, regex, wildcard)\n * query param values (exact, regex, wildcard)\n* Support full url rewriting\n\n**Routes customization**\n\n* Dozens of built-in middlewares (policies/plugins) \n * circuit breakers\n * automatic retries\n * buffering\n * gzip\n * headers manipulation\n * cors\n * body transformation\n * graphql gateway\n * etc \n* Support middlewares compiled to WASM (using extism)\n* Support Open Policy Agent policies for traffic control\n* Write your own custom middlewares\n * in scala deployed as jar files\n * in whatever language you want that can be compiled to WASM\n\n**Routes Monitoring**\n\n* Active healthchecks\n* Route state for the last 90 days\n* Calls tracing using W3C trace context\n* Export alerts and events to 
external database\n * file\n * S3\n * elastic\n * pulsar\n * kafka\n * webhook\n * mailer\n * logger\n* Real-time traffic metrics\n* Real-time traffic metrics (Datadog, Prometheus, StatsD)\n\n**Services discovery**\n\n* through DNS\n* through Eureka 2\n* through Kubernetes API\n* through custom otoroshi protocol\n\n**API security**\n\n* Access management with apikeys and quotas\n* Automatic apikeys secrets rotation\n* HTTPS and TLS\n* End-to-end mTLS calls \n* Routing constraints\n* Routing restrictions\n* JWT tokens validation and manipulation\n * can support multiple validators on the same routes\n\n**Administration UI**\n\n* Manage and organize all resources\n* Secured user access with authentication modules\n* Audited user actions\n* Dynamic changes at runtime without full reload\n* Test your routes without any external tools\n\n**Webapp authentication and security**\n\n* OAuth2.0/2.1 authentication\n* OpenID Connect (OIDC) authentication\n* LDAP authentication\n* JWT authentication\n* OAuth 1.0a authentication\n* SAML V2 authentication\n* Internal users management\n* Secret vaults support\n * Environment variables\n * Hashicorp Vault\n * Azure key vault\n * AWS secret manager\n * Google secret manager\n * Kubernetes secrets\n * Izanami\n * Spring Cloud Config\n * Http\n * Local\n\n**Certificates management**\n\n* Dynamic TLS certificates store \n* Dynamic TLS termination\n* Internal PKI\n * generate self signed certificates/CAs\n * generate/sign certificates/CAs/subCAs\n * AIA\n * OCSP responder\n * import P12/certificate bundles\n* ACME / Let's Encrypt support\n* On-the-fly certificate generation based on a CA certificate without request loss\n* JWKS exposition for public keypairs\n* Default certificate\n* Customize mTLS trusted CAs in the TLS handshake\n\n**Clustering**\n\n* based on a control plane/data plane pattern\n* encrypted communication\n* backup capabilities to allow the data plane to start without the control plane being reachable, improving resilience\n* relay 
routing to forward traffic from one network zone to others\n* distributed web authentication across nodes\n\n**Performances and testing**\n\n* Chaos engineering\n* Horizontal scalability and clustering\n* Canary testing\n* Http client in UI\n* Request debugging\n* Traffic capture\n\n**Kubernetes integration**\n\n* Standard Ingress controller\n* Custom Ingress controller\n * Manage Otoroshi resources from Kubernetes\n* Validation of resources via webhook\n* Service Mesh for easy service-to-service communication (based on Kubernetes sidecars)\n\n**Organize**\n\n* multi-organizations\n* multi-teams\n* routes groups\n\n**Developers portal**\n\n* Using @link:[Daikoku](https://maif.github.io/daikoku/manual/index.html) { open=new }\n"},{"name":"getting-started.md","id":"/getting-started.md","url":"/getting-started.html","title":"Getting Started","content":"# Getting Started\n\n- [Protect your service with Otoroshi ApiKey](#protect-your-service-with-otoroshi-apikey)\n- [Secure your web app in 2 calls with an authentication](#secure-your-web-app-in-2-calls-with-an-authentication)\n\nDownload the latest jar of Otoroshi\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nOnce downloaded, run Otoroshi.\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nYes, that command is all it takes to start it up.\n\n## Protect your service with Otoroshi ApiKey\n\nCreate a new route, exposed on `http://myapi.oto.tools:8080`, which will forward all requests to the mirror `https://mirror.otoroshi.io`.\n\n```sh\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"myapi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n 
{\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"enabled\": true,\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"update_quotas\": true\n }\n }\n ]\n}\nEOF\n```\n\nNow that we have created our route, let’s see if our request reaches our upstream service. \nYou should receive an error from Otoroshi about a missing api key in our request.\n\n```sh\ncurl 'http://myapi.oto.tools:8080'\n```\n\nIt looks like we don’t have access to it. Create your first api key with a quota of 10 calls per day and per month.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/apikeys' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"clientId\": \"my-first-apikey-id\",\n \"clientSecret\": \"my-first-apikey-secret\",\n \"clientName\": \"my-first-apikey\",\n \"description\": \"my-first-apikey-description\",\n \"authorizedGroup\": \"default\",\n \"enabled\": true,\n \"throttlingQuota\": 10,\n \"dailyQuota\": 10,\n \"monthlyQuota\": 10\n}\nEOF\n```\n\nCall your api with the generated apikey.\n\n```sh\ncurl 'http://myapi.oto.tools:8080' -u my-first-apikey-id:my-first-apikey-secret\n```\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.otoroshi.io\",\n \"accept\": \"*/*\",\n \"user-agent\": \"curl/7.64.1\",\n \"authorization\": \"Basic bXktZmlyc3QtYXBpLWtleS1pZDpteS1maXJzdC1hcGkta2V5LXNlY3JldA==\",\n \"otoroshi-request-id\": \"1465298507974836306\",\n \"otoroshi-proxied-host\": \"myapi.oto.tools:8080\",\n \"otoroshi-request-timestamp\": \"2021-11-29T13:36:02.888+01:00\"\n },\n \"body\": \"\"\n}\n```\n\nCheck your remaining quotas\n\n```sh\ncurl 'http://myapi.oto.tools:8080' -u my-first-apikey-id:my-first-apikey-secret --include\n```\n\nThis should output the following Otoroshi headers\n\n```\nOtoroshi-Daily-Calls-Remaining: 6\nOtoroshi-Monthly-Calls-Remaining: 6\n```\n\nKeep calling the api and confirm that Otoroshi is sending you an apikey 
exceeding quota error\n\n\n```json\n{ \n \"Otoroshi-Error\": \"You performed too much requests\"\n}\n```\n\nWell done, you have secured your first api with the apikeys system with limited call quotas.\n\n## Secure your web app in 2 calls with an authentication\n\nCreate an in-memory authentication module, with one registered user, to protect your service.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/auths' \\\n-H \"Otoroshi-Client-Id: admin-api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"type\":\"basic\",\n \"id\":\"auth_mod_in_memory_auth\",\n \"name\":\"in-memory-auth\",\n \"desc\":\"in-memory-auth\",\n \"users\":[\n {\n \"name\":\"User Otoroshi\",\n \"password\":\"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\n \"email\":\"user@foo.bar\",\n \"metadata\":{\n \"username\":\"roger\"\n },\n \"tags\":[\"foo\"],\n \"webauthn\":null,\n \"rights\":[{\n \"tenant\":\"*:r\",\n \"teams\":[\"*:r\"]\n }]\n }\n ],\n \"sessionCookieValues\":{\n \"httpOnly\":true,\n \"secure\":false\n }\n}\nEOF\n```\n\nThen create a service secured by the previous authentication module, which proxies `google.fr` on `webapp.oto.tools`.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"webapp\",\n \"frontend\": {\n \"domains\": [\"webapp.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"google.fr\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"enabled\": true,\n \"config\": {\n \"pass_with_apikey\": false,\n \"auth_module\": null,\n \"module\": \"auth_mod_in_memory_auth\"\n }\n }\n ]\n}\nEOF\n```\n\nNavigate to http://webapp.oto.tools:8080, login with `user@foo.bar/password` and check that you're redirected to 
`google` page.\n\nWell done! You completed the discovery tutorial."},{"name":"communicate-with-kafka.md","id":"/how-to-s/communicate-with-kafka.md","url":"/how-to-s/communicate-with-kafka.html","title":"Communicate with Kafka","content":"# Communicate with Kafka\n\nEvery matching event can be sent to an [Apache Kafka topic](https://kafka.apache.org/).\n\n### SASL mechanism\n\nCreate a `docker-compose.yml` with the following content\n\n````yml\nversion: \"2\"\n\nservices:\n zookeeper:\n image: docker.io/bitnami/zookeeper:3.8\n ports:\n - \"2181:2181\"\n environment:\n - ALLOW_ANONYMOUS_LOGIN=yes\n kafka:\n image: docker.io/bitnami/kafka:3.2\n ports:\n - \"9092:9092\"\n environment:\n - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181\n - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,CLIENT:SASL_PLAINTEXT\n - ALLOW_PLAINTEXT_LISTENER=yes\n - KAFKA_CFG_LISTENERS=INTERNAL://:9093,CLIENT://:9092\n - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9093,CLIENT://kafka:9092\n - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL\n - KAFKA_CLIENT_USERS=user\n - KAFKA_CLIENT_PASSWORDS=password\n\n depends_on:\n - zookeeper\n````\n\nLaunch the command to create the zookeeper and kafka containers\n\n````bash\ndocker-compose up -d\n````\n\nCreate a new exporter on your Otoroshi instance with the following values\n\n@@@ div { .centered-img }\n\n@@@\n\n### PLAINTEXT mechanism\n\nCreate a `docker-compose.yml` with the following content\n\n````yml\nversion: \"2\"\n\nservices:\n zookeeper:\n image: docker.io/bitnami/zookeeper:3.8\n ports:\n - \"2181:2181\"\n environment:\n - ALLOW_ANONYMOUS_LOGIN=yes\n kafka:\n image: docker.io/bitnami/kafka:3.2\n ports:\n - \"9092:9092\"\n environment:\n - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181\n - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT\n - ALLOW_PLAINTEXT_LISTENER=yes\n - KAFKA_CFG_LISTENERS=INTERNAL://:9093,CLIENT://:9092\n - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9093,CLIENT://kafka:9092\n - 
KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL\n\n depends_on:\n - zookeeper\n````\n\nLaunch the command to create the zookeeper and kafka containers\n\n````bash\ndocker-compose up -d\n````\n\nCreate a new exporter on your Otoroshi instance with the following values\n\n@@@ div { .centered-img }\n\n@@@\n\n### SSL mechanism\n\n````bash\nwget https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl.sh\n````\n\n````bash\nchmod +x kafka-generate-ssl.sh\n````\n\nCreate a `docker-compose.yml` with the following content\n\n````yml\nversion: '3.5'\n\nservices:\n\n zookeeper:\n image: \"wurstmeister/zookeeper:latest\"\n ports:\n - \"2181:2181\"\n\n kafka:\n image: wurstmeister/kafka:2.12-2.2.0\n depends_on:\n - zookeeper\n ports:\n - \"9092:9092\"\n environment:\n KAFKA_ADVERTISED_LISTENERS: 'SSL://kafka:9092'\n KAFKA_LISTENERS: 'SSL://0.0.0.0:9092'\n KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'\n KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'\n KAFKA_SSL_KEYSTORE_LOCATION: '/keystore/kafka.keystore.jks'\n KAFKA_SSL_KEYSTORE_PASSWORD: 'otoroshi'\n KAFKA_SSL_KEY_PASSWORD: 'otoroshi'\n KAFKA_SSL_TRUSTSTORE_LOCATION: '/truststore/kafka.truststore.jks'\n KAFKA_SSL_TRUSTSTORE_PASSWORD: 'otoroshi'\n KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ''\n KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ''\n KAFKA_SECURITY_INTER_BROKER_PROTOCOL: 'SSL'\n volumes:\n - ./truststore:/truststore\n - ./keystore:/keystore\n````\n\nLaunch the command to create the zookeeper and kafka containers\n\n````bash\ndocker-compose up -d\n````\n\nCreate a new exporter on your Otoroshi instance with the following values\n\n@@@ div { .centered-img }\n\n@@@\n\n### SASL_SSL mechanism\n\nGenerate the TLS certificates for the Kafka broker.\n\nCreate a file `generate.sh` with the following content and run the command\n\n````bash\nchmod +x generate.sh && ./generate.sh\n````\n\n````bash\n# Content of the generate.sh file\n\n#!/usr/bin/env bash\n\nset -e\n\nKEYSTORE_FILENAME=\"kafka.keystore.jks\"\nVALIDITY_IN_DAYS=3650\nDEFAULT_TRUSTSTORE_FILENAME=\"kafka.truststore.jks\"\nTRUSTSTORE_WORKING_DIRECTORY=\"truststore\"\nKEYSTORE_WORKING_DIRECTORY=\"keystore\"\nCA_CERT_FILE=\"ca-cert\"\nKEYSTORE_SIGN_REQUEST=\"cert-file\"\nKEYSTORE_SIGN_REQUEST_SRL=\"ca-cert.srl\"\nKEYSTORE_SIGNED_CERT=\"cert-signed\"\n\nfunction file_exists_and_exit() {\n echo \"'$1' cannot exist. 
Move or delete it before\"\n echo \"re-running this script.\"\n exit 1\n}\n\nif [ -e \"$KEYSTORE_WORKING_DIRECTORY\" ]; then\n file_exists_and_exit $KEYSTORE_WORKING_DIRECTORY\nfi\n\nif [ -e \"$CA_CERT_FILE\" ]; then\n file_exists_and_exit $CA_CERT_FILE\nfi\n\nif [ -e \"$KEYSTORE_SIGN_REQUEST\" ]; then\n file_exists_and_exit $KEYSTORE_SIGN_REQUEST\nfi\n\nif [ -e \"$KEYSTORE_SIGN_REQUEST_SRL\" ]; then\n file_exists_and_exit $KEYSTORE_SIGN_REQUEST_SRL\nfi\n\nif [ -e \"$KEYSTORE_SIGNED_CERT\" ]; then\n file_exists_and_exit $KEYSTORE_SIGNED_CERT\nfi\n\necho\necho \"Welcome to the Kafka SSL keystore and truststore generator script.\"\n\necho\necho \"First, do you need to generate a trust store and associated private key,\"\necho \"or do you already have a trust store file and private key?\"\necho\necho -n \"Do you need to generate a trust store and associated private key? [yn] \"\nread generate_trust_store\n\ntrust_store_file=\"\"\ntrust_store_private_key_file=\"\"\n\nif [ \"$generate_trust_store\" == \"y\" ]; then\n if [ -e \"$TRUSTSTORE_WORKING_DIRECTORY\" ]; then\n file_exists_and_exit $TRUSTSTORE_WORKING_DIRECTORY\n fi\n\n mkdir $TRUSTSTORE_WORKING_DIRECTORY\n echo\n echo \"OK, we'll generate a trust store and associated private key.\"\n echo\n echo \"First, the private key.\"\n echo\n echo \"You will be prompted for:\"\n echo \" - A password for the private key. 
Remember this.\"\n echo \" - Information about you and your company.\"\n echo \" - NOTE that the Common Name (CN) is currently not important.\"\n\n openssl req -new -x509 -keyout $TRUSTSTORE_WORKING_DIRECTORY/ca-key \\\n -out $TRUSTSTORE_WORKING_DIRECTORY/$CA_CERT_FILE -days $VALIDITY_IN_DAYS\n\n trust_store_private_key_file=\"$TRUSTSTORE_WORKING_DIRECTORY/ca-key\"\n\n echo\n echo \"Two files were created:\"\n echo \" - $TRUSTSTORE_WORKING_DIRECTORY/ca-key -- the private key used later to\"\n echo \" sign certificates\"\n echo \" - $TRUSTSTORE_WORKING_DIRECTORY/$CA_CERT_FILE -- the certificate that will be\"\n echo \" stored in the trust store in a moment and serve as the certificate\"\n echo \" authority (CA). Once this certificate has been stored in the trust\"\n echo \" store, it will be deleted. It can be retrieved from the trust store via:\"\n echo \" $ keytool -keystore -export -alias CARoot -rfc\"\n\n echo\n echo \"Now the trust store will be generated from the certificate.\"\n echo\n echo \"You will be prompted for:\"\n echo \" - the trust store's password (labeled 'keystore'). Remember this\"\n echo \" - a confirmation that you want to import the certificate\"\n\n keytool -keystore $TRUSTSTORE_WORKING_DIRECTORY/$DEFAULT_TRUSTSTORE_FILENAME \\\n -alias CARoot -import -file $TRUSTSTORE_WORKING_DIRECTORY/$CA_CERT_FILE\n\n trust_store_file=\"$TRUSTSTORE_WORKING_DIRECTORY/$DEFAULT_TRUSTSTORE_FILENAME\"\n\n echo\n echo \"$TRUSTSTORE_WORKING_DIRECTORY/$DEFAULT_TRUSTSTORE_FILENAME was created.\"\n\n # don't need the cert because it's in the trust store.\n rm $TRUSTSTORE_WORKING_DIRECTORY/$CA_CERT_FILE\nelse\n echo\n echo -n \"Enter the path of the trust store file. \"\n read -e trust_store_file\n\n if ! [ -f $trust_store_file ]; then\n echo \"$trust_store_file isn't a file. Exiting.\"\n exit 1\n fi\n\n echo -n \"Enter the path of the trust store's private key. \"\n read -e trust_store_private_key_file\n\n if ! 
[ -f $trust_store_private_key_file ]; then\n echo \"$trust_store_private_key_file isn't a file. Exiting.\"\n exit 1\n fi\nfi\n\necho\necho \"Continuing with:\"\necho \" - trust store file: $trust_store_file\"\necho \" - trust store private key: $trust_store_private_key_file\"\n\nmkdir $KEYSTORE_WORKING_DIRECTORY\n\necho\necho \"Now, a keystore will be generated. Each broker and logical client needs its own\"\necho \"keystore. This script will create only one keystore. Run this script multiple\"\necho \"times for multiple keystores.\"\necho\necho \"You will be prompted for the following:\"\necho \" - A keystore password. Remember it.\"\necho \" - Personal information, such as your name.\"\necho \" NOTE: currently in Kafka, the Common Name (CN) does not need to be the FQDN of\"\necho \" this host. However, at some point, this may change. As such, make the CN\"\necho \" the FQDN. Some operating systems call the CN prompt 'first / last name'\"\necho \" - A key password, for the key being generated within the keystore. Remember this.\"\n\n# To learn more about CNs and FQDNs, read:\n# https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/X509ExtendedTrustManager.html\n\nkeytool -keystore $KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME \\\n -alias localhost -validity $VALIDITY_IN_DAYS -genkey -keyalg RSA\n\necho\necho \"'$KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME' now contains a key pair and a\"\necho \"self-signed certificate. Again, this keystore can only be used for one broker or\"\necho \"one logical client. 
Other brokers or clients need to generate their own keystores.\"\n\necho\necho \"Fetching the certificate from the trust store and storing in $CA_CERT_FILE.\"\necho\necho \"You will be prompted for the trust store's password (labeled 'keystore')\"\n\nkeytool -keystore $trust_store_file -export -alias CARoot -rfc -file $CA_CERT_FILE\n\necho\necho \"Now a certificate signing request will be made to the keystore.\"\necho\necho \"You will be prompted for the keystore's password.\"\nkeytool -keystore $KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME -alias localhost \\\n -certreq -file $KEYSTORE_SIGN_REQUEST\n\necho\necho \"Now the trust store's private key (CA) will sign the keystore's certificate.\"\necho\necho \"You will be prompted for the trust store's private key password.\"\nopenssl x509 -req -CA $CA_CERT_FILE -CAkey $trust_store_private_key_file \\\n -in $KEYSTORE_SIGN_REQUEST -out $KEYSTORE_SIGNED_CERT \\\n -days $VALIDITY_IN_DAYS -CAcreateserial\n# creates $KEYSTORE_SIGN_REQUEST_SRL which is never used or needed.\n\necho\necho \"Now the CA will be imported into the keystore.\"\necho\necho \"You will be prompted for the keystore's password and a confirmation that you want to\"\necho \"import the certificate.\"\nkeytool -keystore $KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME -alias CARoot \\\n -import -file $CA_CERT_FILE\nrm $CA_CERT_FILE # delete the trust store cert because it's stored in the trust store.\n\necho\necho \"Now the keystore's signed certificate will be imported back into the keystore.\"\necho\necho \"You will be prompted for the keystore's password.\"\nkeytool -keystore $KEYSTORE_WORKING_DIRECTORY/$KEYSTORE_FILENAME -alias localhost -import \\\n -file $KEYSTORE_SIGNED_CERT\n\necho\necho \"All done!\"\necho\necho \"Delete intermediate files? 
They are:\"\necho \" - '$KEYSTORE_SIGN_REQUEST_SRL': CA serial number\"\necho \" - '$KEYSTORE_SIGN_REQUEST': the keystore's certificate signing request\"\necho \" (that was fulfilled)\"\necho \" - '$KEYSTORE_SIGNED_CERT': the keystore's certificate, signed by the CA, and stored back\"\necho \" into the keystore\"\necho -n \"Delete? [yn] \"\nread delete_intermediate_files\n\nif [ \"$delete_intermediate_files\" == \"y\" ]; then\n rm $KEYSTORE_SIGN_REQUEST_SRL\n rm $KEYSTORE_SIGN_REQUEST\n rm $KEYSTORE_SIGNED_CERT\nfi\n````\n\nCreate, in the same directory, a directory named `secrets` with the following configuration.\n\n````bash \n# Content of ~/tmp/kafka/secrets/kafka_server_jaas.conf\n\nClient {\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username=\"user\"\n password=\"password\";\n};\n````\n\nCreate a `docker-compose.yml` file with the following content.\n\n````yml\nversion: '3.5'\n\nservices:\n\n zookeeper:\n image: \"bitnami/zookeeper:latest\"\n ports:\n - \"2181:2181\"\n environment:\n - ALLOW_ANONYMOUS_LOGIN=yes\n\n kafka:\n image: bitnami/kafka:latest\n depends_on:\n - zookeeper\n ports:\n - '9092:9092'\n environment:\n ALLOW_PLAINTEXT_LISTENER: 'yes'\n KAFKA_ZOOKEEPER_PROTOCOL: 'PLAINTEXT'\n KAFKA_CFG_ZOOKEEPER_CONNECT: 'zookeeper:2181'\n KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: 'INTERNAL:PLAINTEXT,CLIENT:SASL_SSL'\n KAFKA_CFG_LISTENERS: 'INTERNAL://:9093,CLIENT://:9092'\n KAFKA_INTER_BROKER_LISTENER_NAME: 'INTERNAL'\n KAFKA_CFG_ADVERTISED_LISTENERS: 'INTERNAL://kafka:9093,CLIENT://kafka:9092'\n KAFKA_CLIENT_USERS: 'user'\n KAFKA_CLIENT_PASSWORDS: 'password'\n KAFKA_CERTIFICATE_PASSWORD: 'otoroshi'\n KAFKA_TLS_TYPE: 'JKS'\n KAFKA_OPTS: \"-Djava.security.auth.login.config=/opt/kafka/kafka_server_jaas.conf\"\n volumes:\n - ./secrets/kafka_server_jaas.conf:/opt/kafka/kafka_server_jaas.conf\n - ./truststore/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro\n - 
./keystore/kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro\n````\n\nAt this point, your directory should be \n````\n/tmp/kafka\n | generate.sh\n | docker-compose.yml\n | truststore\n | kafka.truststore.jks\n | keystore \n | kafka.keystore.jks\n | secrets \n | kafka_server_jaas.conf\n````\n\nLaunch the command to create the zookeeper and kafka containers\n\n````bash\ndocker-compose up -d\n````\n\nCreate a new exporter on your Otoroshi instance with the following values\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"create-custom-auth-module.md","id":"/how-to-s/create-custom-auth-module.md","url":"/how-to-s/create-custom-auth-module.html","title":"Create your Authentication module","content":"# Create your Authentication module\n\nAuthentication modules can be used to protect routes. In some cases, you may need to create your own custom authentication module from scratch, or simply inherit from and extend an existing module.\n\nYou can write your own authentication module using your favorite IDE. Just create an SBT project with the following dependencies. It can be quite handy to manage the source code like any other piece of code, and it avoids the compilation time for the script at Otoroshi startup.\n\n```scala\nlazy val root = (project in file(\".\")).\n settings(\n inThisBuild(List(\n organization := \"com.example\",\n scalaVersion := \"2.12.7\",\n version := \"0.1.0-SNAPSHOT\"\n )),\n name := \"my-custom-auth-module\",\n libraryDependencies += \"fr.maif\" %% \"otoroshi\" % \"1x.x.x\"\n )\n```\n\nJust below, you can find an example of Custom Auth. module. 
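The example is fairly long, so as orientation, here is the shape of the contract it fills in. This is an illustrative pseudocode sketch only (parameters abridged, return types elided), not the authoritative `otoroshi.auth.AuthModule` definition from the dependency declared above:

```scala
// Sketch only: the real trait lives in the otoroshi dependency.
// `pa*` methods protect routes (Private Apps); `bo*` methods cover the back office.
trait AuthModuleShape {
  def paLoginPage(/* request, config, descriptor, isRoute */): Unit // render the login page for a protected route
  def paLogout(/* request, user, config, descriptor */): Unit       // handle logout for a protected route
  def paCallback(/* request, config, descriptor */): Unit           // handle the login callback and build the user
  def boLoginPage(/* request, config */): Unit                      // back-office login page
  def boLogout(/* request, user, config */): Unit                   // back-office logout
  def boCallback(/* request, config */): Unit                       // back-office callback
}
```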
\n\n```scala\npackage auth.custom\n\nimport akka.http.scaladsl.util.FastFuture\nimport otoroshi.auth.{AuthModule, AuthModuleConfig, Form, SessionCookieValues}\nimport otoroshi.controllers.routes\nimport otoroshi.env.Env\nimport otoroshi.models._\nimport otoroshi.security.IdGenerator\nimport otoroshi.utils.JsonPathValidator\nimport otoroshi.utils.syntax.implicits.BetterSyntax\nimport play.api.http.MimeTypes\nimport play.api.libs.json._\nimport play.api.mvc._\n\nimport scala.concurrent.{ExecutionContext, Future}\nimport scala.util.{Failure, Success, Try}\n\ncase class CustomModuleConfig(\n id: String,\n name: String,\n desc: String,\n clientSideSessionEnabled: Boolean,\n sessionMaxAge: Int = 86400,\n userValidators: Seq[JsonPathValidator] = Seq.empty,\n tags: Seq[String],\n metadata: Map[String, String],\n sessionCookieValues: SessionCookieValues,\n location: otoroshi.models.EntityLocation = otoroshi.models.EntityLocation(),\n form: Option[Form] = None,\n foo: String = \"bar\"\n ) extends AuthModuleConfig {\n def `type`: String = \"custom\"\n def humanName: String = \"Custom Authentication\"\n\n override def authModule(config: GlobalConfig): AuthModule = CustomAuthModule(this)\n override def withLocation(location: EntityLocation): AuthModuleConfig = copy(location = location)\n\n lazy val format = new Format[CustomModuleConfig] {\n override def writes(o: CustomModuleConfig): JsValue = o.asJson\n\n override def reads(json: JsValue): JsResult[CustomModuleConfig] = Try {\n CustomModuleConfig(\n location = otoroshi.models.EntityLocation.readFromKey(json),\n id = (json \\ \"id\").as[String],\n name = (json \\ \"name\").as[String],\n desc = (json \\ \"desc\").asOpt[String].getOrElse(\"--\"),\n clientSideSessionEnabled = (json \\ \"clientSideSessionEnabled\").asOpt[Boolean].getOrElse(true),\n sessionMaxAge = (json \\ \"sessionMaxAge\").asOpt[Int].getOrElse(86400),\n metadata = (json \\ \"metadata\").asOpt[Map[String, String]].getOrElse(Map.empty),\n tags = (json \\ 
\"tags\").asOpt[Seq[String]].getOrElse(Seq.empty[String]),\n sessionCookieValues =\n (json \\ \"sessionCookieValues\").asOpt(SessionCookieValues.fmt).getOrElse(SessionCookieValues()),\n userValidators = (json \\ \"userValidators\")\n .asOpt[Seq[JsValue]]\n .map(_.flatMap(v => JsonPathValidator.format.reads(v).asOpt))\n .getOrElse(Seq.empty),\n form = (json \\ \"form\").asOpt[JsValue].flatMap(json => Form._fmt.reads(json) match {\n case JsSuccess(value, _) => Some(value)\n case JsError(_) => None\n }),\n foo = (json \\ \"foo\").asOpt[String].getOrElse(\"bar\")\n )\n } match {\n case Failure(exception) => JsError(exception.getMessage)\n case Success(value) => JsSuccess(value)\n }\n }.asInstanceOf[Format[AuthModuleConfig]]\n\n override def _fmt()(implicit env: Env): Format[AuthModuleConfig] = format\n\n override def asJson =\n location.jsonWithKey ++ Json.obj(\n \"type\" -> \"custom\",\n \"id\" -> this.id,\n \"name\" -> this.name,\n \"desc\" -> this.desc,\n \"clientSideSessionEnabled\" -> this.clientSideSessionEnabled,\n \"sessionMaxAge\" -> this.sessionMaxAge,\n \"metadata\" -> this.metadata,\n \"tags\" -> JsArray(tags.map(JsString.apply)),\n \"sessionCookieValues\" -> SessionCookieValues.fmt.writes(this.sessionCookieValues),\n \"userValidators\" -> JsArray(userValidators.map(_.json)),\n \"form\" -> this.form.map(Form._fmt.writes),\n \"foo\" -> foo\n )\n\n def save()(implicit ec: ExecutionContext, env: Env): Future[Boolean] = env.datastores.authConfigsDataStore.set(this)\n\n override def cookieSuffix(desc: ServiceDescriptor) = s\"custom-auth-$id\"\n def theDescription: String = desc\n def theMetadata: Map[String, String] = metadata\n def theName: String = name\n def theTags: Seq[String] = tags\n}\n\nobject CustomAuthModule {\n def defaultConfig = CustomModuleConfig(\n id = IdGenerator.namedId(\"auth_mod\", IdGenerator.uuid),\n name = \"My custom auth. module\",\n desc = \"My custom auth. 
module\",\n tags = Seq.empty,\n metadata = Map.empty,\n sessionCookieValues = SessionCookieValues(),\n clientSideSessionEnabled = true,\n form = None)\n}\n\ncase class CustomAuthModule(authConfig: CustomModuleConfig) extends AuthModule {\n def this() = this(CustomAuthModule.defaultConfig)\n\n override def paLoginPage(request: RequestHeader, config: GlobalConfig, descriptor: ServiceDescriptor, isRoute: Boolean)\n (implicit ec: ExecutionContext, env: Env): Future[Result] = {\n val redirect = request.getQueryString(\"redirect\")\n val hash = env.sign(s\"${authConfig.id}:::${descriptor.id}\")\n env.datastores.authConfigsDataStore.generateLoginToken().flatMap { token =>\n Results\n .Ok(auth.custom.views.html.login(s\"/privateapps/generic/callback?desc=${descriptor.id}&hash=$hash&route=${isRoute}\", token))\n .as(MimeTypes.HTML)\n .addingToSession(\n \"ref\" -> authConfig.id,\n s\"pa-redirect-after-login-${authConfig.cookieSuffix(descriptor)}\" -> redirect.getOrElse(\n routes.PrivateAppsController.home.absoluteURL(env.exposedRootSchemeIsHttps)(request)\n )\n )(request)\n .future\n }\n }\n\n override def paLogout(request: RequestHeader, user: Option[PrivateAppsUser], config: GlobalConfig, descriptor: ServiceDescriptor)\n (implicit ec: ExecutionContext, env: Env): Future[Either[Result, Option[String]]] = FastFuture.successful(Right(None))\n\n override def paCallback(request: Request[AnyContent], config: GlobalConfig, descriptor: ServiceDescriptor)\n (implicit ec: ExecutionContext, env: Env): Future[Either[String, PrivateAppsUser]] = {\n PrivateAppsUser(\n randomId = IdGenerator.token(64),\n name = \"foo\",\n email = s\"foo@oto.tools\",\n profile = Json.obj(\n \"name\" -> \"foo\",\n \"email\" -> s\"foo@oto.tools\"\n ),\n realm = authConfig.cookieSuffix(descriptor),\n otoroshiData = None,\n authConfigId = authConfig.id,\n tags = Seq.empty,\n metadata = Map.empty,\n location = authConfig.location\n )\n .validate(authConfig.userValidators)\n .vfuture\n }\n\n override def 
boLoginPage(request: RequestHeader, config: GlobalConfig)(implicit ec: ExecutionContext, env: Env): Future[Result] = ???\n\n override def boLogout(request: RequestHeader, user: BackOfficeUser, config: GlobalConfig)(implicit ec: ExecutionContext, env: Env): Future[Either[Result, Option[String]]] = ???\n\n override def boCallback(request: Request[AnyContent], config: GlobalConfig)(implicit ec: ExecutionContext, env: Env): Future[Either[String, BackOfficeUser]] = ???\n}\n```\n\nThis custom Auth. module inherits from `AuthModule` (the auth. module has to inherit from the `AuthModule` trait to be found by Otoroshi). It exposes a simple login UI, and creates a user for each callback request without any verification. Methods starting with `bo` will be called when the auth. module is used on the back office; otherwise, the `pa` methods (`pa` for Private App) will be called to protect a route.\n\nThis custom Auth. module uses a [Play template](https://www.playframework.com/documentation/2.8.x/ScalaTemplates) to display the login page. It's not required by Otoroshi but it's an easy way to create a login form.\n\n```html \n@import otoroshi.env.Env\n\n@(action: String, token: String)\n\n
<html>\n <head>\n <title>Login page</title>\n </head>\n <body>\n <!-- illustrative markup reconstructed from this template's residue; field names are an example -->\n <form method=\"POST\" action=\"@action\">\n <input type=\"hidden\" name=\"token\" value=\"@token\" />\n <input type=\"text\" name=\"username\" placeholder=\"Email\" />\n <input type=\"password\" name=\"password\" placeholder=\"Password\" />\n <button type=\"submit\">Login</button>\n </form>\n </body>\n</html>
\n```\n\nYour file hierarchy should be something like:\n\n```\nauth\n| custom\n | customModule.scala\n | views\n | login.scala.html\n```\n\nWhen your code is ready, create a jar file \n\n```\nsbt package\n```\n\nand add the jar file to the Otoroshi classpath\n\n```sh\njava -cp \"/path/to/customModule.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nThen, in the authentication modules list, you can choose your custom module."},{"name":"custom-initial-state.md","id":"/how-to-s/custom-initial-state.md","url":"/how-to-s/custom-initial-state.html","title":"Initial state customization","content":"# Initial state customization\n\nWhen you start otoroshi for the first time, some basic entities will be created and stored in the datastore in order to make your instance work properly. However, it might not be enough for your use case, and you may not want to bother with restoring a complete otoroshi export.\n\nIn order to make state customization easy, otoroshi provides the config. key `otoroshi.initialCustomization`, overridden by the env. variable `OTOROSHI_INITIAL_CUSTOMIZATION`\n\nThe expected structure is the following:\n\n```javascript\n{\n \"config\": { ... },\n \"admins\": [],\n \"simpleAdmins\": [],\n \"serviceGroups\": [],\n \"apiKeys\": [],\n \"serviceDescriptors\": [],\n \"errorTemplates\": [],\n \"jwtVerifiers\": [],\n \"authConfigs\": [],\n \"certificates\": [],\n \"clientValidators\": [],\n \"scripts\": [],\n \"tcpServices\": [],\n \"dataExporters\": [],\n \"tenants\": [],\n \"teams\": []\n}\n```\n\nIn this structure, everything is optional. For every array property, items will be added to the datastore. For the global config. object, you can just add the parts that you need, and they will be merged with the existing config. 
object of the datastore.\n\n## Customize the global config.\n\nFor instance, if you want to customize the behavior of the TLS termination, you can use the following:\n\n```sh\nexport OTOROSHI_INITIAL_CUSTOMIZATION='{\"config\":{\"tlsSettings\":{\"defaultDomain\":\"www.foo.bar\",\"randomIfNotFound\":false}}}'\n```\n\n## Customize entities\n\nIf you want to add apikeys at first boot \n\n```sh\nexport OTOROSHI_INITIAL_CUSTOMIZATION='{\"apikeys\":[{\"_loc\":{\"tenant\":\"default\",\"teams\":[\"default\"]},\"clientId\":\"ksVlQ2KlZm0CnDfP\",\"clientSecret\":\"usZYbE1iwSsbpKY45W8kdbZySj1M5CWvFXe0sPbZ0glw6JalMsgorDvSBdr2ZVBk\",\"clientName\":\"awesome-apikey\",\"description\":\"the awesome apikey\",\"authorizedGroup\":\"default\",\"authorizedEntities\":[\"group_default\"],\"enabled\":true,\"readOnly\":false,\"allowClientIdOnly\":false,\"throttlingQuota\":10000000,\"dailyQuota\":10000000,\"monthlyQuota\":10000000,\"constrainedServicesOnly\":false,\"restrictions\":{\"enabled\":false,\"allowLast\":true,\"allowed\":[],\"forbidden\":[],\"notFound\":[]},\"rotation\":{\"enabled\":false,\"rotationEvery\":744,\"gracePeriod\":168,\"nextSecret\":null},\"validUntil\":null,\"tags\":[],\"metadata\":{}}]}'\n```\n"},{"name":"custom-log-levels.md","id":"/how-to-s/custom-log-levels.md","url":"/how-to-s/custom-log-levels.html","title":"Log levels customization","content":"# Log levels customization\n\nIf you want to customize the log level of your otoroshi instances, it's pretty easy to do it using environment variables or the configuration file.\n\n## Customize log level for one logger with configuration file\n\nLet's say you want to see `DEBUG` messages from the logger `otoroshi-http-handler`.\n\nThen you just have to declare in your otoroshi configuration file\n\n```\notoroshi.loggers {\n ...\n otoroshi-http-handler = \"DEBUG\"\n ...\n}\n```\n\nPossible levels are `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`. 
Default one is `WARN`.\n\n## Customize log level for one logger with environment variable\n\nLet's say you want to see `DEBUG` messages from the logger `otoroshi-http-handler`.\n\nThen you just have to declare an environment variable named `OTOROSHI_LOGGERS_OTOROSHI_HTTP_HANDLER` with value `DEBUG`. The rule is \n\n```scala\n\"OTOROSHI_LOGGERS_\" + loggerName.toUpperCase().replace(\"-\", \"_\")\n```\n\nPossible levels are `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`. Default one is `WARN`.\n\n## List of loggers\n\n* [`otoroshi-error-handler`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-error-handler%22%29)\n* [`otoroshi-http-handler`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-http-handler%22%29)\n* [`otoroshi-http-handler-debug`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-http-handler-debug%22%29)\n* [`otoroshi-websocket-handler`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-websocket-handler%22%29)\n* [`otoroshi-websocket`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-websocket%22%29)\n* [`otoroshi-websocket-handler-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-websocket-handler-actor%22%29)\n* [`otoroshi-snowmonkey`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-snowmonkey%22%29)\n* [`otoroshi-circuit-breaker`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-circuit-breaker%22%29)\n* [`otoroshi-worker`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-worker%22%29)\n* [`otoroshi-auth-controller`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-auth-controller%22%29)\n* 
[`otoroshi-swagger-controller`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-swagger-controller%22%29)\n* [`otoroshi-u2f-controller`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-u2f-controller%22%29)\n* [`otoroshi-backoffice-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-backoffice-api%22%29)\n* [`otoroshi-health-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-health-api%22%29)\n* [`otoroshi-stats-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-stats-api%22%29)\n* [`otoroshi-admin-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-admin-api%22%29)\n* [`otoroshi-auth-modules-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-auth-modules-api%22%29)\n* [`otoroshi-certificates-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-certificates-api%22%29)\n* [`otoroshi-pki`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-pki%22%29)\n* [`otoroshi-scripts-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-scripts-api%22%29)\n* [`otoroshi-analytics-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-api%22%29)\n* [`otoroshi-import-export-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-import-export-api%22%29)\n* [`otoroshi-templates-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-templates-api%22%29)\n* [`otoroshi-teams-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-teams-api%22%29)\n* [`otoroshi-events-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-events-api%22%29)\n* [`otoroshi-canary-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-canary-api%22%29)\n* [`otoroshi-data-exporter-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter-api%22%29)\n* 
[`otoroshi-services-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-services-api%22%29)\n* [`otoroshi-tcp-service-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-tcp-service-api%22%29)\n* [`otoroshi-tenants-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-tenants-api%22%29)\n* [`otoroshi-global-config-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-config-api%22%29)\n* [`otoroshi-apikeys-fs-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-fs-api%22%29)\n* [`otoroshi-apikeys-fg-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-fg-api%22%29)\n* [`otoroshi-apikeys-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-api%22%29)\n* [`otoroshi-statsd-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-statsd-actor%22%29)\n* [`otoroshi-snow-monkey-api`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-snow-monkey-api%22%29)\n* [`otoroshi-jobs-eventstore-checker`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-jobs-eventstore-checker%22%29)\n* [`otoroshi-initials-certs-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-initials-certs-job%22%29)\n* [`otoroshi-alert-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-alert-actor%22%29)\n* [`otoroshi-alert-actor-supervizer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-alert-actor-supervizer%22%29)\n* [`otoroshi-alerts`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-alerts%22%29)\n* [`otoroshi-apikeys-secrets-rotation-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-secrets-rotation-job%22%29)\n* [`otoroshi-loader`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-loader%22%29)\n* [`otoroshi-api-action`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-api-action%22%29)\n* 
[`otoroshi-analytics-writes-elastic`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-writes-elastic%22%29)\n* [`otoroshi-analytics-reads-elastic`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-reads-elastic%22%29)\n* [`otoroshi-events-actor-supervizer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-events-actor-supervizer%22%29)\n* [`otoroshi-data-exporter`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter%22%29)\n* [`otoroshi-data-exporter-update-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter-update-job%22%29)\n* [`otoroshi-kafka-wrapper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-kafka-wrapper%22%29)\n* [`otoroshi-kafka-connector`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-kafka-connector%22%29)\n* [`otoroshi-analytics-webhook`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-webhook%22%29)\n* [`otoroshi-jobs-software-updates`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-jobs-software-updates%22%29)\n* [`otoroshi-analytics-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-actor%22%29)\n* [`otoroshi-analytics-actor-supervizer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-actor-supervizer%22%29)\n* [`otoroshi-analytics-event`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-analytics-event%22%29)\n* [`otoroshi-env`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-env%22%29)\n* [`otoroshi-script-compiler`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-script-compiler%22%29)\n* [`otoroshi-script-manager`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-script-manager%22%29)\n* 
[`otoroshi-script`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-script%22%29)\n* [`otoroshi-tcp-proxy`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-tcp-proxy%22%29)\n* [`otoroshi-custom-timeouts`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-custom-timeouts%22%29)\n* [`otoroshi-client-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-client-config%22%29)\n* [`otoroshi-canary`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-canary%22%29)\n* [`otoroshi-redirection-settings`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-redirection-settings%22%29)\n* [`otoroshi-service-descriptor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-service-descriptor%22%29)\n* [`otoroshi-service-descriptor-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-service-descriptor-datastore%22%29)\n* [`otoroshi-console-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-console-mailer%22%29)\n* [`otoroshi-mailgun-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-mailgun-mailer%22%29)\n* [`otoroshi-mailjet-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-mailjet-mailer%22%29)\n* [`otoroshi-sendgrid-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-sendgrid-mailer%22%29)\n* [`otoroshi-generic-mailer`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-generic-mailer%22%29)\n* [`otoroshi-clevercloud-client`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-clevercloud-client%22%29)\n* [`otoroshi-metrics`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-metrics%22%29)\n* 
[`otoroshi-gzip-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-gzip-config%22%29)\n* [`otoroshi-regex-pool`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-regex-pool%22%29)\n* [`otoroshi-ws-client-chooser`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-ws-client-chooser%22%29)\n* [`otoroshi-akka-ws-client`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-akka-ws-client%22%29)\n* [`otoroshi-http-implicits`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-http-implicits%22%29)\n* [`otoroshi-service-group`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-service-group%22%29)\n* [`otoroshi-data-exporter-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter-config%22%29)\n* [`otoroshi-data-exporter-config-migration-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-data-exporter-config-migration-job%22%29)\n* [`otoroshi-lets-encrypt-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-lets-encrypt-helper%22%29)\n* [`otoroshi-apkikey`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apkikey%22%29)\n* [`otoroshi-error-template`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-error-template%22%29)\n* [`otoroshi-job-manager`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-job-manager%22%29)\n* [`otoroshi-plugins-internal-eventlistener-actor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-internal-eventlistener-actor%22%29)\n* [`otoroshi-global-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-config%22%29)\n* [`otoroshi-jwks`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-jwks%22%29)\n* [`otoroshi-jwt-verifier`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-jwt-verifier%22%29)\n* 
[`otoroshi-global-jwt-verifier`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-jwt-verifier%22%29)\n* [`otoroshi-snowmonkey-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-snowmonkey-config%22%29)\n* [`otoroshi-webauthn-admin-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-webauthn-admin-datastore%22%29)\n* [`otoroshi-service-datatstore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-service-datatstore%22%29)\n* [`otoroshi-cassandra-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cassandra-datastores%22%29)\n* [`otoroshi-redis-like-store`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-redis-like-store%22%29)\n* [`otoroshi-globalconfig-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-globalconfig-datastore%22%29)\n* [`otoroshi-reactive-pg-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-reactive-pg-datastores%22%29)\n* [`otoroshi-reactive-pg-kv`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-reactive-pg-kv%22%29)\n* [`otoroshi-apikey-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikey-datastore%22%29)\n* [`otoroshi-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-datastore%22%29)\n* [`otoroshi-certificate-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-certificate-datastore%22%29)\n* [`otoroshi-simple-admin-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-simple-admin-datastore%22%29)\n* 
[`otoroshi-atomic-in-memory-datastore`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-atomic-in-memory-datastore%22%29)\n* [`otoroshi-lettuce-redis`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-lettuce-redis%22%29)\n* [`otoroshi-lettuce-redis-cluster`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-lettuce-redis-cluster%22%29)\n* [`otoroshi-redis-lettuce-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-redis-lettuce-datastores%22%29)\n* [`otoroshi-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-datastores%22%29)\n* [`otoroshi-file-db-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-file-db-datastores%22%29)\n* [`otoroshi-http-db-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-http-db-datastores%22%29)\n* [`otoroshi-s3-datastores`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-s3-datastores%22%29)\n* [`PluginDocumentationGenerator`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22PluginDocumentationGenerator%22%29)\n* [`otoroshi-health-checker`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-health-checker%22%29)\n* [`otoroshi-healthcheck-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-healthcheck-job%22%29)\n* [`otoroshi-healthcheck-local-cache-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-healthcheck-local-cache-job%22%29)\n* [`otoroshi-plugins-response-cache`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-response-cache%22%29)\n* [`otoroshi-oidc-apikey-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-oidc-apikey-config%22%29)\n* [`otoroshi-plugins-maxmind-geolocation-info`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-maxmind-geolocation-info%22%29)\n* 
[`otoroshi-plugins-ipstack-geolocation-info`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-ipstack-geolocation-info%22%29)\n* [`otoroshi-plugins-maxmind-geolocation-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-maxmind-geolocation-helper%22%29)\n* [`otoroshi-plugins-user-agent-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-user-agent-helper%22%29)\n* [`otoroshi-plugins-user-agent-extractor`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-user-agent-extractor%22%29)\n* [`otoroshi-global-el`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-el%22%29)\n* [`otoroshi-plugins-oauth1-caller-plugin`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-oauth1-caller-plugin%22%29)\n* [`otoroshi-dynamic-sslcontext`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-dynamic-sslcontext%22%29)\n* [`otoroshi-plugins-access-log-clf`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-access-log-clf%22%29)\n* [`otoroshi-plugins-access-log-json`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-access-log-json%22%29)\n* [`otoroshi-plugins-kafka-access-log`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kafka-access-log%22%29)\n* [`otoroshi-plugins-kubernetes-client`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-client%22%29)\n* [`otoroshi-plugins-kubernetes-ingress-controller-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-ingress-controller-job%22%29)\n* [`otoroshi-plugins-kubernetes-ingress-sync`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-ingress-sync%22%29)\n* [`otoroshi-plugins-kubernetes-crds-controller-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-crds-controller-job%22%29)\n* 
[`otoroshi-plugins-kubernetes-crds-sync`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-crds-sync%22%29)\n* [`otoroshi-cluster`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cluster%22%29)\n* [`otoroshi-crd-validator`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-crd-validator%22%29)\n* [`otoroshi-sidecar-injector`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-sidecar-injector%22%29)\n* [`otoroshi-plugins-kubernetes-cert-sync`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-cert-sync%22%29)\n* [`otoroshi-plugins-kubernetes-to-otoroshi-certs-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-kubernetes-to-otoroshi-certs-job%22%29)\n* [`otoroshi-plugins-otoroshi-certs-to-kubernetes-secrets-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-otoroshi-certs-to-kubernetes-secrets-job%22%29)\n* [`otoroshi-apikeys-workflow-job`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-apikeys-workflow-job%22%29)\n* [`otoroshi-cert-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cert-helper%22%29)\n* [`otoroshi-certificates-ocsp`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-certificates-ocsp%22%29)\n* [`otoroshi-claim`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-claim%22%29)\n* [`otoroshi-cert`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cert%22%29)\n* [`otoroshi-ssl-provider`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-ssl-provider%22%29)\n* [`otoroshi-cert-data`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-cert-data%22%29)\n* [`otoroshi-client-cert-validator`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-client-cert-validator%22%29)\n* [`otoroshi-ssl-implicits`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-ssl-implicits%22%29)\n* 
[`otoroshi-saml-validator-utils`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-saml-validator-utils%22%29)\n* [`otoroshi-global-saml-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-saml-config%22%29)\n* [`otoroshi-plugins-hmac-caller-plugin`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-hmac-caller-plugin%22%29)\n* [`otoroshi-plugins-hmac-access-validator-plugin`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-hmac-access-validator-plugin%22%29)\n* [`otoroshi-plugins-hasallowedusersvalidator`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-hasallowedusersvalidator%22%29)\n* [`otoroshi-auth-module-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-auth-module-config%22%29)\n* [`otoroshi-basic-auth-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-basic-auth-config%22%29)\n* [`otoroshi-ldap-auth-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-ldap-auth-config%22%29)\n* [`otoroshi-plugins-jsonpath-helper`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-plugins-jsonpath-helper%22%29)\n* [`otoroshi-global-oauth2-config`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-oauth2-config%22%29)\n* [`otoroshi-global-oauth2-module`](https://github.com/MAIF/otoroshi/search?q=Logger%28%22otoroshi-global-oauth2-module%22%29)\n"},{"name":"end-to-end-mtls.md","id":"/how-to-s/end-to-end-mtls.md","url":"/how-to-s/end-to-end-mtls.html","title":"End-to-end mTLS","content":"# End-to-end mTLS\n\nIf you want to use mTLS on otoroshi, you first need to enable it. It is not enabled by default as it will make the TLS handshake way heavier. 
\nTo enable it just change the following config:\n\n```sh\notoroshi.ssl.fromOutside.clientAuth=None|Want|Need\n```\n\nor using env. variables\n\n```sh\nSSL_OUTSIDE_CLIENT_AUTH=None|Want|Need\n```\n\nYou can use the `Want` setup if you want to have mTLS on some services and no mTLS on other services.\n\nYou can also change the trusted CA list sent in the handshake certificate request from the `Danger Zone` in `Tls Settings`.\n\nOtoroshi supports mutual TLS out of the box. mTLS from client to Otoroshi and from Otoroshi to targets are supported. In this article we will see how to configure Otoroshi to use end-to-end mTLS. All code and files used in this article can be found on the [Otoroshi github](https://github.com/MAIF/otoroshi/tree/master/demos/mtls)\n\n### Create certificates\n\nBut first we need to generate some certificates to make the demo work\n\n```sh\nmkdir mtls-demo\ncd mtls-demo\nmkdir ca\nmkdir server\nmkdir client\n\n# create a certificate authority key, use password as pass phrase\nopenssl genrsa -out ./ca/ca-backend.key 4096\n# remove pass phrase\nopenssl rsa -in ./ca/ca-backend.key -out ./ca/ca-backend.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key ./ca/ca-backend.key -out ./ca/ca-backend.cer -subj \"/CN=MTLSB\"\n\n\n# create a certificate authority key, use password as pass phrase\nopenssl genrsa -out ./ca/ca-frontend.key 2048\n# remove pass phrase\nopenssl rsa -in ./ca/ca-frontend.key -out ./ca/ca-frontend.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key ./ca/ca-frontend.key -out ./ca/ca-frontend.cer -subj \"/CN=MTLSF\"\n\n\n# now create the backend cert key, use password as pass phrase\nopenssl genrsa -out ./server/_.backend.oto.tools.key 2048\n# remove pass phrase\nopenssl rsa -in ./server/_.backend.oto.tools.key -out ./server/_.backend.oto.tools.key\n# generate the csr for the certificate\nopenssl req -new -key ./server/_.backend.oto.tools.key 
-sha256 -out ./server/_.backend.oto.tools.csr -subj \"/CN=*.backend.oto.tools\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./server/_.backend.oto.tools.csr -CA ./ca/ca-backend.cer -CAkey ./ca/ca-backend.key -set_serial 1 -out ./server/_.backend.oto.tools.cer\n# verify the certificate, should output './server/_.backend.oto.tools.cer: OK'\nopenssl verify -CAfile ./ca/ca-backend.cer ./server/_.backend.oto.tools.cer\n\n\n# now create the frontend cert key, use password as pass phrase\nopenssl genrsa -out ./server/_.frontend.oto.tools.key 2048\n# remove pass phrase\nopenssl rsa -in ./server/_.frontend.oto.tools.key -out ./server/_.frontend.oto.tools.key\n# generate the csr for the certificate\nopenssl req -new -key ./server/_.frontend.oto.tools.key -sha256 -out ./server/_.frontend.oto.tools.csr -subj \"/CN=*.frontend.oto.tools\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./server/_.frontend.oto.tools.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 1 -out ./server/_.frontend.oto.tools.cer\n# verify the certificate, should output './server/_.frontend.oto.tools.cer: OK'\nopenssl verify -CAfile ./ca/ca-frontend.cer ./server/_.frontend.oto.tools.cer\n\n\n# now create the client cert key for backend, use password as pass phrase\nopenssl genrsa -out ./client/_.backend.oto.tools.key 2048\n# remove pass phrase\nopenssl rsa -in ./client/_.backend.oto.tools.key -out ./client/_.backend.oto.tools.key\n# generate the csr for the certificate\nopenssl req -new -key ./client/_.backend.oto.tools.key -out ./client/_.backend.oto.tools.csr -subj \"/CN=*.backend.oto.tools\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./client/_.backend.oto.tools.csr -CA ./ca/ca-backend.cer -CAkey ./ca/ca-backend.key -set_serial 2 -out ./client/_.backend.oto.tools.cer\n# generate a pem version of the cert and key, use password as password\nopenssl x509 -in client/_.backend.oto.tools.cer -out 
client/_.backend.oto.tools.pem -outform PEM\n\n\n# now create the client cert key for frontend, use password as pass phrase\nopenssl genrsa -out ./client/_.frontend.oto.tools.key 2048\n# remove pass phrase\nopenssl rsa -in ./client/_.frontend.oto.tools.key -out ./client/_.frontend.oto.tools.key\n# generate the csr for the certificate\nopenssl req -new -key ./client/_.frontend.oto.tools.key -out ./client/_.frontend.oto.tools.csr -subj \"/CN=*.frontend.oto.tools\"\n# generate the certificate\nopenssl x509 -req -days 365 -sha256 -in ./client/_.frontend.oto.tools.csr -CA ./ca/ca-frontend.cer -CAkey ./ca/ca-frontend.key -set_serial 2 -out ./client/_.frontend.oto.tools.cer\n# generate a pkcs12 version of the cert and key, use password as password\n# openssl pkcs12 -export -clcerts -in client/_.frontend.oto.tools.cer -inkey client/_.frontend.oto.tools.key -out client/_.frontend.oto.tools.p12\nopenssl x509 -in client/_.frontend.oto.tools.cer -out client/_.frontend.oto.tools.pem -outform PEM\n```\n\nOnce it's done, you should have something like\n\n```sh\n$ tree\n.\n├── backend.js\n├── ca\n│   ├── ca-backend.cer\n│   ├── ca-backend.key\n│   ├── ca-frontend.cer\n│   └── ca-frontend.key\n├── client\n│   ├── _.backend.oto.tools.cer\n│   ├── _.backend.oto.tools.csr\n│   ├── _.backend.oto.tools.key\n│   ├── _.backend.oto.tools.pem\n│   ├── _.frontend.oto.tools.cer\n│   ├── _.frontend.oto.tools.csr\n│   ├── _.frontend.oto.tools.key\n│   └── _.frontend.oto.tools.pem\n└── server\n ├── _.backend.oto.tools.cer\n ├── _.backend.oto.tools.csr\n ├── _.backend.oto.tools.key\n ├── _.frontend.oto.tools.cer\n ├── _.frontend.oto.tools.csr\n └── _.frontend.oto.tools.key\n\n3 directories, 18 files\n```\n\n### The backend service \n\nnow, let's create a backend service using nodejs. 
Create a file named `backend.js`\n\n```sh\ntouch backend.js\n```\n\nand put the following content\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nconst options = { \n key: fs.readFileSync('./server/_.backend.oto.tools.key'), \n cert: fs.readFileSync('./server/_.backend.oto.tools.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n}; \n\nconst server = https.createServer(options, (req, res) => { \n res.writeHead(200, {\n 'Content-Type': 'application/json'\n }); \n res.end(JSON.stringify({ message: 'Hello World!' }) + \"\\n\"); \n}).listen(8444);\n\nconsole.log('Server listening:', `http://localhost:${server.address().port}`);\n```\n\nto run the server, just do \n\n```sh\nnode ./backend.js\n```\n\nnow you can try your server with\n\n```sh\ncurl --cacert ./ca/ca-backend.cer 'https://api.backend.oto.tools:8444/'\n```\n\nThis should output :\n```json\n{ \"message\": \"Hello World!\" }\n```\n\nnow modify your backend server to ensure that the client provides a client certificate like:\n\n```js\nconst fs = require('fs'); \nconst https = require('https'); \n\nconst options = { \n key: fs.readFileSync('./server/_.backend.oto.tools.key'), \n cert: fs.readFileSync('./server/_.backend.oto.tools.cer'), \n ca: fs.readFileSync('./ca/ca-backend.cer'), \n requestCert: true, \n rejectUnauthorized: true\n}; \n\nconst server = https.createServer(options, (req, res) => { \n console.log('Client certificate CN: ', req.socket.getPeerCertificate().subject.CN);\n res.writeHead(200, {\n 'Content-Type': 'application/json'\n }); \n res.end(JSON.stringify({ message: 'Hello World!' 
}) + \"\\n\"); \n}).listen(8444);\n\nconsole.log('Server listening:', `http://localhost:${server.address().port}`);\n```\n\nyou can test your new server with\n\n```sh\ncurl \\\n --cacert ./ca/ca-backend.cer \\\n --cert ./client/_.backend.oto.tools.pem \\\n --key ./client/_.backend.oto.tools.key 'https://api.backend.oto.tools:8444/'\n```\n\nthe output should be :\n\n```json\n{ \"message\": \"Hello World!\" }\n```\n\n### Otoroshi setup\n\nDownload the latest version of the Otoroshi jar and run it like\n\n```sh\n java \\\n -Dotoroshi.adminPassword=password \\\n -Dotoroshi.ssl.fromOutside.clientAuth=Want \\\n -jar -Dotoroshi.storage=file otoroshi.jar\n\n[info] otoroshi-env - Admin API exposed on http://otoroshi-api.oto.tools:8080\n[info] otoroshi-env - Admin UI exposed on http://otoroshi.oto.tools:8080\n[info] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[info] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[info] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / password\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n[info] p.c.s.AkkaHttpServer - Listening for HTTPS on /0:0:0:0:0:0:0:0:8443\n[info] otoroshi-env - Generating a self signed SSL certificate for https://*.oto.tools ...\n```\n\nand log into otoroshi with the tuple `admin@otoroshi.io / password` displayed in the logs. \n\nOnce logged in, navigate to the routes page and create a new route.\n\n* Set a name then validate the creation\n* On frontend node, add `api.frontend.oto.tools` in the list of domains\n* On backend node, replace the target with `api.backend.oto.tools` as hostname and `8444` as port. \n\nSave the route and try to call it.\n\n```sh\ncurl 'http://api.frontend.oto.tools:8080/'\n```\n\nThis should output :\n```json\n{\"Otoroshi-Error\": \"Something went wrong, you should try later. 
Thanks for your understanding.\"}\n```\n\nyou should get an error due to the fact that Otoroshi doesn't know about the server certificate and the client certificate expected by the server.\n\nWe must declare the client and server certificates for `https://api.backend.oto.tools` to Otoroshi. \n\nGo to the [certificates page](http://otoroshi.oto.tools:8080/bo/dashboard/certificates) and create a new item. Drag and drop the content of the `./client/_.backend.oto.tools.cer` and `./client/_.backend.oto.tools.key` files, respectively in `Certificate full chain` and `Certificate private key`.\n\nIf you prefer to use the API, you can create an Otoroshi certificate automatically from a PEM bundle.\n\n```sh\ncat ./server/_.backend.oto.tools.cer ./ca/ca-backend.cer ./server/_.backend.oto.tools.key | curl \\\n -H 'Content-Type: text/plain' -X POST \\\n --data-binary @- \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n http://otoroshi-api.oto.tools:8080/api/certificates/_bundle \n```\n\nnow we have to expose `https://api.frontend.oto.tools:8443` using otoroshi. \n\nCreate a second item. Copy and paste the content of `./server/_.frontend.oto.tools.cer` and `./server/_.frontend.oto.tools.key` respectively in `Certificate full chain` and `Certificate private key`.\n\nIf you don't want to bother with UI copy/paste, you can use the import bundle api endpoint to create an otoroshi certificate automatically from a PEM bundle.\n\n```sh\ncat ./server/_.frontend.oto.tools.cer ./ca/ca-frontend.cer ./server/_.frontend.oto.tools.key | curl \\\n -H 'Content-Type: text/plain' -X POST \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n --data-binary @- \\\n http://otoroshi-api.oto.tools:8080/api/certificates/_bundle\n```\n\nOnce created, go back to your route. 
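\n\nIf you want to check that both bundles were imported correctly, you can list the certificates known to Otoroshi through the admin API (a quick sanity check, assuming the same default admin API key as in the bundle calls above):\n\n```sh\ncurl -u admin-api-apikey-id:admin-api-apikey-secret \\\n 'http://otoroshi-api.oto.tools:8080/api/certificates'\n```\n\nthe response should include entries for `*.backend.oto.tools` and `*.frontend.oto.tools`.\n\n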
On the target of the backend node, we have to enable the custom Otoroshi TLS.\n\n* Click on the backend node\n* Click on your target\n* Click on the Show advanced settings button\n* Click on Custom TLS setup\n* Enable the section\n* In the list of certificates, select the backend certificate\n* In the list of trusted certificates, select the frontend certificate\n* Save your route\n \nTry the following command\n\n```sh\ncurl --cacert ./ca/ca-frontend.cer 'https://api.frontend.oto.tools:8443/'\n```\nthe output should be\n\n```json\n{\"message\":\"Hello World!\"}\n```\n\nNow we want to enforce the fact that we want a client certificate for `api.frontend.oto.tools`. \n\nSearch in the list of plugins and add the `Client Certificate Only` plugin to your route.\n\nnow if you retry \n\n```sh\ncurl --cacert ./ca/ca-frontend.cer 'https://api.frontend.oto.tools:8443/'\n```\nthe output should be\n\n```json\n{\"Otoroshi-Error\":\"bad request\"}\n```\n\nyou should get an error because no client certificate is passed with the request. But if you pass the `./client/_.frontend.oto.tools.pem` client cert and the key in your curl call\n\n```sh\ncurl 'https://api.frontend.oto.tools:8443' \\\n --cacert ./ca/ca-frontend.cer \\\n --cert ./client/_.frontend.oto.tools.pem \\\n --key ./client/_.frontend.oto.tools.key\n```\nthe output should be\n\n```json\n{\"message\":\"Hello World!\"}\n```\n\n### Client certificate matching plugin\n\nOtoroshi can restrict and check all incoming client certificates on a route.\n\nSearch in the list of plugins for the `Client certificate matching` plugin and add it to the flow.\n\nSave the route and retry your call again.\n\n```sh\ncurl 'https://api.frontend.oto.tools:8443' \\\n --cacert ./ca/ca-frontend.cer \\\n --cert ./client/_.frontend.oto.tools.pem \\\n --key ./client/_.frontend.oto.tools.key\n```\nthe output should be\n\n```json\n{\"Otoroshi-Error\":\"bad request\"}\n```\n\nOur client certificate is not matched by Otoroshi. 
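\n\nTo find the exact subject DN to declare, you can read it directly from the client certificate with openssl (a standard `openssl x509` invocation, nothing otoroshi specific):\n\n```sh\nopenssl x509 -in ./client/_.frontend.oto.tools.cer -noout -subject\n```\n\nit should print a subject containing `CN=*.frontend.oto.tools` (the exact formatting depends on your openssl version).\n\n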
We have to add the subject DN in the configuration of the `Client certificate matching` plugin to authorize it.\n\n```json\n{\n \"HasClientCertMatchingValidator\": {\n \"serialNumbers\": [],\n \"subjectDNs\": [\n \"CN=*.frontend.oto.tools\"\n ],\n \"issuerDNs\": [],\n \"regexSubjectDNs\": [],\n \"regexIssuerDNs\": []\n }\n}\n```\n\nSave the route and retry your call again.\n\n```sh\ncurl 'https://api.frontend.oto.tools:8443' \\\n --cacert ./ca/ca-frontend.cer \\\n --cert ./client/_.frontend.oto.tools.pem \\\n --key ./client/_.frontend.oto.tools.key\n```\nthe output should be\n\n```json\n{\"message\":\"Hello World!\"}\n```\n\n\n"},{"name":"export-alerts-using-mailgun.md","id":"/how-to-s/export-alerts-using-mailgun.md","url":"/how-to-s/export-alerts-using-mailgun.html","title":"Send alerts using mailgun","content":"# Send alerts using mailgun\n\nAll Otoroshi alerts can be sent over different channels.\nOne of them is to send a group of specific alerts via email.\n\nTo enable this behaviour, let's start by creating an event exporter.\n\nIn this tutorial, we will assume that you already have a Mailgun account with an API key and a domain.\n\n## Create a Mailgun exporter\n\nLet's create an exporter. By default, the exporter will export all events generated by Otoroshi.\n\n1. Go ahead, and navigate to http://otoroshi.oto.tools:8080\n2. Click on the cog icon on the top right\n3. Then click on the `Exporters` button\n4. And add a new configuration by clicking on the `Add item` button\n5. Select `mailer` in the `type` selector field\n6. Jump to `Exporter config` and select the `Mailgun` option\n7. 
Set the following values:\n* `EU` : false/true depending on your Mailgun configuration\n* `Mailgun api key` : your-mailgun-api-key\n* `Mailgun domain` : your-mailgun-domain\n* `Email addresses` : list of the recipient addresses\n\nWith this configuration, all Otoroshi events will be sent to your listed addresses (which we don't recommend).\n\nTo filter events on `Alerts` type, we need to add the following configuration inside the `Filtering and projection` section (if you want to learn more about this section, read this @ref:[part](../entities/data-exporters.md#matching-and-projections)).\n\n```json\n{\n \"include\": [\n { \"@type\": \"AlertEvent\" }\n ],\n \"exclude\": []\n}\n``` \n\nSave at the bottom of the page and enable the exporter (at the top of the page or in the list of exporters). You may need to wait a few seconds to receive the first alerts.\n\nThe **projection** field can be useful in case you want to filter the fields contained in each alert sent.\n\nThe `Projection` field is a JSON object where you list the fields to keep for each alert.\n\n```json\n{\n \"@type\": true,\n \"@timestamp\": true,\n \"@id\": true\n}\n```\n\nWith this example, only `@type`, `@timestamp` and `@id` will be sent to the addresses of your recipients."},{"name":"export-events-to-elastic.md","id":"/how-to-s/export-events-to-elastic.md","url":"/how-to-s/export-events-to-elastic.html","title":"Export events to Elasticsearch","content":"# Export events to Elasticsearch\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Deploy an Elasticsearch and Kibana stack on Docker\n\nLet's start by creating an Elasticsearch and Kibana stack on our machine (if it's already done for you, you can skip this section).\n\nTo start an Elasticsearch container for development or testing, run:\n\n```sh\ndocker network create elastic\ndocker pull docker.elastic.co/elasticsearch/elasticsearch:7.15.1\ndocker run --name es01-test --net elastic -p 
9200:9200 -p 9300:9300 -e \"discovery.type=single-node\" docker.elastic.co/elasticsearch/elasticsearch:7.15.1\n```\n\n```sh\ndocker pull docker.elastic.co/kibana/kibana:7.15.1\ndocker run --name kib01-test --net elastic -p 5601:5601 -e \"ELASTICSEARCH_HOSTS=http://es01-test:9200\" docker.elastic.co/kibana/kibana:7.15.1\n```\n\nTo access Kibana, go to @link:[http://localhost:5601](http://localhost:5601) { open=new }.\n\n### Create an Elasticsearch exporter\n\nLet's create an exporter. By default, the exporter will export all events generated by Otoroshi.\n\n1. Go ahead, and navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new }\n2. Click on the cog icon on the top right\n3. Then click on the `Exporters` button\n4. And add a new configuration by clicking on the `Add item` button\n5. Select `elastic` in the `type` selector field\n6. Jump to `Exporter config`\n7. Set the following value: `Cluster URI` -> `http://localhost:9200`\n\nThen test your configuration by clicking on the `Check connection` button. This should output a modal with the Elasticsearch version and the number of loaded docs.\n\nSave at the bottom of the page and enable the exporter (at the top of the page or in the list of exporters).\n\n### Testing your configuration\n\nOne simple way to test it is to set up Otoroshi to read from our Elasticsearch instance.\n\nNavigate to the danger zone (click on the cog on the top right and scroll to `danger zone`). Jump to the `Analytics: Elastic dashboard datasource (read)` section.\n\nSet the following value: `Cluster URI` -> `http://localhost:9200`\n\nThen click on the `Check connection` button. This should output the same result as the previous part. Save the global configuration and navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/stats](http://otoroshi.oto.tools:8080/bo/dashboard/stats) { open=new }.\n\nThis should output a list of graphs.\n\n### Advanced usage\n\nBy default, an exporter handles all events from Otoroshi. 
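To picture what filtering and projection do to the event stream, here is a minimal Python sketch (the exact matching semantics are an assumption based on the examples in this guide, not Otoroshi's implementation):

```python
def matches(event, pattern):
    # a pattern matches when every listed field has the expected value
    return all(event.get(k) == v for k, v in pattern.items())

def keep(event, include, exclude):
    # excluded events are dropped; if an include list is given,
    # only events matching at least one include pattern are kept
    if any(matches(event, p) for p in exclude):
        return False
    return not include or any(matches(event, p) for p in include)

def project(event, projection):
    # keep only the fields marked true in the projection
    return {k: v for k, v in event.items() if projection.get(k) is True}

events = [
    {'@type': 'AlertEvent', '@id': '1', 'extra': 'x'},
    {'@type': 'GatewayEvent', '@id': '2'},
]
alerts = [e for e in events if keep(e, include=[{'@type': 'AlertEvent'}], exclude=[])]
print([project(e, {'@type': True, '@id': True}) for e in alerts])
# -> [{'@type': 'AlertEvent', '@id': '1'}]
```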
In some cases, you need to filter the events sent to Elasticsearch.\n\nTo filter the events, jump to the `Filtering and projection` field in the exporter view. Otoroshi supports including only certain kinds of events or excluding a list of events (if you want to learn more about this section, read this @ref:[part](../entities/data-exporters.md#matching-and-projections)). \n\nAn example which keeps only events with a field `@type` of value `AlertEvent`:\n```json\n{\n \"include\": [\n { \"@type\": \"AlertEvent\" }\n ],\n \"exclude\": []\n}\n```\nAn example which excludes only events with a field `@type` of value `GatewayEvent`:\n```json\n{\n \"exclude\": [\n { \"@type\": \"GatewayEvent\" }\n ],\n \"include\": []\n}\n```\n\nThe next field is the **Projection**. This field is a JSON object where you list the fields to keep for each event.\n\n```json\n{\n \"@type\": true,\n \"@timestamp\": true,\n \"@id\": true\n}\n```\n\nWith this example, only `@type`, `@timestamp` and `@id` will be sent to ES.\n\n### Debug your configuration\n\n#### Missing user rights on Elasticsearch\n\nWhen creating an exporter, Otoroshi tries to reach the index route of the Elasticsearch instance. If you have specific access rights management on Elasticsearch, you have two possibilities:\n\n- grant full access to the user Otoroshi uses to write to Elasticsearch\n- set the version of Elasticsearch inside the `Version` field of your exporter.\n\n#### No events appear in your Elasticsearch\n\nWhen creating an exporter, Otoroshi tries to push the index template to Elasticsearch. If this post fails, Otoroshi will fail on each push of events and your database will remain empty. 
\n\nTo fix this problem, you can try to send the index template with the `Manually apply index template` button in your exporter."},{"name":"import-export-otoroshi-datastore.md","id":"/how-to-s/import-export-otoroshi-datastore.md","url":"/how-to-s/import-export-otoroshi-datastore.html","title":"Import and export Otoroshi datastore","content":"# Import and export Otoroshi datastore\n\n### Start Otoroshi with an initial datastore\n\nLet's start by downloading the latest Otoroshi\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nBy default, Otoroshi starts with the domain `oto.tools`, which targets `127.0.0.1`. Now you are almost ready to run Otoroshi for the first time; we want to run it with some initial data.\n\nTo do that, you need to add the **otoroshi.importFrom** setting to the Otoroshi configuration (or the `$APP_IMPORT_FROM` env variable). It can be a file path or a URL. The content of the initial datastore can look something like the following.\n\n```json\n{\n \"label\": \"Otoroshi initial datastore\",\n \"admins\": [],\n \"simpleAdmins\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"username\": \"admin@otoroshi.io\",\n \"password\": \"$2a$10$iQRkqjKTW.5XH8ugQrnMDeUstx4KqmIeQ58dHHdW2Dv1FkyyAs4C.\",\n \"label\": \"Otoroshi Admin\",\n \"createdAt\": 1634651307724,\n \"type\": \"SIMPLE\",\n \"metadata\": {},\n \"tags\": [],\n \"rights\": [\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\"\n ]\n }\n ]\n }\n ],\n \"serviceGroups\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"admin-api-group\",\n \"name\": \"Otoroshi Admin Api group\",\n \"description\": \"No description\",\n \"tags\": [],\n \"metadata\": {}\n },\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"default\",\n \"name\": \"default-group\",\n \"description\": \"The default service group\",\n \"tags\": [],\n 
\"metadata\": {}\n }\n ],\n \"apiKeys\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"clientId\": \"admin-api-apikey-id\",\n \"clientSecret\": \"admin-api-apikey-secret\",\n \"clientName\": \"Otoroshi Backoffice ApiKey\",\n \"description\": \"The apikey use by the Otoroshi UI\",\n \"authorizedGroup\": \"admin-api-group\",\n \"authorizedEntities\": [\n \"group_admin-api-group\"\n ],\n \"enabled\": true,\n \"readOnly\": false,\n \"allowClientIdOnly\": false,\n \"throttlingQuota\": 10000,\n \"dailyQuota\": 10000000,\n \"monthlyQuota\": 10000000,\n \"constrainedServicesOnly\": false,\n \"restrictions\": {\n \"enabled\": false,\n \"allowLast\": true,\n \"allowed\": [],\n \"forbidden\": [],\n \"notFound\": []\n },\n \"rotation\": {\n \"enabled\": false,\n \"rotationEvery\": 744,\n \"gracePeriod\": 168,\n \"nextSecret\": null\n },\n \"validUntil\": null,\n \"tags\": [],\n \"metadata\": {}\n }\n ],\n \"serviceDescriptors\": [\n {\n \"_loc\": {\n \"tenant\": \"default\",\n \"teams\": [\n \"default\"\n ]\n },\n \"id\": \"admin-api-service\",\n \"groupId\": \"admin-api-group\",\n \"groups\": [\n \"admin-api-group\"\n ],\n \"name\": \"otoroshi-admin-api\",\n \"description\": \"\",\n \"env\": \"prod\",\n \"domain\": \"oto.tools\",\n \"subdomain\": \"otoroshi-api\",\n \"targetsLoadBalancing\": {\n \"type\": \"RoundRobin\"\n },\n \"targets\": [\n {\n \"host\": \"127.0.0.1:8080\",\n \"scheme\": \"http\",\n \"weight\": 1,\n \"mtlsConfig\": {\n \"certs\": [],\n \"trustedCerts\": [],\n \"mtls\": false,\n \"loose\": false,\n \"trustAll\": false\n },\n \"tags\": [],\n \"metadata\": {},\n \"protocol\": \"HTTP/1.1\",\n \"predicate\": {\n \"type\": \"AlwaysMatch\"\n },\n \"ipAddress\": null\n }\n ],\n \"root\": \"/\",\n \"matchingRoot\": null,\n \"stripPath\": true,\n \"localHost\": \"127.0.0.1:8080\",\n \"localScheme\": \"http\",\n \"redirectToLocal\": false,\n \"enabled\": true,\n \"userFacing\": false,\n \"privateApp\": false,\n 
\"forceHttps\": false,\n \"logAnalyticsOnServer\": false,\n \"useAkkaHttpClient\": true,\n \"useNewWSClient\": false,\n \"tcpUdpTunneling\": false,\n \"detectApiKeySooner\": false,\n \"maintenanceMode\": false,\n \"buildMode\": false,\n \"strictlyPrivate\": false,\n \"enforceSecureCommunication\": true,\n \"sendInfoToken\": true,\n \"sendStateChallenge\": true,\n \"sendOtoroshiHeadersBack\": true,\n \"readOnly\": false,\n \"xForwardedHeaders\": false,\n \"overrideHost\": true,\n \"allowHttp10\": true,\n \"letsEncrypt\": false,\n \"secComHeaders\": {\n \"claimRequestName\": null,\n \"stateRequestName\": null,\n \"stateResponseName\": null\n },\n \"secComTtl\": 30000,\n \"secComVersion\": 1,\n \"secComInfoTokenVersion\": \"Legacy\",\n \"secComExcludedPatterns\": [],\n \"securityExcludedPatterns\": [],\n \"publicPatterns\": [\n \"/health\",\n \"/metrics\"\n ],\n \"privatePatterns\": [],\n \"additionalHeaders\": {\n \"Host\": \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"additionalHeadersOut\": {},\n \"missingOnlyHeadersIn\": {},\n \"missingOnlyHeadersOut\": {},\n \"removeHeadersIn\": [],\n \"removeHeadersOut\": [],\n \"headersVerification\": {},\n \"matchingHeaders\": {},\n \"ipFiltering\": {\n \"whitelist\": [],\n \"blacklist\": []\n },\n \"api\": {\n \"exposeApi\": false\n },\n \"healthCheck\": {\n \"enabled\": false,\n \"url\": \"/\"\n },\n \"clientConfig\": {\n \"useCircuitBreaker\": true,\n \"retries\": 1,\n \"maxErrors\": 20,\n \"retryInitialDelay\": 50,\n \"backoffFactor\": 2,\n \"callTimeout\": 30000,\n \"callAndStreamTimeout\": 120000,\n \"connectionTimeout\": 10000,\n \"idleTimeout\": 60000,\n \"globalTimeout\": 30000,\n \"sampleInterval\": 2000,\n \"proxy\": {},\n \"customTimeouts\": [],\n \"cacheConnectionSettings\": {\n \"enabled\": false,\n \"queueSize\": 2048\n }\n },\n \"canary\": {\n \"enabled\": false,\n \"traffic\": 0.2,\n \"targets\": [],\n \"root\": \"/\"\n },\n \"gzip\": {\n \"enabled\": false,\n \"excludedPatterns\": [],\n \"whiteList\": 
[\n \"text/*\",\n \"application/javascript\",\n \"application/json\"\n ],\n \"blackList\": [],\n \"bufferSize\": 8192,\n \"chunkedThreshold\": 102400,\n \"compressionLevel\": 5\n },\n \"metadata\": {},\n \"tags\": [],\n \"chaosConfig\": {\n \"enabled\": false,\n \"largeRequestFaultConfig\": null,\n \"largeResponseFaultConfig\": null,\n \"latencyInjectionFaultConfig\": null,\n \"badResponsesFaultConfig\": null\n },\n \"jwtVerifier\": {\n \"type\": \"ref\",\n \"ids\": [],\n \"id\": null,\n \"enabled\": false,\n \"excludedPatterns\": []\n },\n \"secComSettings\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComUseSameAlgo\": true,\n \"secComAlgoChallengeOtoToBack\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComAlgoChallengeBackToOto\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"secComAlgoInfoToken\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"cors\": {\n \"enabled\": false,\n \"allowOrigin\": \"*\",\n \"exposeHeaders\": [],\n \"allowHeaders\": [],\n \"allowMethods\": [],\n \"excludedPatterns\": [],\n \"maxAge\": null,\n \"allowCredentials\": true\n },\n \"redirection\": {\n \"enabled\": false,\n \"code\": 303,\n \"to\": \"https://www.otoroshi.io\"\n },\n \"authConfigRef\": null,\n \"clientValidatorRef\": null,\n \"transformerRef\": null,\n \"transformerRefs\": [],\n \"transformerConfig\": {},\n \"apiKeyConstraints\": {\n \"basicAuth\": {\n \"enabled\": true,\n \"headerName\": null,\n \"queryName\": null\n },\n \"customHeadersAuth\": {\n \"enabled\": true,\n \"clientIdHeaderName\": null,\n \"clientSecretHeaderName\": null\n },\n \"clientIdAuth\": {\n \"enabled\": true,\n \"headerName\": null,\n \"queryName\": null\n },\n \"jwtAuth\": {\n \"enabled\": true,\n \"secretSigned\": true,\n \"keyPairSigned\": true,\n 
\"includeRequestAttributes\": false,\n \"maxJwtLifespanSecs\": null,\n \"headerName\": null,\n \"queryName\": null,\n \"cookieName\": null\n },\n \"routing\": {\n \"noneTagIn\": [],\n \"oneTagIn\": [],\n \"allTagsIn\": [],\n \"noneMetaIn\": {},\n \"oneMetaIn\": {},\n \"allMetaIn\": {},\n \"noneMetaKeysIn\": [],\n \"oneMetaKeyIn\": [],\n \"allMetaKeysIn\": []\n }\n },\n \"restrictions\": {\n \"enabled\": false,\n \"allowLast\": true,\n \"allowed\": [],\n \"forbidden\": [],\n \"notFound\": []\n },\n \"accessValidator\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excludedPatterns\": []\n },\n \"preRouting\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excludedPatterns\": []\n },\n \"plugins\": {\n \"enabled\": false,\n \"refs\": [],\n \"config\": {},\n \"excluded\": []\n },\n \"hosts\": [\n \"otoroshi-api.oto.tools\"\n ],\n \"paths\": [],\n \"handleLegacyDomain\": true,\n \"issueCert\": false,\n \"issueCertCA\": null\n }\n ],\n \"errorTemplates\": [],\n \"jwtVerifiers\": [],\n \"authConfigs\": [],\n \"certificates\": [],\n \"clientValidators\": [],\n \"scripts\": [],\n \"tcpServices\": [],\n \"dataExporters\": [],\n \"tenants\": [\n {\n \"id\": \"default\",\n \"name\": \"Default organization\",\n \"description\": \"The default organization\",\n \"metadata\": {},\n \"tags\": []\n }\n ],\n \"teams\": [\n {\n \"id\": \"default\",\n \"tenant\": \"default\",\n \"name\": \"Default Team\",\n \"description\": \"The default Team of the default organization\",\n \"metadata\": {},\n \"tags\": []\n }\n ]\n}\n```\n\nRun an Otoroshi with the previous file as parameter.\n\n```sh\njava \\\n -Dotoroshi.adminPassword=password \\\n -Dotoroshi.importFrom=./initial-state.json \\\n -jar otoroshi.jar \n```\n\nThis should show\n\n```sh\n...\n[info] otoroshi-env - Importing from: ./initial-state.json\n[info] otoroshi-env - Successful import !\n...\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n...\n```\n\n> Warning : when you 
are using Otoroshi with a datastore different from file or in-memory, Otoroshi will not reload the initialization script. If you want it to be reloaded, you have to manually clean your store first.\n\n### Export the current datastore via the danger zone\n\nWhen Otoroshi is running, you can back up the global configuration store from the UI. Navigate to your instance (in our case @link:[http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone) { open=new }) and scroll to the bottom of the page. \n\nClick on the `Full export` button to download the full global configuration.\n\n### Import a datastore from file via the danger zone\n\nWhen Otoroshi is running, you can recover a global configuration from the UI. Navigate to your instance (in our case @link:[http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone) { open=new }) and scroll to the bottom of the page. \n\nClick on the `Recover from a full export file` button to apply all configurations from a file.\n\n### Export the current datastore with the Admin API\n\nOtoroshi exposes its own Admin API to manage Otoroshi resources. To call this api, you need an api key with rights on the `Otoroshi Admin Api group`. This group includes the `otoroshi-admin-api` service that you can find on the services page. \n\nBy default, and with our initial configuration, Otoroshi has already created an api key named `Otoroshi Backoffice ApiKey`. 
You can verify the rights of an api key on its page by checking the `Authorized On` field (you should find the `Otoroshi Admin Api group` inside).\n\nThe default api key id and secret are `admin-api-apikey-id` and `admin-api-apikey-secret`.\n\nRun the next command with these values.\n\n```sh\ncurl \\\n -H 'Content-Type: application/json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json'\n```\n\nWhen calling `/api/otoroshi.json`, the response should be the current datastore, including the service descriptors, the api keys, all other resources like certificates and authentication modules, and the global config (representing the form of the danger zone).\n\n### Import the current datastore with the Admin API\n\nIn the same way as the previous section, you can replace the current datastore with a POST request. The route is the same: `/api/otoroshi.json`.\n\n```sh\ncurl \\\n -X POST \\\n -H 'Content-Type: application/json' \\\n -d '{\n \"label\" : \"Otoroshi export\",\n \"dateRaw\" : 1634714811217,\n \"date\" : \"2021-10-20 09:26:51\",\n \"stats\" : {\n \"calls\" : 4,\n \"dataIn\" : 0,\n \"dataOut\" : 97991\n },\n \"config\" : {\n \"tags\" : [ ],\n \"letsEncryptSettings\" : {\n \"enabled\" : false,\n \"server\" : \"acme://letsencrypt.org/staging\",\n \"emails\" : [ ],\n \"contacts\" : [ ],\n \"publicKey\" : \"\",\n \"privateKey\" : \"\"\n },\n \"lines\" : [ \"prod\" ],\n \"maintenanceMode\" : false,\n \"enableEmbeddedMetrics\" : true,\n \"streamEntityOnly\" : true,\n \"autoLinkToDefaultGroup\" : true,\n \"limitConcurrentRequests\" : false,\n \"maxConcurrentRequests\" : 1000,\n \"maxHttp10ResponseSize\" : 4194304,\n \"useCircuitBreakers\" : true,\n \"apiReadOnly\" : false,\n \"u2fLoginOnly\" : false,\n \"trustXForwarded\" : true,\n \"ipFiltering\" : {\n \"whitelist\" : [ ],\n \"blacklist\" : [ ]\n },\n \"throttlingQuota\" : 10000000,\n \"perIpThrottlingQuota\" : 10000000,\n \"analyticsWebhooks\" : [ ],\n 
\"alertsWebhooks\" : [ ],\n \"elasticWritesConfigs\" : [ ],\n \"elasticReadsConfig\" : null,\n \"alertsEmails\" : [ ],\n \"logAnalyticsOnServer\" : false,\n \"useAkkaHttpClient\" : false,\n \"endlessIpAddresses\" : [ ],\n \"statsdConfig\" : null,\n \"kafkaConfig\" : {\n \"servers\" : [ ],\n \"keyPass\" : null,\n \"keystore\" : null,\n \"truststore\" : null,\n \"topic\" : \"otoroshi-events\",\n \"mtlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n },\n \"backOfficeAuthRef\" : null,\n \"mailerSettings\" : {\n \"type\" : \"none\"\n },\n \"cleverSettings\" : null,\n \"maxWebhookSize\" : 100,\n \"middleFingers\" : false,\n \"maxLogsSize\" : 10000,\n \"otoroshiId\" : \"83539cbca-76ee-4abc-ad31-a4794e873848\",\n \"snowMonkeyConfig\" : {\n \"enabled\" : false,\n \"outageStrategy\" : \"OneServicePerGroup\",\n \"includeUserFacingDescriptors\" : false,\n \"dryRun\" : false,\n \"timesPerDay\" : 1,\n \"startTime\" : \"09:00:00.000\",\n \"stopTime\" : \"23:59:59.000\",\n \"outageDurationFrom\" : 600000,\n \"outageDurationTo\" : 3600000,\n \"targetGroups\" : [ ],\n \"chaosConfig\" : {\n \"enabled\" : true,\n \"largeRequestFaultConfig\" : null,\n \"largeResponseFaultConfig\" : null,\n \"latencyInjectionFaultConfig\" : {\n \"ratio\" : 0.2,\n \"from\" : 500,\n \"to\" : 5000\n },\n \"badResponsesFaultConfig\" : {\n \"ratio\" : 0.2,\n \"responses\" : [ {\n \"status\" : 502,\n \"body\" : \"{\\\"error\\\":\\\"Nihonzaru everywhere ...\\\"}\",\n \"headers\" : {\n \"Content-Type\" : \"application/json\"\n }\n } ]\n }\n }\n },\n \"scripts\" : {\n \"enabled\" : false,\n \"transformersRefs\" : [ ],\n \"transformersConfig\" : { },\n \"validatorRefs\" : [ ],\n \"validatorConfig\" : { },\n \"preRouteRefs\" : [ ],\n \"preRouteConfig\" : { },\n \"sinkRefs\" : [ ],\n \"sinkConfig\" : { },\n \"jobRefs\" : [ ],\n \"jobConfig\" : { }\n },\n \"geolocationSettings\" : {\n \"type\" : \"none\"\n },\n \"userAgentSettings\" : 
{\n \"enabled\" : false\n },\n \"autoCert\" : {\n \"enabled\" : false,\n \"replyNicely\" : false,\n \"caRef\" : null,\n \"allowed\" : [ ],\n \"notAllowed\" : [ ]\n },\n \"tlsSettings\" : {\n \"defaultDomain\" : null,\n \"randomIfNotFound\" : false,\n \"includeJdkCaServer\" : true,\n \"includeJdkCaClient\" : true,\n \"trustedCAsServer\" : [ ]\n },\n \"plugins\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excluded\" : [ ]\n },\n \"metadata\" : { }\n },\n \"admins\" : [ ],\n \"simpleAdmins\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"username\" : \"admin@otoroshi.io\",\n \"password\" : \"$2a$10$iQRkqjKTW.5XH8ugQrnMDeUstx4KqmIeQ58dHHdW2Dv1FkyyAs4C.\",\n \"label\" : \"Otoroshi Admin\",\n \"createdAt\" : 1634651307724,\n \"type\" : \"SIMPLE\",\n \"metadata\" : { },\n \"tags\" : [ ],\n \"rights\" : [ {\n \"tenant\" : \"*:rw\",\n \"teams\" : [ \"*:rw\" ]\n } ]\n } ],\n \"serviceGroups\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"admin-api-group\",\n \"name\" : \"Otoroshi Admin Api group\",\n \"description\" : \"No description\",\n \"tags\" : [ ],\n \"metadata\" : { }\n }, {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"default\",\n \"name\" : \"default-group\",\n \"description\" : \"The default service group\",\n \"tags\" : [ ],\n \"metadata\" : { }\n } ],\n \"apiKeys\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"clientId\" : \"admin-api-apikey-id\",\n \"clientSecret\" : \"admin-api-apikey-secret\",\n \"clientName\" : \"Otoroshi Backoffice ApiKey\",\n \"description\" : \"The apikey use by the Otoroshi UI\",\n \"authorizedGroup\" : \"admin-api-group\",\n \"authorizedEntities\" : [ \"group_admin-api-group\" ],\n \"enabled\" : true,\n \"readOnly\" : false,\n \"allowClientIdOnly\" : false,\n \"throttlingQuota\" : 10000,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" 
: 10000000,\n \"constrainedServicesOnly\" : false,\n \"restrictions\" : {\n \"enabled\" : false,\n \"allowLast\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"notFound\" : [ ]\n },\n \"rotation\" : {\n \"enabled\" : false,\n \"rotationEvery\" : 744,\n \"gracePeriod\" : 168,\n \"nextSecret\" : null\n },\n \"validUntil\" : null,\n \"tags\" : [ ],\n \"metadata\" : { }\n } ],\n \"serviceDescriptors\" : [ {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"admin-api-service\",\n \"groupId\" : \"admin-api-group\",\n \"groups\" : [ \"admin-api-group\" ],\n \"name\" : \"otoroshi-admin-api\",\n \"description\" : \"\",\n \"env\" : \"prod\",\n \"domain\" : \"oto.tools\",\n \"subdomain\" : \"otoroshi-api\",\n \"targetsLoadBalancing\" : {\n \"type\" : \"RoundRobin\"\n },\n \"targets\" : [ {\n \"host\" : \"127.0.0.1:8080\",\n \"scheme\" : \"http\",\n \"weight\" : 1,\n \"mtlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n },\n \"tags\" : [ ],\n \"metadata\" : { },\n \"protocol\" : \"HTTP/1.1\",\n \"predicate\" : {\n \"type\" : \"AlwaysMatch\"\n },\n \"ipAddress\" : null\n } ],\n \"root\" : \"/\",\n \"matchingRoot\" : null,\n \"stripPath\" : true,\n \"localHost\" : \"127.0.0.1:8080\",\n \"localScheme\" : \"http\",\n \"redirectToLocal\" : false,\n \"enabled\" : true,\n \"userFacing\" : false,\n \"privateApp\" : false,\n \"forceHttps\" : false,\n \"logAnalyticsOnServer\" : false,\n \"useAkkaHttpClient\" : true,\n \"useNewWSClient\" : false,\n \"tcpUdpTunneling\" : false,\n \"detectApiKeySooner\" : false,\n \"maintenanceMode\" : false,\n \"buildMode\" : false,\n \"strictlyPrivate\" : false,\n \"enforceSecureCommunication\" : true,\n \"sendInfoToken\" : true,\n \"sendStateChallenge\" : true,\n \"sendOtoroshiHeadersBack\" : true,\n \"readOnly\" : false,\n \"xForwardedHeaders\" : false,\n \"overrideHost\" : true,\n \"allowHttp10\" : true,\n \"letsEncrypt\" : 
false,\n \"secComHeaders\" : {\n \"claimRequestName\" : null,\n \"stateRequestName\" : null,\n \"stateResponseName\" : null\n },\n \"secComTtl\" : 30000,\n \"secComVersion\" : 1,\n \"secComInfoTokenVersion\" : \"Legacy\",\n \"secComExcludedPatterns\" : [ ],\n \"securityExcludedPatterns\" : [ ],\n \"publicPatterns\" : [ \"/health\", \"/metrics\" ],\n \"privatePatterns\" : [ ],\n \"additionalHeaders\" : {\n \"Host\" : \"otoroshi-admin-internal-api.oto.tools\"\n },\n \"additionalHeadersOut\" : { },\n \"missingOnlyHeadersIn\" : { },\n \"missingOnlyHeadersOut\" : { },\n \"removeHeadersIn\" : [ ],\n \"removeHeadersOut\" : [ ],\n \"headersVerification\" : { },\n \"matchingHeaders\" : { },\n \"ipFiltering\" : {\n \"whitelist\" : [ ],\n \"blacklist\" : [ ]\n },\n \"api\" : {\n \"exposeApi\" : false\n },\n \"healthCheck\" : {\n \"enabled\" : false,\n \"url\" : \"/\"\n },\n \"clientConfig\" : {\n \"useCircuitBreaker\" : true,\n \"retries\" : 1,\n \"maxErrors\" : 20,\n \"retryInitialDelay\" : 50,\n \"backoffFactor\" : 2,\n \"callTimeout\" : 30000,\n \"callAndStreamTimeout\" : 120000,\n \"connectionTimeout\" : 10000,\n \"idleTimeout\" : 60000,\n \"globalTimeout\" : 30000,\n \"sampleInterval\" : 2000,\n \"proxy\" : { },\n \"customTimeouts\" : [ ],\n \"cacheConnectionSettings\" : {\n \"enabled\" : false,\n \"queueSize\" : 2048\n }\n },\n \"canary\" : {\n \"enabled\" : false,\n \"traffic\" : 0.2,\n \"targets\" : [ ],\n \"root\" : \"/\"\n },\n \"gzip\" : {\n \"enabled\" : false,\n \"excludedPatterns\" : [ ],\n \"whiteList\" : [ \"text/*\", \"application/javascript\", \"application/json\" ],\n \"blackList\" : [ ],\n \"bufferSize\" : 8192,\n \"chunkedThreshold\" : 102400,\n \"compressionLevel\" : 5\n },\n \"metadata\" : { },\n \"tags\" : [ ],\n \"chaosConfig\" : {\n \"enabled\" : false,\n \"largeRequestFaultConfig\" : null,\n \"largeResponseFaultConfig\" : null,\n \"latencyInjectionFaultConfig\" : null,\n \"badResponsesFaultConfig\" : null\n },\n \"jwtVerifier\" : {\n \"type\" : 
\"ref\",\n \"ids\" : [ ],\n \"id\" : null,\n \"enabled\" : false,\n \"excludedPatterns\" : [ ]\n },\n \"secComSettings\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComUseSameAlgo\" : true,\n \"secComAlgoChallengeOtoToBack\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComAlgoChallengeBackToOto\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"secComAlgoInfoToken\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"cors\" : {\n \"enabled\" : false,\n \"allowOrigin\" : \"*\",\n \"exposeHeaders\" : [ ],\n \"allowHeaders\" : [ ],\n \"allowMethods\" : [ ],\n \"excludedPatterns\" : [ ],\n \"maxAge\" : null,\n \"allowCredentials\" : true\n },\n \"redirection\" : {\n \"enabled\" : false,\n \"code\" : 303,\n \"to\" : \"https://www.otoroshi.io\"\n },\n \"authConfigRef\" : null,\n \"clientValidatorRef\" : null,\n \"transformerRef\" : null,\n \"transformerRefs\" : [ ],\n \"transformerConfig\" : { },\n \"apiKeyConstraints\" : {\n \"basicAuth\" : {\n \"enabled\" : true,\n \"headerName\" : null,\n \"queryName\" : null\n },\n \"customHeadersAuth\" : {\n \"enabled\" : true,\n \"clientIdHeaderName\" : null,\n \"clientSecretHeaderName\" : null\n },\n \"clientIdAuth\" : {\n \"enabled\" : true,\n \"headerName\" : null,\n \"queryName\" : null\n },\n \"jwtAuth\" : {\n \"enabled\" : true,\n \"secretSigned\" : true,\n \"keyPairSigned\" : true,\n \"includeRequestAttributes\" : false,\n \"maxJwtLifespanSecs\" : null,\n \"headerName\" : null,\n \"queryName\" : null,\n \"cookieName\" : null\n },\n \"routing\" : {\n \"noneTagIn\" : [ ],\n \"oneTagIn\" : [ ],\n \"allTagsIn\" : [ ],\n \"noneMetaIn\" : { },\n \"oneMetaIn\" : { },\n \"allMetaIn\" : { },\n \"noneMetaKeysIn\" : [ ],\n \"oneMetaKeyIn\" : [ ],\n \"allMetaKeysIn\" : [ ]\n 
}\n },\n \"restrictions\" : {\n \"enabled\" : false,\n \"allowLast\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"notFound\" : [ ]\n },\n \"accessValidator\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excludedPatterns\" : [ ]\n },\n \"preRouting\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excludedPatterns\" : [ ]\n },\n \"plugins\" : {\n \"enabled\" : false,\n \"refs\" : [ ],\n \"config\" : { },\n \"excluded\" : [ ]\n },\n \"hosts\" : [ \"otoroshi-api.oto.tools\" ],\n \"paths\" : [ ],\n \"handleLegacyDomain\" : true,\n \"issueCert\" : false,\n \"issueCertCA\" : null\n } ],\n \"errorTemplates\" : [ ],\n \"jwtVerifiers\" : [ ],\n \"authConfigs\" : [ ],\n \"certificates\" : [],\n \"clientValidators\" : [ ],\n \"scripts\" : [ ],\n \"tcpServices\" : [ ],\n \"dataExporters\" : [ ],\n \"tenants\" : [ {\n \"id\" : \"default\",\n \"name\" : \"Default organization\",\n \"description\" : \"The default organization\",\n \"metadata\" : { },\n \"tags\" : [ ]\n } ],\n \"teams\" : [ {\n \"id\" : \"default\",\n \"tenant\" : \"default\",\n \"name\" : \"Default Team\",\n \"description\" : \"The default Team of the default organization\",\n \"metadata\" : { },\n \"tags\" : [ ]\n } ]\n }' \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \n```\n\nThis should output:\n\n```json\n{ \"done\":true }\n```\n\n> Note: be very careful with this POST command. If you send wrong JSON, you risk breaking your instance.\n\nThe second way is to send the same configuration but from a file. You can pass two kinds of file: a `json` file or an `ndjson` file. 
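Since a malformed payload can break your instance, it can be worth sanity-checking an export before POSTing it. Here is a minimal Python sketch (the expected top-level keys are an assumption based on the export shown above):

```python
import json

def check_export(raw):
    # json.loads raises a ValueError subclass on malformed JSON
    data = json.loads(raw)
    # the key list is an assumption based on the export shown in this guide
    missing = [k for k in ('config', 'apiKeys', 'serviceDescriptors') if k not in data]
    if missing:
        raise ValueError('missing top-level keys: ' + ', '.join(missing))
    return data

# in practice, read the payload from your export file instead, e.g.
# check_export(open('./initial-state.json').read())
sample = '{"config": {}, "apiKeys": [], "serviceDescriptors": []}'
check_export(sample)
```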
Both files are available as export methods on the danger zone.\n\n```sh\n# the curl is run from a folder containing the initial-state.json file \ncurl -X POST \\\n -H \"Content-Type: application/json\" \\\n -d @./initial-state.json \\\n 'http://otoroshi-api.oto.tools:8080/api/otoroshi.json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret\n```\n\nThis should output :\n\n```json\n{ \"done\":true }\n```\n\n> Note: To send an `ndjson` file, you have to set the `Content-Type` header to `application/x-ndjson`"},{"name":"index.md","id":"/how-to-s/index.md","url":"/how-to-s/index.html","title":"How to's","content":"# How to's\n\nIn this section, we will explain some mainstream Otoroshi usage scenarios \n\n* @ref:[Otoroshi and WASM](./wasm-usage.md)\n* @ref:[WASM Manager](./wasm-manager-installation.md)\n* @ref:[Tailscale integration](./tailscale-integration.md)\n* @ref:[End-to-end mTLS](./end-to-end-mtls.md)\n* @ref:[Send alerts by emails](./export-alerts-using-mailgun.md)\n* @ref:[Export events to Elasticsearch](./export-events-to-elastic.md)\n* @ref:[Import/export Otoroshi datastore](./import-export-otoroshi-datastore.md)\n* @ref:[Secure an app with Auth0](./secure-app-with-auth0.md)\n* @ref:[Secure an app with Keycloak](./secure-app-with-keycloak.md)\n* @ref:[Secure an app with LDAP](./secure-app-with-ldap.md)\n* @ref:[Secure an api with apikeys](./secure-with-apikey.md)\n* @ref:[Secure an app with OAuth1](./secure-with-oauth1-client.md)\n* @ref:[Secure an api with OAuth2 client_credentials flow](./secure-with-oauth2-client-credentials.md)\n* @ref:[Setup an Otoroshi cluster](./setup-otoroshi-cluster.md)\n* @ref:[TLS termination using Let's Encrypt](./tls-using-lets-encrypt.md)\n* @ref:[Secure an app with jwt verifiers](./secure-an-app-with-jwt-verifiers.md)\n* @ref:[Secure the communication between a backend app and Otoroshi](./secure-the-communication-between-a-backend-app-and-otoroshi.md)\n* @ref:[TLS termination using your own 
certificates](./tls-termination-using-own-certificates.md)\n* @ref:[The resources loader](./resources-loader.md)\n* @ref:[Log levels customization](./custom-log-levels.md)\n* @ref:[Initial state customization](./custom-initial-state.md)\n* @ref:[Communicate with Kafka](./communicate-with-kafka.md)\n* @ref:[Create your custom Authentication module](./create-custom-auth-module.md)\n* @ref:[Working with Eureka](./working-with-eureka.md)\n* @ref:[Instantiate a WAF with Coraza](./instantiate-waf-coraza.md)\n\n@@@ index\n\n\n* [WASM usage](./wasm-usage.md)\n* [WASM Manager](./wasm-manager-installation.md)\n* [Tailscale integration](./tailscale-integration.md)\n* [End-to-end mTLS](./end-to-end-mtls.md)\n* [Send alerts by emails](./export-alerts-using-mailgun.md)\n* [Export events to Elasticsearch](./export-events-to-elastic.md)\n* [Import/export Otoroshi datastore](./import-export-otoroshi-datastore.md)\n* [Secure an app with Auth0](./secure-app-with-auth0.md)\n* [Secure an app with Keycloak](./secure-app-with-keycloak.md)\n* [Secure an app with LDAP](./secure-app-with-ldap.md)\n* [Secure an api with apikeys](./secure-with-apikey.md)\n* [Secure an app with OAuth1](./secure-with-oauth1-client.md)\n* [Secure an api with OAuth2 client_credentials flow](./secure-with-oauth2-client-credentials.md)\n* [Setup an Otoroshi cluster](./setup-otoroshi-cluster.md)\n* [TLS termination using Let's Encrypt](./tls-using-lets-encrypt.md)\n* [Secure an app with jwt verifiers](./secure-an-app-with-jwt-verifiers.md)\n* [Secure the communication between a backend app and Otoroshi](./secure-the-communication-between-a-backend-app-and-otoroshi.md)\n* [TLS termination using your own certificates](./tls-termination-using-own-certificates.md)\n* [The resources loader](./resources-loader.md)\n* [Log levels customization](./custom-log-levels.md)\n* [Initial state customization](./custom-initial-state.md)\n* [Communicate with Kafka](./communicate-with-kafka.md)\n* [Create your custom Authentication 
module](./create-custom-auth-module.md)\n* [Working with Eureka](./working-with-eureka.md)\n* [Instantiate a WAF with Coraza](./instantiate-waf-coraza.md)\n@@@\n"},{"name":"instantiate-waf-coraza.md","id":"/how-to-s/instantiate-waf-coraza.md","url":"/how-to-s/instantiate-waf-coraza.html","title":"Instantiate a WAF with Coraza","content":"# Instantiate a WAF with Coraza\n\nSometimes you may want to secure an app with a [Web Application Firewall (WAF)](https://en.wikipedia.org/wiki/Web_application_firewall) and apply the security rules from the [OWASP Core Rule Set](https://owasp.org/www-project-modsecurity-core-rule-set/). To allow that, we integrated [the Coraza WAF](https://coraza.io/) in Otoroshi through a plugin that uses the WASM version of Coraza.\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Create a WAF configuration\n\nFirst, go to [the features page of Otoroshi](http://otoroshi.oto.tools:8080/bo/dashboard/features) and then click on the [Coraza WAF configs. item](http://otoroshi.oto.tools:8080/bo/dashboard/extensions/coraza-waf/coraza-configs). 
\n\nNow create a new configuration, give it a name and a description, ensure that you enable the `Inspect req/res body` flag and save your configuration.\n\nThe corresponding admin API call is the following :\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/apis/coraza-waf.extensions.otoroshi.io/v1/coraza-configs' \\\n -u admin-api-apikey-id:admin-api-apikey-secret -H 'Content-Type: application/json' -d '\n{\n \"id\": \"coraza-waf-demo\",\n \"name\": \"My blocking WAF\",\n \"description\": \"An awesome WAF\",\n \"inspect_body\": true,\n \"config\": {\n \"directives_map\": {\n \"default\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRuleEngine DetectionOnly\"\n ]\n },\n \"default_directives\": \"default\",\n \"per_authority_directives\": {}\n }\n}'\n```\n\n### Configure Coraza and the OWASP Core Rule Set\n\nNow you can easily configure the Coraza WAF in the `json` config. section. By default, it should look something like :\n\n```json\n{\n \"directives_map\": {\n \"default\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRuleEngine DetectionOnly\"\n ]\n },\n \"default_directives\": \"default\",\n \"per_authority_directives\": {}\n}\n```\n\nYou can find everything about it in [the documentation of Coraza](https://coraza.io/docs/tutorials/introduction/).\n\nHere we have the basic setup to apply the OWASP core rule set in detection mode only. 
\nSo each time Coraza finds something suspicious in a request, it will only log it but let the request pass. We can enable blocking by setting `\"SecRuleEngine On\"`\n\nWe can also deny access to the `/admin` URI by adding the following directive\n\n```json\n\"SecRule REQUEST_URI \\\"@streq /admin\\\" \\\"id:101,phase:1,t:lowercase,deny\\\"\"\n```\n\nYou can also provide multiple profiles of rules in the `directives_map` with different names and use the `per_authority_directives` object to map hostnames to a specific profile.\n\nThe corresponding admin API call is the following :\n\n```sh\ncurl -X PUT 'http://otoroshi-api.oto.tools:8080/apis/coraza-waf.extensions.otoroshi.io/v1/coraza-configs/coraza-waf-demo' \\\n -u admin-api-apikey-id:admin-api-apikey-secret -H 'Content-Type: application/json' -d '\n{\n \"id\": \"coraza-waf-demo\",\n \"name\": \"My blocking WAF\",\n \"description\": \"An awesome WAF\",\n \"inspect_body\": true,\n \"config\": {\n \"directives_map\": {\n \"default\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRule REQUEST_URI \\\"@streq /admin\\\" \\\"id:101,phase:1,t:lowercase,deny\\\"\",\n \"SecRuleEngine On\"\n ]\n },\n \"default_directives\": \"default\",\n \"per_authority_directives\": {}\n }\n}'\n```\n\n### Add the WAF plugin on your route\n\nNow you can create a new route that will use your WAF configuration. Let's say we want a route on `http://wouf.oto.tools:8080` that goes to `https://www.otoroshi.io`. 
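\n\nAs an aside, the multi-profile setup mentioned earlier could be sketched like this (a hypothetical example : the `strict` profile name and the authority mapping are illustrative, not taken from the Coraza documentation) :\n\n```json\n{\n \"directives_map\": {\n \"default\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRuleEngine DetectionOnly\"\n ],\n \"strict\": [\n \"Include @recommended-conf\",\n \"Include @crs-setup-conf\",\n \"Include @owasp_crs/*.conf\",\n \"SecRuleEngine On\"\n ]\n },\n \"default_directives\": \"default\",\n \"per_authority_directives\": {\n \"wouf.oto.tools\": \"strict\"\n }\n}\n```\n\nWith such a configuration, requests to `wouf.oto.tools` would be evaluated with the blocking profile while every other authority stays in detection-only mode.\n\n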
Now add the `Coraza WAF` plugin to your route and, in its configuration, select the WAF configuration you created previously.\n\nThe corresponding admin API call is the following :\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n -H 'Content-Type: application/json' -d '\n{\n \"id\": \"route_demo\",\n \"name\": \"WAF route\",\n \"description\": \"A new route with a WAF enabled\",\n \"frontend\": {\n \"domains\": [\n \"wouf.oto.tools\"\n ]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"www.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.wasm.proxywasm.NgCorazaWAF\",\n \"config\": {\n \"ref\": \"coraza-waf-demo\"\n },\n \"plugin_index\": {\n \"validate_access\": 0,\n \"transform_request\": 0,\n \"transform_response\": 0\n }\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\",\n \"plugin_index\": {\n \"transform_request\": 1\n }\n }\n ]\n}'\n```\n\n### Try to use an exploit ;)\n\nLet's try to trigger Coraza with a Log4Shell crafted request:\n\n```sh\ncurl 'http://wouf.oto.tools:8080' -H 'foo: ${jndi:rmi://foo/bar}' --include\n\nHTTP/1.1 403 Forbidden\nDate: Thu, 25 May 2023 09:47:04 GMT\nContent-Type: text/plain\nContent-Length: 0\n\n```\n\nor try to access `/admin`\n\n```sh\ncurl 'http://wouf.oto.tools:8080/admin' --include\n\nHTTP/1.1 403 Forbidden\nDate: Thu, 25 May 2023 09:47:04 GMT\nContent-Type: text/plain\nContent-Length: 0\n\n```\n\nIf you look at the Otoroshi logs, you will find something like :\n\n```log\n[error] otoroshi-proxy-wasm - [client \"127.0.0.1\"] Coraza: Warning. 
Potential Remote Command Execution: Log4j / Log4shell \n [file \"@owasp_crs/REQUEST-944-APPLICATION-ATTACK-JAVA.conf\"] [line \"10608\"] [id \"944150\"] [rev \"\"] \n [msg \"Potential Remote Command Execution: Log4j / Log4shell\"] [data \"\"] [severity \"critical\"] \n [ver \"OWASP_CRS/4.0.0-rc1\"] [maturity \"0\"] [accuracy \"0\"] [tag \"application-multi\"] \n [tag \"language-java\"] [tag \"platform-multi\"] [tag \"attack-rce\"] [tag \"OWASP_CRS\"] \n [tag \"capec/1000/152/137/6\"] [tag \"PCI/6.5.2\"] [tag \"paranoia-level/1\"] [hostname \"wouf.oto.tools\"] \n [uri \"/\"] [unique_id \"uTYakrlgMBydVGLodbz\"]\n[error] otoroshi-proxy-wasm - [client \"127.0.0.1\"] Coraza: Warning. Inbound Anomaly Score Exceeded (Total Score: 5) \n [file \"@owasp_crs/REQUEST-949-BLOCKING-EVALUATION.conf\"] [line \"11029\"] [id \"949110\"] [rev \"\"] \n [msg \"Inbound Anomaly Score Exceeded (Total Score: 5)\"] \n [data \"\"] [severity \"emergency\"] [ver \"OWASP_CRS/4.0.0-rc1\"] [maturity \"0\"] [accuracy \"0\"] \n [tag \"anomaly-evaluation\"] [hostname \"wouf.oto.tools\"] [uri \"/\"] [unique_id \"uTYakrlgMBydVGLodbz\"]\n[info] otoroshi-proxy-wasm - Transaction interrupted tx_id=\"uTYakrlgMBydVGLodbz\" context_id=3 action=\"deny\" phase=\"http_response_headers\"\n...\n[error] otoroshi-proxy-wasm - [client \"127.0.0.1\"] Coraza: Warning. [file \"\"] [line \"12914\"] \n [id \"101\"] [rev \"\"] [msg \"\"] [data \"\"] [severity \"emergency\"] [ver \"\"] [maturity \"0\"] [accuracy \"0\"] \n [hostname \"wouf.oto.tools\"] [uri \"/admin\"] [unique_id \"mqXZeMdzRaVAqIiqvHf\"]\n[info] otoroshi-proxy-wasm - Transaction interrupted tx_id=\"mqXZeMdzRaVAqIiqvHf\" context_id=2 action=\"deny\" phase=\"http_request_headers\"\n```\n\n### Generated events\n\nEach time Coraza generates a log about a vulnerability detection, an event is generated in Otoroshi and exported through the usual data exporter mechanism. 
The event will look like :\n\n```json\n{\n \"@id\" : \"86b647450-3cc7-42a9-aaec-828d261a8c74\",\n \"@timestamp\" : 1684938211157,\n \"@type\" : \"CorazaTrailEvent\",\n \"@product\" : \"otoroshi\",\n \"@serviceId\" : \"--\",\n \"@service\" : \"--\",\n \"@env\" : \"prod\",\n \"level\" : \"ERROR\",\n \"msg\" : \"Coraza: Warning. Potential Remote Command Execution: Log4j / Log4shell\",\n \"fields\" : {\n \"hostname\" : \"wouf.oto.tools\",\n \"maturity\" : \"0\",\n \"line\" : \"10608\",\n \"unique_id\" : \"oNbisKlXWaCdXntaUpq\",\n \"tag\" : \"paranoia-level/1\",\n \"data\" : \"\",\n \"accuracy\" : \"0\",\n \"uri\" : \"/\",\n \"rev\" : \"\",\n \"id\" : \"944150\",\n \"client\" : \"127.0.0.1\",\n \"ver\" : \"OWASP_CRS/4.0.0-rc1\",\n \"file\" : \"@owasp_crs/REQUEST-944-APPLICATION-ATTACK-JAVA.conf\",\n \"msg\" : \"Potential Remote Command Execution: Log4j / Log4shell\",\n \"severity\" : \"critical\"\n },\n \"raw\" : \"[client \\\"127.0.0.1\\\"] Coraza: Warning. Potential Remote Command Execution: Log4j / Log4shell [file \\\"@owasp_crs/REQUEST-944-APPLICATION-ATTACK-JAVA.conf\\\"] [line \\\"10608\\\"] [id \\\"944150\\\"] [rev \\\"\\\"] [msg \\\"Potential Remote Command Execution: Log4j / Log4shell\\\"] [data \\\"\\\"] [severity \\\"critical\\\"] [ver \\\"OWASP_CRS/4.0.0-rc1\\\"] [maturity \\\"0\\\"] [accuracy \\\"0\\\"] [tag \\\"application-multi\\\"] [tag \\\"language-java\\\"] [tag \\\"platform-multi\\\"] [tag \\\"attack-rce\\\"] [tag \\\"OWASP_CRS\\\"] [tag \\\"capec/1000/152/137/6\\\"] [tag \\\"PCI/6.5.2\\\"] [tag \\\"paranoia-level/1\\\"] [hostname \\\"wouf.oto.tools\\\"] [uri \\\"/\\\"] [unique_id \\\"oNbisKlXWaCdXntaUpq\\\"]\\n\"\n}\n```"},{"name":"resources-loader.md","id":"/how-to-s/resources-loader.md","url":"/how-to-s/resources-loader.html","title":"The resources loader","content":"# The resources loader\n\nThe resources loader is a tool to create an Otoroshi resource from raw content. 
This content can be found on each Otoroshi resource page (service descriptors, apikeys, certificates, etc.). To get the content of a resource as a file, you can use the two export buttons, one to export in JSON format and the other in YAML format.\n\nOnce exported, the content of the resource can be imported with the resources loader. You can import one or multiple resources at a time, in JSON or YAML format.\n\nThe resources loader is available at this route [`bo/dashboard/resources-loader`](http://otoroshi.oto.tools:8080/bo/dashboard/resources-loader).\n\nOn this page, you can paste the content of your resources and click on **Load resources**.\n\nFor each detected resource, the loader will display :\n\n* a resource name corresponding to the field `name` \n* a resource type corresponding to the type of created resource (ServiceDescriptor, ApiKey, Certificate, etc)\n* a toggle to choose if you want to include the element for the creation step\n* the status updated by the creation process\n\nOnce you have selected the resources to create, you can **Import selected resources**.\n\nOnce created, all statuses will be updated. If everything works, the status will be equal to `done`.\n\nIf you want to get back to the initial page, you can use the **restart** button."},{"name":"secure-an-app-with-jwt-verifiers.md","id":"/how-to-s/secure-an-app-with-jwt-verifiers.md","url":"/how-to-s/secure-an-app-with-jwt-verifiers.html","title":"Secure an api with jwt verifiers","content":"# Secure an api with jwt verifiers\n\nA JWT verifier is the guard that verifies the signature of tokens in incoming requests. \n\nA verifier can simply verify tokens, or verify them and generate new ones.\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Your first jwt verifier\n\nLet's start by validating all incoming request tokens on our simple route created in the @ref:[Before you start](#before-you-start) section.\n\n1. Navigate to the simple route\n2. 
Search in the list of plugins and add the `Jwt verification only` plugin on the flow\n3. Click on `Start by select or create a JWT Verifier`\n4. Create a new JWT verifier\n5. Set `simple-jwt-verifier` as `Name`\n6. Select `Hmac + SHA` as `Algo` (for this example, we expect tokens with a symmetric signature), `512` as `SHA size` and `otoroshi` as `HMAC secret`\n7. Confirm the creation \n\nSave your route and try to call it\n\n```sh\ncurl -X GET 'http://myservice.oto.tools:8080/' --include\n```\n\nThis should output : \n```json\n{\n \"Otoroshi-Error\": \"error.expected.token.not.found\"\n}\n```\n\nA simple way to generate a token is to use @link:[jwt.io](http://jwt.io) { open=new }. Once there, define `HS512` as `alg` in the header section and insert `otoroshi` as the verify signature secret. \n\nOnce created, copy-paste the token from jwt.io into the `X-JWT-Token` header and call our service.\n\n```sh\n# replace xxxx by the generated token\ncurl -X GET \\\n -H \"X-JWT-Token: xxxx\" \\\n 'http://myservice.oto.tools:8080'\n```\n\nThis should output a json with `X-JWT-Token` in the headers field. Its value is exactly the same as the passed token.\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.otoroshi.io\",\n \"X-JWT-Token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.ipDFgkww51mSaSg_199BMRj4gK20LGz_czozu3u8rCFFO1X20MwcabSqEzUc0q4qQ4rjTxjoR4HeUDVcw8BxoQ\",\n ...\n }\n}\n```\n\n### Verify and generate a new token\n\nAnother feature is to verify incoming tokens and generate new ones, with a different signature and claims. \n\nLet's start by extending the @link:[previous verifier](http://otoroshi.oto.tools:8080/bo/dashboard/jwt-verifiers) { open=new }.\n\n1. Jump to the `Verif Strategy` field and select `Verify and re-sign JWT token`. \n2. Edit the name to `jwt-verify-and-resign`\n3. Remove the default field in `Verify token fields` array\n4. 
Change the second `Hmac secret` in the `Re-sign settings` section to `otoroshi-internal-secret`\n5. Save your verifier.\n\n> Note : the name of the verifier doesn't impact the identifier. So you can save the changes of your verifier without modifying the identifier used in your call. \n\n```sh\n# replace xxxx by the generated token\ncurl -X GET \\\n -H \"Authorization: xxxx\" \\\n 'http://myservice.oto.tools:8080'\n```\n\nThis should output a json with `authorization` in the headers field. This time, the value is different and you can check its signature on @link:[jwt.io](https://jwt.io) { open=new } (the expected secret of the generated token is **otoroshi-internal-secret**)\n\n\n\n### Verify, transform and generate a new token\n\nThe most advanced verifier is able to do the same as the previous ones, with the ability to configure the token generation (claims, output header name).\n\nLet's start by extending the @link:[previous verifier](http://otoroshi.oto.tools:8080/bo/dashboard/jwt-verifiers) { open=new }.\n\n1. Jump to the `Verif Strategy` field and select `Verify, transform and re-sign JWT token`.\n2. Edit the name to `jwt-verify-transform-and-resign`\n3. Remove the default field in `Verify token fields` array\n4. Change the second `Hmac secret` in the `Re-sign settings` section to `otoroshi-internal-secret`\n5. Set `Internal-Authorization` as `Header name`\n6. Set `key` on first field of `Rename token fields` and `from-otoroshi-verifier` on second field\n7. Set `generated-key` and `generated-value` as `Set token fields`\n8. Add `generated_at` and `${date}` as second field of `Set token fields` (Otoroshi supports an @ref:[expression language](../topics/expression-language.md))\n9. 
Save your verifier and try to call your service again.\n\nThis should output a json with `authorization` in the headers field and our generated token in `Internal-Authorization`.\nOnce pasted in @link:[jwt.io](https://jwt.io) { open=new }, you should have :\n\n\n\nYou can see, in the payload of your token, the two claims **from-otoroshi-verifier** and **generated-key** added during the generation of the token by the JWT verifier.\n"},{"name":"secure-app-with-auth0.md","id":"/how-to-s/secure-app-with-auth0.md","url":"/how-to-s/secure-app-with-auth0.html","title":"Secure an app with Auth0","content":"# Secure an app with Auth0\n\n### Download Otoroshi\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Configure an Auth0 client\n\nThe first step of this tutorial is to set up an Auth0 application with the information of our Otoroshi instance.\n\nNavigate to @link:[https://manage.auth0.com](https://manage.auth0.com) { open=new } (create an account if you don't already have one). \n\nLet's create an application by clicking on the **Applications** button on the sidebar. Then click on the **Create application** button on the top right.\n\n1. Choose `Regular Web Applications` as `Application type`\n2. Then set for example `otoroshi-client` as `Name`, and confirm the creation\n3. Jump to the `Settings` tab\n4. Scroll to the `Application URLs` section and add the following URL as `Allowed Callback URLs` : `http://otoroshi.oto.tools:8080/backoffice/auth0/callback`\n5. Set `https://otoroshi.oto.tools:8080/` as `Allowed Logout URLs`\n6. Set `https://otoroshi.oto.tools:8080` as `Allowed Web Origins` \n7. 
Save changes at the bottom of the page.\n\nOnce done, we have a full setup, with a client ID and secret at the top of the page, which authorizes our Otoroshi and redirects the user to the callback URL when they log into Auth0.\n\n### Create an Auth0 provider module\n\nLet's go back to Otoroshi to create an authentication module with `OAuth2 / OIDC provider` as `type`.\n\n1. Go ahead, and navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new }\n1. Click on the cog icon on the top right\n1. Then `Authentication configs` button\n1. And add a new configuration by clicking on the `Add item` button\n2. Select the `OAuth provider` in the type selector field\n3. Then click on `Get from OIDC config` and paste `https://..auth0.com/.well-known/openid-configuration`. Replace the tenant name with the name of your tenant (displayed at the top left of the Auth0 page), and the region of the tenant (`eu` in my case).\n\nOnce done, set the `Client ID` and the `Client secret` from your Auth0 application. End the configuration with `http://otoroshi.oto.tools:8080/backoffice/auth0/callback` as `Callback URL`.\n\nAt the bottom of the page, disable the `secure` button (because we're using HTTP, and this setting prevents the cookie from being included in a request sent over a non-secure channel, i.e. anything other than HTTPS).\n\n### Connect to Otoroshi with Auth0 authentication\n\nTo secure Otoroshi with your Auth0 configuration, we have to register an **Authentication configuration** as a BackOffice Auth. configuration.\n\n1. Navigate to the **danger zone** (by clicking on the cog on the top right and selecting Danger zone)\n2. Scroll to the **BackOffice auth. settings**\n3. Select your last Authentication configuration (created in the previous section)\n4. Save the global configuration with the button on the top right\n\n#### Testing your configuration\n\n1. Disconnect from your instance\n1. 
Then click on the *Login using third-party* button (or navigate to http://otoroshi.oto.tools:8080)\n2. Click on the **Login using Third-party** button\n3. If all is configured, Otoroshi will redirect you to the Auth0 server login page\n4. Set your account credentials\n5. Good work! You're connected to Otoroshi with an Auth0 module.\n\n### Secure an app with Auth0 authentication\n\nWith the previous configuration, you can secure any Otoroshi service. \n\nThe first step is to apply a small change to the previous configuration. \n\n1. Navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/auth-configs](http://otoroshi.oto.tools:8080/bo/dashboard/auth-configs) { open=new }.\n2. Create a new **Authentication module** configuration with the same values.\n3. Change the `Callback URL` field to `http://privateapps.oto.tools:8080/privateapps/generic/callback` (we changed this value because the redirection of a connected user by a third-party server is handled by a dedicated Otoroshi route).\n4. Disable the `secure` button (because we're using HTTP, and this setting prevents the cookie from being included in a request sent over a non-secure channel, i.e. anything other than HTTPS)\n\n> Note : an Otoroshi service is called **a private app** when it is protected by an Authentication module.\n\nWe can now set the Authentication module on your route.\n\n1. Navigate to any created route\n2. Search the list of plugins for the plugin named `Authentication`\n3. Select your Authentication config inside the list\n4. Don't forget to save your configuration.\n5. 
Now you can try to call your route and see the Auth0 login page appear.\n\n\n"},{"name":"secure-app-with-keycloak.md","id":"/how-to-s/secure-app-with-keycloak.md","url":"/how-to-s/secure-app-with-keycloak.html","title":"Secure an app with Keycloak","content":"# Secure an app with Keycloak\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Running a Keycloak instance with docker\n\n```sh\ndocker run \\\n -p 8080:8080 \\\n -e KEYCLOAK_USER=admin \\\n -e KEYCLOAK_PASSWORD=admin \\\n --name keycloak-server \\\n --detach jboss/keycloak:15.0.1\n```\n\nThis should download the Keycloak image (if you don't already have it) and display the digest of the created container. This command maps TCP port 8080 in the container to port 8080 on your machine and creates a server with `admin/admin` as admin credentials.\n\nOnce started, you can open a browser on @link:[http://localhost:8080](http://localhost:8080) { open=new } and click on `Administration Console`. Log in to your instance with `admin/admin` as credentials.\n\nThe first step is to create a Keycloak client, an entity that can request Keycloak to authenticate a user. Click on the **clients** button on the sidebar, and then on the **Create** button at the top right of the view.\n\nFill in the client form with the following values.\n\n* `Client ID`: `keycloak-otoroshi-backoffice`\n* `Client Protocol`: `openid-connect`\n* `Root URL`: `http://otoroshi.oto.tools:8080/`\n\nValidate the creation of the client by clicking on the **Save** button.\n\nThe next step is to change the `Access Type` used by default. Jump to the `Access Type` field and select `confidential`. The confidential access type forces the client application to send Keycloak a client ID and a client secret. Scroll to the bottom of the page and save the configuration.\n\nNow scroll to the top of your page. Just to the right of the `Settings` tab, a new tab has appeared : the `Credentials` tab. 
Click on this tab, and make sure that `Client Id and Secret` is selected as `Client Authenticator`, and copy the generated `Secret` for the next part.\n\n### Create a Keycloak provider module\n\n1. Go ahead, and navigate to http://otoroshi.oto.tools:8080\n1. Click on the cog icon on the top right\n1. Then `Authentication configs` button\n1. And add a new configuration by clicking on the `Add item` button\n2. Select the `OAuth2 / OIDC provider` in the type selector field\n3. Set a basic name and description\n\nA simple way to import a Keycloak client is to give Otoroshi the URL of the OpenID Connect configuration. By default, Keycloak exposes the following URL : `http://localhost:8080/auth/realms/master/.well-known/openid-configuration`. \n\nClick on the `Get from OIDC config` button and paste the previous link. Once it's done, scroll to the `URLs` section. All URLs have been filled with the values picked from the JSON object returned by the previous URL.\n\nThe only fields to change are : \n\n* `Client ID`: `keycloak-otoroshi-backoffice`\n* `Client Secret`: Paste the secret from the Credentials Keycloak page. In my case, it's something like `90c9bf0b-2c0c-4eb0-aa02-72195beb9da7`\n* `Callback URL`: `http://otoroshi.oto.tools:8080/backoffice/auth0/callback`\n\nAt the bottom of the page, disable the `secure` button (because we're using HTTP, and this setting prevents the cookie from being included in a request sent over a non-secure channel, i.e. anything other than HTTPS). Nothing else to change, just save the configuration.\n\n### Connect to Otoroshi with Keycloak authentication\n\nTo secure Otoroshi with your Keycloak configuration, we have to register an Authentication configuration as a BackOffice Auth. configuration.\n\n1. Navigate to the **danger zone** (by clicking on the cog on the top right and selecting Danger zone)\n1. Scroll to the **BackOffice auth. settings**\n1. Select your last Authentication configuration (created in the previous section)\n1. 
Save the global configuration with the button on the top right\n\n### Testing your configuration\n\n1. Disconnect from your instance\n1. Then click on the **Login using third-party** button (or navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new })\n2. Click on the **Login using Third-party** button\n3. If all is configured, Otoroshi will redirect you to the Keycloak login page\n4. Log in with `admin/admin` and trust the user by clicking on the `yes` button.\n5. Good work! You're connected to Otoroshi with a Keycloak module.\n\n> A fallback solution is always available in the event of a bad authentication configuration. By going to http://otoroshi.oto.tools:8080/bo/simple/login, the administrators will be able to redefine the configuration.\n\n### Visualize an admin user session or a private user session\n\nEach user, whether connected to the Otoroshi UI or to a private Otoroshi app, has their own session. As an administrator of Otoroshi, you can view the list of connected users and their profiles.\n\nLet's start by navigating to the `Admin users sessions` page (just @link:[here](http://otoroshi.oto.tools:8080/bo/dashboard/sessions/admin) or by clicking on the cog, and on the `Admins sessions` button at the bottom of the list).\n\nThis page gives a complete view of the connected admins. For each admin, you can see their connection date and their session expiration date. 
You can also check the `Profile` and the `Rights` of the connected users.\n\nIf we check the profile and the rights of the previously logged-in user (from Keycloak in the previous part), we can retrieve the following information :\n\n```json\n{\n \"sub\": \"4c8cd101-ca28-4611-80b9-efa504ac51fd\",\n \"upn\": \"admin\",\n \"email_verified\": false,\n \"address\": {},\n \"groups\": [\n \"create-realm\",\n \"default-roles-master\",\n \"offline_access\",\n \"admin\",\n \"uma_authorization\"\n ],\n \"preferred_username\": \"admin\"\n}\n```\n\nand their default rights \n\n```sh\n[\n {\n \"tenant\": \"default:rw\",\n \"teams\": [\n \"default:rw\"\n ]\n }\n]\n```\n\nWe haven't created any specific groups in Keycloak or specified rights in Otoroshi for this user. In this case, the user receives the default Otoroshi rights on connection. The user can navigate the default Organization and Team (which are two resources created by Otoroshi at boot) and has full access to them (`r`: read, `w`: write, `*`: read/write).\n\nIn the same way, you'll find all users connected to a private Otoroshi app when navigating to the @link:[`Private App View`](http://otoroshi.oto.tools:8080/bo/dashboard/sessions/private) or using the cog at the top of the page. \n\n### Configure the Keycloak module to force logged in users to be an Otoroshi admin with full access\n\nGo back to the Keycloak module in the `Authentication configs` view. Turn on the `Supers admin only` button and save your configuration. Try connecting to Otoroshi again using the Keycloak third-party server.\n\nOnce connected, click on the cog button, and check that you have access to the full features of Otoroshi (like Admin user sessions). Now, your rights should be : \n```json\n[\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\"\n ]\n }\n]\n```\n\n### Merge Id token content on user profile\n\nGo back to the Keycloak module in the `Authentication configs` view. Turn on the `Read profile from token` button and save your configuration. 
Try connecting to Otoroshi again using the Keycloak third-party server.\n\nOnce connected, your profile should contain the full Keycloak ID token : \n```json\n{\n \"exp\": 1634286674,\n \"iat\": 1634286614,\n \"auth_time\": 1634286614,\n \"jti\": \"eb368578-e886-4caa-a51b-c1d04973c80e\",\n \"iss\": \"http://localhost:8080/auth/realms/master\",\n \"aud\": [\n \"master-realm\",\n \"account\"\n ],\n \"sub\": \"4c8cd101-ca28-4611-80b9-efa504ac51fd\",\n \"typ\": \"Bearer\",\n \"azp\": \"keycloak-otoroshi-backoffice\",\n \"session_state\": \"e44fe471-aa3b-477d-b792-4f7b4caea220\",\n \"acr\": \"1\",\n \"allowed-origins\": [\n \"http://otoroshi.oto.tools:8080\"\n ],\n \"realm_access\": {\n \"roles\": [\n \"create-realm\",\n \"default-roles-master\",\n \"offline_access\",\n \"admin\",\n \"uma_authorization\"\n ]\n },\n \"resource_access\": {\n \"master-realm\": {\n \"roles\": [\n \"view-identity-providers\",\n \"view-realm\",\n \"manage-identity-providers\",\n \"impersonation\",\n \"create-client\",\n \"manage-users\",\n \"query-realms\",\n \"view-authorization\",\n \"query-clients\",\n \"query-users\",\n \"manage-events\",\n \"manage-realm\",\n \"view-events\",\n \"view-users\",\n \"view-clients\",\n \"manage-authorization\",\n \"manage-clients\",\n \"query-groups\"\n ]\n },\n \"account\": {\n \"roles\": [\n \"manage-account\",\n \"manage-account-links\",\n \"view-profile\"\n ]\n }\n }\n ...\n}\n```\n\n### Manage the Otoroshi user rights from Keycloak\n\nOne powerful feature supported by Otoroshi is to use Keycloak group attributes to set the list of rights of an Otoroshi user.\n\nIn the Keycloak module, you have a field named `Otoroshi rights field name`, with `otoroshi_rights` as default value. This field is used by Otoroshi to retrieve information from the ID token groups.\n\nLet's create a group in Keycloak, and add our default admin user to it.\nIn the Keycloak admin console :\n\n1. Navigate to the groups view, using the Keycloak sidebar\n2. 
Create a new group with `my-group` as `Name`\n3. Then, on the `Attributes` tab, create an attribute with `otoroshi_rights` as `Key` and the following JSON array as `Value`\n```json\n[\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\",\n \"my-future-team:rw\"\n ]\n }\n]\n```\n\nWith this configuration, the user has full access to all Otoroshi resources (`my-future-team` doesn't exist in Otoroshi yet, but that's not a problem: Otoroshi can handle it and will apply these rights once the team is present).\n\nClick on the **Add** button and **save** the group. The last step is to assign our user to this group. Jump to the `Users` view using the sidebar, click on **View all users**, then edit the user's group membership using the `Groups` tab (use the **Join** button to assign the user to `my-group`).\n\nThe next step is to add a mapper in the Keycloak client. By default, Keycloak doesn't expose any user information (like group membership or user attributes). We need to ask Keycloak to expose the `otoroshi_rights` user attribute set previously on the group.\n\nNavigate to the `Keycloak-otoroshi-backoffice` client, and jump to the `Mappers` tab. Create a new mapper with the following values: \n\n* Name: `otoroshi_rights`\n* Mapper Type: `User Attribute`\n* User Attribute: `otoroshi_rights`\n* Token Claim Name: `otoroshi_rights`\n* Claim JSON Type: `JSON`\n* Multivalued: `√`\n* Aggregate attribute values: `√`\n\nGo back to the Keycloak authentication module inside the Otoroshi UI, and turn off **Super admins only**. **Save** the configuration.\n\nOnce done, try logging in to Otoroshi again using the Keycloak third-party server.\nNow your rights should be:\n```json\n[\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\",\n \"my-future-team:rw\"\n ]\n }\n]\n```\n\n### Secure an app with Keycloak authentication\n\nThe only change to apply to the previous authentication module is the callback URL. 
When you want to secure an Otoroshi service and transform it into a `Private App`, you need to set the `Callback URL` to `http://privateapps.oto.tools:8080/privateapps/generic/callback`. This configuration will redirect users to the backend service after they have successfully logged in.\n\n1. Go back to the authentication module\n2. Jump to the `Callback URL` field\n3. Paste this value `http://privateapps.oto.tools:8080/privateapps/generic/callback`\n4. Save your configuration\n5. Navigate to `http://myservice.oto.tools:8080`.\n6. You should be redirected to the Keycloak login page.\n7. Once logged in, you can check the content of the private app session created.\n\nThe rights should be:\n\n```json\n[\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\",\n \"my-future-team:rw\"\n ]\n }\n]\n```"},{"name":"secure-app-with-ldap.md","id":"/how-to-s/secure-app-with-ldap.md","url":"/how-to-s/secure-app-with-ldap.html","title":"Secure an app and/or your Otoroshi UI with LDAP","content":"# Secure an app and/or your Otoroshi UI with LDAP\n\n### Before you start\n\n@@include[fetch-and-start.md](../includes/fetch-and-start.md) { #init }\n\n#### Running a simple OpenLDAP server \n\nRun the OpenLDAP docker image: \n```sh\ndocker run \\\n -p 389:389 \\\n -p 636:636 \\\n --env LDAP_ORGANISATION=\"Otoroshi company\" \\\n --env LDAP_DOMAIN=\"otoroshi.tools\" \\\n --env LDAP_ADMIN_PASSWORD=\"otoroshi\" \\\n --env LDAP_READONLY_USER=\"false\" \\\n --env LDAP_TLS=\"false\" \\\n --env LDAP_TLS_ENFORCE=\"false\" \\\n --name my-openldap-container \\\n --detach osixia/openldap:1.5.0\n```\n\nLet's make a first search in our LDAP container:\n\n```sh\ndocker exec my-openldap-container ldapsearch -x -H ldap://localhost -b dc=otoroshi,dc=tools -D \"cn=admin,dc=otoroshi,dc=tools\" -w otoroshi\n```\n\nThis should output:\n```sh\n# extended LDIF\n ...\n# otoroshi.tools\ndn: dc=otoroshi,dc=tools\nobjectClass: top\nobjectClass: dcObject\nobjectClass: organization\no: Otoroshi company\ndc: otoroshi\n\n# 
search result\nsearch: 2\nresult: 0 Success\n...\n```\n\nNow you can seed the OpenLDAP server with a few users. \n\nOpen a shell in your LDAP container.\n\n```sh\ndocker exec -it my-openldap-container \"/bin/bash\"\n```\n\nThe `ldapadd` command needs a file to run.\n\nLaunch this command to create a `bootstrap.ldif` file with one organization, a `singers` group containing the user John, and a `scientists` group containing Baz.\n\n```sh\necho -e \"\ndn: ou=People,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: organizationalUnit\nou: People\n\ndn: ou=Role,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: organizationalUnit\nou: Role\n\ndn: uid=john,ou=People,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nuid: john\ncn: John\nsn: Brown\nmail: john@otoroshi.tools\npostalCode: 88442\nuserPassword: password\n\ndn: uid=baz,ou=People,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: person\nobjectclass: organizationalPerson\nobjectclass: inetOrgPerson\nuid: baz\ncn: Baz\nsn: Wilson\nmail: baz@otoroshi.tools\npostalCode: 88443\nuserPassword: password\n\ndn: cn=singers,ou=Role,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: groupOfNames\ncn: singers\nmember: uid=john,ou=People,dc=otoroshi,dc=tools\n\ndn: cn=scientists,ou=Role,dc=otoroshi,dc=tools\nobjectclass: top\nobjectclass: groupOfNames\ncn: scientists\nmember: uid=baz,ou=People,dc=otoroshi,dc=tools\n\" > bootstrap.ldif\n\nldapadd -x -w otoroshi -D \"cn=admin,dc=otoroshi,dc=tools\" -f bootstrap.ldif -v\n```\n\n### Create an Authentication configuration\n\n- Go ahead and navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new }\n- Click on the cog icon on the top right\n- Then click on the `Authentication configs` button\n- And add a new configuration by clicking on the `Add item` button\n- Select the `Ldap auth. 
provider` in the type selector field\n- Set a basic name and description\n- Then set `ldap://localhost:389` as `LDAP Server URL` and `dc=otoroshi,dc=tools` as `Search Base`\n- Create a group filter (in the next part, we'll change this filter to spread users across different groups with given rights) with \n - `objectClass=groupOfNames` as `Group filter` \n - All as `Tenant`\n - All as `Team`\n - Read/Write as `Rights`\n- Set the search filter as `(uid=${username})`\n- Set `cn=admin,dc=otoroshi,dc=tools` as `Admin username`\n- Set `otoroshi` as `Admin password`\n- At the bottom of the page, disable the `secure` button (because we're using http; this setting prevents the cookie from being sent over a non-secure channel, i.e. anything other than HTTPS)\n\n\n At this point, your configuration should be similar to:\n \n\n\n\n> Don't forget to save your configuration at the bottom of the page before leaving it.\n\n- Test the connection by clicking on the `Test admin connection` button. This should show an `It works!` message\n\n- Finally, test the user connection with `john/password` or `baz/password` as credentials. This should show an `It works!` message\n\n\n### Connect to Otoroshi with LDAP authentication\n\nTo secure Otoroshi with your LDAP configuration, we have to register an **Authentication configuration** as a BackOffice Auth. configuration.\n\n- Navigate to the **danger zone** (click on the cog on the top right and select Danger zone)\n- Scroll to the **BackOffice auth. 
settings**\n- Select your last Authentication configuration (created in the previous section)\n- Save the global configuration with the button on the top right\n\n### Testing your configuration\n\n- Disconnect from your instance\n- Then click on the **Login using third-party** button (or navigate to @link:[http://otoroshi.oto.tools:8080/backoffice/auth0/login](http://otoroshi.oto.tools:8080/backoffice/auth0/login) { open=new })\n- Set `john/password` or `baz/password` as credentials\n\n> A fallback solution is always available in the event of a bad authentication configuration. By going to http://otoroshi.oto.tools:8080/bo/simple/login, the administrators will be able to redefine the configuration.\n\n\n#### Secure an app with LDAP authentication\n\nOnce the configuration is done, you can secure any of your Otoroshi routes. \n\n- Navigate to any created route\n- Add the `Authentication` plugin to your route\n- Select your Authentication config inside the list\n- Save your configuration\n\nNow try to call your route. The login page should appear.\n\n#### Manage LDAP users' rights on Otoroshi\n\nFor each group filter, you can assign a list of rights:\n\n- on an `Organization`\n- on a `Team`\n- and a level of rights: `Read`, `Write` or `Read/Write`\n\n\nStart by navigating to your authentication configuration (created in the @ref:[previous](#create-an-authentication-configuration) step).\n\nThen, replace the values of the `Mapping group filter` field to match LDAP groups with Otoroshi rights.\n\n\n\n\nWith this configuration, Baz is an administrator of Otoroshi with full rights (read/write) on all organizations.\n\nConversely, John can't see any configuration pages (like the danger zone) because he only has read rights on Otoroshi.\n\nYou can easily test this behaviour by @ref:[testing](#testing-your-configuration) with both credentials.\n\n\n#### Advanced usage of LDAP Authentication\n\nIn the previous section, we defined rights for each LDAP group. 
But in some cases, we want finer granularity, like setting rights for a specific user. The last 4 fields of the authentication form cover this. \n\nLet's start by adding a few properties for each connected user with `Extra metadata`.\n\n```json\n// Add this configuration in the extra metadata part\n{\n \"provider\": \"OpenLDAP\"\n}\n```\n\nThe next field, `Data override`, is merged with extra metadata when a user connects to a `private app` or to the UI (inside Otoroshi, a private app is a service secured by an authentication module). The `Email field name` is configured to match the `mail` field from the LDAP user data.\n\n```json \n{\n \"john@otoroshi.tools\": {\n \"stage_name\": \"Will\"\n }\n}\n```\n\nIf you try to connect to an app with this configuration, the resulting user profile should be:\n\n```json\n{\n ...,\n \"metadata\": {\n \"lastname\": \"Willy\",\n \"stage_name\": \"Will\"\n }\n}\n```\n\nLet's try to increase John's rights with the `Additional rights group`.\n\nThis field supports the creation of virtual groups. A virtual group is composed of a list of users and a list of rights for each team/organization.\n\n```json\n// increase_john_rights is a virtual group which grants full access rights to john \n{\n \"increase_john_rights\": {\n \"rights\": [\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\"\n ]\n }\n ],\n \"users\": [\n \"john@otoroshi.tools\"\n ]\n }\n}\n```\n\nThe last field, `Rights override`, is useful when you want to replace the rights of a user with only specific rights. This field is the last to be applied on the user rights. 
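To make the evaluation order concrete, here is an illustrative sketch (plain shell, not Otoroshi code; the values follow this tutorial's John example):

```sh
# 1. Rights coming from the "Mapping group filter": read-only on the default org
rights='[{"tenant":"default:r","teams":["default:r"]}]'

# 2. The "Additional rights group" merges extra rights in: John becomes admin
rights='[{"tenant":"*:rw","teams":["*:rw"]}]'

# 3. "Rights override" is applied last and simply replaces everything
rights='[{"tenant":"*:r","teams":["*:r"]}]'

echo "$rights"
# prints: [{"tenant":"*:r","teams":["*:r"]}]
```

Whatever the earlier stages computed, the override value is what the user ends up with.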
 \n\nTo sum up, when John connects to Otoroshi, he receives read-only rights on the default Organization (from **Mapping group filter**), then he is promoted to the administrator role (from **Additional rights group**), and finally his rights are reset to read-only by the last field, **Rights override**.\n\n```json \n{\n \"john@otoroshi.tools\": [\n {\n \"tenant\": \"*:r\",\n \"teams\": [\n \"*:r\"\n ]\n }\n ]\n}\n```\n\n\n\n\n\n\n\n\n"},{"name":"secure-the-communication-between-a-backend-app-and-otoroshi.md","id":"/how-to-s/secure-the-communication-between-a-backend-app-and-otoroshi.md","url":"/how-to-s/secure-the-communication-between-a-backend-app-and-otoroshi.html","title":"Secure the communication between a backend app and Otoroshi","content":"# Secure the communication between a backend app and Otoroshi\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\nLet's create a new route with the Otoroshi challenge plugin enabled.\n\n```sh\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"myapi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"localhost\",\n \"port\": 8081,\n \"tls\": false\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.OtoroshiChallenge\",\n \"config\": {\n \"version\": 2,\n \"ttl\": 30,\n \"request_header_name\": \"Otoroshi-State\",\n \"response_header_name\": \"Otoroshi-State-Resp\",\n \"algo_to_backend\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"algo_from_backend\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"secret\",\n \"base64\": false\n },\n \"state_resp_leeway\": 10\n }\n }\n ]\n}\nEOF\n```\n\nLet's use the following application, developed in NodeJS, which supports both 
versions of the exchange protocol.\n\nClone this @link:[repository](https://github.com/MAIF/otoroshi/blob/master/demos/challenge) and install the dependencies.\n\n```sh\ngit clone 'git@github.com:MAIF/otoroshi.git' --depth=1\ncd ./otoroshi/demos/challenge\nnpm install\nPORT=8081 node server.js\n```\n\nThe last command should output: \n\n```sh\nchallenge-verifier listening on http://0.0.0.0:8081\n```\n\nThis project runs an Express app with one middleware. The middleware handles each request and checks that the state token header is present. By default, the application expects the incoming `Otoroshi-State` header and sets the `Otoroshi-State-Resp` header on the response. \n\nTry to call your service via http://myapi.oto.tools:8080/. This should return a successful response with all headers received by the backend app. \n\nNow try to disable the middleware in the NodeJS file by commenting out the following line. \n\n```js\n// app.use(OtoroshiMiddleware());\n```\n\nTry calling your service again. This time, Otoroshi rejects the response from your backend service and returns:\n\n```sh\nDownstream microservice does not seems to be secured. Cancelling request !\n```"},{"name":"secure-with-apikey.md","id":"/how-to-s/secure-with-apikey.md","url":"/how-to-s/secure-with-apikey.html","title":"Secure an api with api keys","content":"# Secure an api with api keys\n\n### Before you start\n\n@@include[fetch-and-start.md](../includes/fetch-and-start.md) { #init }\n\n### Create a simple route\n\n**From UI**\n\n1. Navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/routes](http://otoroshi.oto.tools:8080/bo/dashboard/routes) { open=new } and click on the `create new route` button\n2. Give a name to your route\n3. Save your route\n4. Set `myservice.oto.tools` as the frontend domain\n5. 
Set `https://mirror.otoroshi.io` as backend target (hostname: `mirror.otoroshi.io`, port: `443`, Tls: `Enabled`)\n\n**From Admin API**\n\n```sh\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"id\": \"myservice\",\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"myservice.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n }\n}\nEOF\n```\n\n### Secure routes with an api key\n\nBy default, a route is public. In our case, we want to secure all paths starting with `/api` and leave all others unauthenticated.\n\nLet's add a new plugin, called `Apikeys`, to our route. Search for it in the list of plugins, then add it to the flow.\nOnce done, restrict its scope by setting `/api` in the `Informations > include` section.\n\n**From Admin API**\n\n```sh\ncurl -X PUT http://otoroshi-api.oto.tools:8080/api/routes/myservice \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"id\": \"myservice\",\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"myservice.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"include\": [\n \"/api\"\n ],\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"wipe_backend_request\": true,\n \"update_quotas\": true\n }\n }\n ]\n}\nEOF\n```\n\nNavigate to @link:[http://myservice.oto.tools:8080/api/test](http://myservice.oto.tools:8080/api/test) { open=new } again. 
If the service is configured, you should get a `Service Not found` error.\n\nThis expected error on `/api/test` indicates that an api key is required to access this part of the backend service.\n\nNavigate to any other route not starting with `/api/`, like @link:[http://myservice.oto.tools:8080/test/bar](http://myservice.oto.tools:8080/test/bar) { open=new }\n\n\n### Generate an api key to request secured services\n\nNavigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/apikeys/add](http://otoroshi.oto.tools:8080/bo/dashboard/apikeys/add) { open=new } or click on the **Add apikey** button in the sidebar.\n\nThe only required fields of an Otoroshi api key are: \n\n* `ApiKey id`\n* `ApiKey Secret`\n* `ApiKey Name`\n\nThese fields are automatically generated by Otoroshi. However, you can override these values and add a description.\n\nTo simplify the rest of the tutorial, set the values:\n\n* `my-first-api-key-id` as `ApiKey Id`\n* `my-first-api-key-secret` as `ApiKey Secret`\n\nClick on the **Create and stay on this ApiKey** button at the bottom of the page.\n\nNow that you have created the key, it's time to call our previously created service with it.\n\nOtoroshi supports two methods to achieve that. 
\nThe first is to pass the Otoroshi api key in two headers: `Otoroshi-Client-Id` and `Otoroshi-Client-Secret` (these header names can be overridden on each service).\nThe second is to pass the Otoroshi api key in the authentication header (basically the `Authorization` header) as a Basic-encoded value.\n\nLet's go ahead and call our service:\n\n```sh\ncurl -X GET \\\n -H 'Otoroshi-Client-Id: my-first-api-key-id' \\\n -H 'Otoroshi-Client-Secret: my-first-api-key-secret' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\nAnd with the second method:\n\n```sh\ncurl -X GET \\\n -H 'Authorization: Basic bXktZmlyc3QtYXBpLWtleS1pZDpteS1maXJzdC1hcGkta2V5LXNlY3JldA==' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\n> Tip: To easily fill your headers, you can jump to the `Call examples` section in each api key view. In this section, the header names are the default values and the service url is not set; you have to adapt these lines to your case. \n\n### Override default header names for a route\n\nIn some cases, we want to change the default header names (and it's quite a good idea).\n\nLet's start by navigating to the `Apikeys` plugin in the Designer of our route.\n\nThe first values to change are the header names used to read the api key from the client. 
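Before changing them, a quick note on the Basic method seen above: the header value is simply the base64 encoding of `clientId:clientSecret` (here, the api key created earlier in this tutorial), which you can reproduce from a shell:

```sh
# Encode "clientId:clientSecret" for the Authorization header
# (values are the api key created earlier in this tutorial)
basic=$(printf '%s' 'my-first-api-key-id:my-first-api-key-secret' | base64)
echo "Authorization: Basic $basic"
# prints: Authorization: Basic bXktZmlyc3QtYXBpLWtleS1pZDpteS1maXJzdC1hcGkta2V5LXNlY3JldA==
```

The output matches the `Authorization` header used in the curl example above.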
Start by clicking on `extractors > CustomHeaders` and set the following values:\n\n* `api-key-header-id` as `Custom client id header name`\n* `api-key-header-secret` as `Custom client secret header name`\n\nSave the route, and call the service again.\n\n```sh\ncurl -X GET \\\n -H 'Otoroshi-Client-Id: my-first-api-key-id' \\\n -H 'Otoroshi-Client-Secret: my-first-api-key-secret' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\nThis should output an error because Otoroshi now expects the api key in other headers.\n\n```json\n{\n \"Otoroshi-Error\": \"No ApiKey provided\"\n}\n```\n\nCall the service once again, but with the new header names.\n\n```sh\ncurl -X GET \\\n -H 'api-key-header-id: my-first-api-key-id' \\\n -H 'api-key-header-secret: my-first-api-key-secret' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\nAll other services will continue to accept api keys via the `Otoroshi-Client-Id` and `Otoroshi-Client-Secret` headers, whereas our service will accept the `api-key-header-id` and `api-key-header-secret` headers.\n\n### Accept only api keys with expected values\n\nBy default, a secured service accepts any request carrying an api key. But all generated api keys are eligible to call our service, and in some cases we want to authorize only a couple of api keys.\n\nYou can restrict the list of accepted api keys by giving a list of `metadata` and/or `tags`. Each api key has a list of `tags` and `metadata`, which can be used by Otoroshi to validate a request with an api key. All api key metadata/tags can be forwarded to your service (see the `Otoroshi Challenge` section of a service for more information about the `Otoroshi info. token`).\n\nLet's start by only accepting api keys with the `otoroshi` tag.\n\nClick on the `ApiKeys` plugin, and enable the `Routing` section. 
These constraints guarantee that a request will only be transmitted if all the constraints are validated.\n\nIn our first case, set `otoroshi` in the `One Tag in` array and save the service.\nThen call our service with:\n```sh\ncurl -X GET \\\n -H 'Otoroshi-Client-Id: my-first-api-key-id' \\\n -H 'Otoroshi-Client-Secret: my-first-api-key-secret' \\\n 'http://myservice.oto.tools:8080/api/test' --include\n```\n\nThis should output:\n```json\n// Error reason: our api key doesn't contain the expected tag.\n{\n \"Otoroshi-Error\": \"Bad API key\"\n}\n```\n\nNavigate to the edit page of our api key, and jump to the `Metadata and tags` section.\nIn this section, add `otoroshi` to the `Tags` array, then save the api key. Make the call once again, and you should get a successful response from our backend service.\n\nIn this example, we have limited our service to API keys that have `otoroshi` as a tag.\n\nOtoroshi provides a few other behaviours. For each behaviour, *the api key used should*:\n\n* `All Tags in` : have all of the following tags\n* `No Tags in` : not have one of the following tags\n* `One Tag in` : have at least one of the following tags\n\n---\n\n* `All Meta. in` : have all of the following metadata entries\n* `No Meta. in` : not have one of the following metadata entries\n* `One Meta. 
in` : have at least one of the following metadata entries\n\n----\n\n* `One Meta key in` : have at least one of the following keys in metadata\n* `All Meta key in` : have all of the following keys in metadata\n* `No Meta key in` : not have one of the following keys in metadata"},{"name":"secure-with-oauth1-client.md","id":"/how-to-s/secure-with-oauth1-client.md","url":"/how-to-s/secure-with-oauth1-client.html","title":"Secure an app with OAuth1 client flow","content":"# Secure an app with OAuth1 client flow\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Running a simple OAuth 1 server\n\nIn this tutorial, we'll instantiate an OAuth 1 server with docker. If you already have one, skip ahead @ref:[to the next section](#create-an-oauth-1-provider-module).\n\nLet's start by running the server:\n\n```sh\ndocker run -d --name oauth1-server --rm \\\n -p 5000:5000 \\\n -e OAUTH1_CLIENT_ID=2NVVBip7I5kfl0TwVmGzTphhC98kmXScpZaoz7ET \\\n -e OAUTH1_CLIENT_SECRET=wXzb8tGqXNbBQ5juA0ZKuFAmSW7RwOw8uSbdE3MvbrI8wjcbGp \\\n -e OAUTH1_REDIRECT_URI=http://otoroshi.oto.tools:8080/backoffice/auth0/callback \\\n ghcr.io/beryju/oauth1-test-server\n```\n\nWe created an OAuth 1 server which accepts `http://otoroshi.oto.tools:8080/backoffice/auth0/callback` as `Redirect URI`. This URL is used by Otoroshi to retrieve a token and a profile at the end of an authentication process.\n\nAfter this command, the container logs should output:\n```sh \n127.0.0.1 - - [14/Oct/2021 12:10:49] \"HEAD /api/health HTTP/1.1\" 200 -\n```\n\n### Create an OAuth 1 provider module\n\n1. Go ahead and navigate to @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new }\n1. Click on the cog icon on the top right\n1. Then click on the **Authentication configs** button\n1. And add a new configuration by clicking on the **Add item** button\n2. Select `Oauth1 provider` in the type selector field\n3. 
Set a basic name and description like `oauth1-provider`\n4. Set `2NVVBip7I5kfl0TwVmGzTphhC98kmXScpZaoz7ET` as `Consumer key`\n5. Set `wXzb8tGqXNbBQ5juA0ZKuFAmSW7RwOw8uSbdE3MvbrI8wjcbGp` as `Consumer secret`\n6. Set `http://localhost:5000/oauth/request_token` as `Request Token URL`\n7. Set `http://localhost:5000/oauth/authorize` as `Authorize URL`\n8. Set `http://localhost:5000/oauth/access_token` as `Access token URL`\n9. Set `http://localhost:5000/api/me` as `Profile URL`\n10. Set `http://otoroshi.oto.tools:8080/backoffice/auth0/callback` as `Callback URL`\n11. At the bottom of the page, disable the **secure** button (because we're using http; this setting prevents the cookie from being sent over a non-secure channel, i.e. anything other than HTTPS)\n\n At this point, your configuration should be similar to:\n\n\n\n\nWith this configuration, the connected user will receive the default access rights on teams and organizations. If you want to change the access rights for a specific user, you can achieve it with the `Rights override` field and a configuration like:\n\n```json\n{\n \"foo@example.com\": [\n {\n \"tenant\": \"*:rw\",\n \"teams\": [\n \"*:rw\"\n ]\n }\n ]\n}\n```\n\nSave your configuration at the bottom of the page, then navigate to the `danger zone` to use your module as a third-party connection to the Otoroshi UI.\n\n### Connect to Otoroshi with OAuth1 authentication\n\nTo secure Otoroshi with your OAuth1 configuration, we have to register an Authentication configuration as a BackOffice Auth. configuration.\n\n1. Navigate to the **danger zone** (click on the cog on the top right and select Danger zone)\n1. Scroll to the **BackOffice auth. settings**\n1. Select your last Authentication configuration (created in the previous section)\n1. Save the global configuration with the button on the top right\n\n### Testing your configuration\n\n1. Disconnect from your instance\n1. 
Then navigate to http://otoroshi.oto.tools:8080\n2. Click on the **Login using Third-party** button\n3. If everything is configured, Otoroshi will redirect you to the OAuth 1 server login page\n4. Set `example-user` as the user and trust it by clicking on the `yes` button.\n5. Good work! You're connected to Otoroshi with an OAuth1 module.\n\n> A fallback solution is always available in the event of a bad authentication configuration. By going to http://otoroshi.oto.tools:8080/bo/simple/login, the administrators will be able to redefine the configuration.\n\n### Secure an app with OAuth 1 authentication\n\nWith the previous configuration, you can secure any of your Otoroshi services. \n\nThe first step is to apply a small change to the previous configuration. \n\n1. Navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/auth-configs](http://otoroshi.oto.tools:8080/bo/dashboard/auth-configs) { open=new }.\n2. Create a new auth module configuration with the same values.\n3. Set the `Callback URL` field to `http://privateapps.oto.tools:8080/privateapps/generic/callback` (we changed this value because the redirection of a logged-in user by a third-party server is covered by another Otoroshi route).\n4. Disable the `secure` button (because we're using http; this setting prevents the cookie from being sent over a non-secure channel, i.e. anything other than HTTPS)\n\n> Note: an Otoroshi service is called a private app when it is protected by an authentication module.\n\nOur example server supports only one redirect URI. 
We need to kill it and create a new container with `http://otoroshi.oto.tools:8080/privateapps/generic/callback` as `OAUTH1_REDIRECT_URI`.\n\n```sh\ndocker rm -f oauth1-server\ndocker run -d --name oauth1-server --rm \\\n -p 5000:5000 \\\n -e OAUTH1_CLIENT_ID=2NVVBip7I5kfl0TwVmGzTphhC98kmXScpZaoz7ET \\\n -e OAUTH1_CLIENT_SECRET=wXzb8tGqXNbBQ5juA0ZKuFAmSW7RwOw8uSbdE3MvbrI8wjcbGp \\\n -e OAUTH1_REDIRECT_URI=http://privateapps.oto.tools:8080/privateapps/generic/callback \\\n ghcr.io/beryju/oauth1-test-server\n```\n\nOnce the authentication module and the new container are created, we can set the authentication module on the service.\n\n1. Navigate to any created route\n2. Search for the plugin named `Authentication` in the list of plugins\n3. Select your Authentication config inside the list\n4. Don't forget to save your configuration.\n\nNow you can try to call your route and see the login page appear.\n\n> \n\nThen allow access to the user.\n\n> \n\nIf you hit any errors, make sure to:\n\n* check whether you are on http or https, and whether the **secure cookie option** is enabled on the authentication module\n* check that your OAuth1 server has the REDIRECT_URI set to **privateapps/...**\n* make sure your server supports the POST or GET OAuth1 flow set on the authentication module\n\nOnce the configuration is working, you can check, when connecting with an Otoroshi admin user, the `Private App session` created (use the cog at the top right of the page and select `Priv. app sessions`, or navigate to @link:[http://otoroshi.oto.tools:8080/bo/dashboard/sessions/private](http://otoroshi.oto.tools:8080/bo/dashboard/sessions/private) { open=new }).\n\nOne interesting feature is to check the profile of the connected user. 
In our case, when clicking on the `Profile` button of the corresponding user, we should see: \n\n```json\n{\n \"email\": \"foo@example.com\",\n \"id\": 1,\n \"name\": \"test name\",\n \"screen_name\": \"example-user\"\n}\n```"},{"name":"secure-with-oauth2-client-credentials.md","id":"/how-to-s/secure-with-oauth2-client-credentials.md","url":"/how-to-s/secure-with-oauth2-client-credentials.html","title":"Secure an app with OAuth2 client_credential flow","content":"# Secure an app with OAuth2 client_credential flow\n\nOtoroshi makes it easy for your app to implement the [OAuth2 Client Credentials Flow](https://auth0.com/docs/authorization/flows/client-credentials-flow). \n\nWith machine-to-machine (M2M) applications, the system authenticates and authorizes the app rather than a user. With the client credential flow, applications will pass along their Client ID and Client Secret to authenticate themselves and get a token.\n\n## Deploy the Client Credential Service\n\nThe Client Credential Service must be enabled as a global plugin on your Otoroshi instance. Once enabled, it will expose three endpoints to issue and validate tokens for your routes.\n\nLet's navigate to your Otoroshi instance (in our case http://otoroshi.oto.tools:8080) on the danger zone (`top right cog icon / Danger zone` or at [/bo/dashboard/dangerzone](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone)).\n\nTo enable a plugin globally on Otoroshi, you must add it to the `Global Plugins` section.\n\n1. Open the `Global Plugin` section \n2. Click on `enabled` (if not already done)\n3. Search for the plugin named `Client Credential Service` of type `Sink` (you need to enable it on the old or new Otoroshi engine, depending on your use case)\n4. 
Inject the default configuration by clicking on the button (if you are using the old Otoroshi engine)\n\nIf you click on the arrow near each plugin, you will see the documentation of the plugin and its default configuration.\n\nThe client credential plugin has 4 parameters by default: \n\n* `domain`: a regex used to expose the three endpoints (`default`: *)\n* `expiration`: duration until the token expires (in ms) (`default`: 3600000)\n* `defaultKeyPair`: a key pair used to sign the jwt token. By default, Otoroshi is deployed with an `otoroshi-jwt-signing` key pair that you can see in the jwt verifiers certificates (`default`: \"otoroshi-jwt-signing\")\n* `secure`: if enabled, Otoroshi will expose the routes only for https requests (`default`: true)\n\nIn this tutorial, we will set the configuration as follows: \n\n* `domain`: oauth.oto.tools\n* `expiration`: 3600000\n* `defaultKeyPair`: otoroshi-jwt-signing\n* `secure`: false\n\nNow that the plugin is running, three routes are exposed on each domain matching the regex.\n\n* `GET /.well-known/otoroshi/oauth/jwks.json` : retrieve all public keys present in Otoroshi\n* `POST /.well-known/otoroshi/oauth/token/introspect` : validate and decode the token \n* `POST /.well-known/otoroshi/oauth/token` : generate a token with the fields provided\n\nOnce the global configuration is saved, we can deploy a simple service to test it.\n\nLet's navigate to the routes page, and create a new route with: \n\n1. `foo.oto.tools` as `domain` in the frontend node\n2. `mirror.otoroshi.io` as hostname in the list of targets of the backend node, and `443` as `port`.\n3. Search in the list of plugins and add the `Apikeys` plugin to the flow\n4. In the extractors section of the `Apikeys` plugin, disable the `Basic`, `Client id` and `Custom headers` options.\n5. 
Save your route\n\nLet's make a first call, to check if the jwks are already exposed :\n\n```sh\ncurl 'http://oauth.oto.tools:8080/.well-known/otoroshi/oauth/jwks.json'\n```\n\nThe output should look like a list of public keys : \n```sh\n{\n \"keys\": [\n {\n \"kty\": \"RSA\",\n \"e\": \"AQAB\",\n \"kid\": \"otoroshi-intermediate-ca\",\n ...\n }\n ...\n ]\n}\n``` \n\nLet's make a call to your route. \n\n```sh\ncurl 'http://foo.oto.tools:8080/'\n```\n\nThis should output the expected error: \n```json\n{\n \"Otoroshi-Error\": \"No ApiKey provided\"\n}\n```\n\nThe first step is to generate an api key. Navigate to the api keys page, and create an item with the following values (it will be easier to use them in the next step)\n\n* `my-id` as `ApiKey Id`\n* `my-secret` as `ApiKey Secret`\n\nThe next step is to get a token by calling the endpoint `http://oauth.oto.tools:8080/.well-known/otoroshi/oauth/token`. The required fields are the grant type, the client id and the client secret corresponding to our generated api key.\n\n```sh\ncurl -X POST http://oauth.oto.tools:8080/.well-known/otoroshi/oauth/token \\\n-H \"Content-Type: application/json\" \\\n-d @- <<'EOF'\n{\n \"grant_type\": \"client_credentials\",\n \"client_id\":\"my-id\",\n \"client_secret\":\"my-secret\"\n}\nEOF\n```\n\nThis request has one more optional field, named `scope`. 
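For example, if you want scopes on the token, the request body can carry an additional `scope` field alongside the credentials (illustrative fragment only — the scope names below are made up, not defined by Otoroshi):

```json
{
  "grant_type": "client_credentials",
  "client_id": "my-id",
  "client_secret": "my-secret",
  "scope": "read:orders write:orders"
}
```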
The scope can be used to set a list of scopes on the generated access token.\n\nThe output of the last command should look like : \n\n```sh\n{\n \"access_token\": \"generated-token-xxxxx\",\n \"token_type\": \"Bearer\",\n \"expires_in\": 3600\n}\n```\n\nNow we can call our api with the generated token\n\n```sh\ncurl 'http://foo.oto.tools:8080/' \\\n -H \"Authorization: Bearer generated-token-xxxxx\"\n```\n\nThis should output a successful call with the list of headers, including a field named `Authorization` containing the previous access token.\n\n## Other possible configuration\n\nBy default, Otoroshi generates the access token with the key pair specified in the configuration. But, in some cases, you may want a specific key pair per client_id/client_secret.\nThe `jwt-sign-keypair` metadata can be set on any api key with the id of the key pair as value. \n"},{"name":"setup-otoroshi-cluster.md","id":"/how-to-s/setup-otoroshi-cluster.md","url":"/how-to-s/setup-otoroshi-cluster.html","title":"Setup an Otoroshi cluster","content":"# Setup an Otoroshi cluster\n\nIn this tutorial, you will create an Otoroshi cluster.\n\n### Summary \n\n1. Deploy an Otoroshi cluster with one leader and 2 workers \n2. Add a load balancer in front of the workers \n3. 
Validate the installation by adding a header on the requests\n\nLet's start by downloading the latest jar of Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nThen create an instance of Otoroshi and indicate with the `otoroshi.cluster.mode` environment variable that it will be the leader.\n\n```sh\njava -Dhttp.port=8091 -Dhttps.port=9091 -Dotoroshi.cluster.mode=leader -jar otoroshi.jar\n```\n\nLet's create two Otoroshi workers, exposed on the `:8092/:9092` and `:8093/:9093` ports, and set the leader URL in the `otoroshi.cluster.leader.urls` environment variable.\n\nThe first worker will listen on the `:8092/:9092` ports\n```sh\njava \\\n -Dotoroshi.cluster.worker.name=worker-1 \\\n -Dhttp.port=8092 \\\n -Dhttps.port=9092 \\\n -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0='http://127.0.0.1:8091' -jar otoroshi.jar\n```\n\nThe second worker will listen on the `:8093/:9093` ports\n```sh\njava \\\n -Dotoroshi.cluster.worker.name=worker-2 \\\n -Dhttp.port=8093 \\\n -Dhttps.port=9093 \\\n -Dotoroshi.cluster.mode=worker \\\n -Dotoroshi.cluster.leader.urls.0='http://127.0.0.1:8091' -jar otoroshi.jar\n```\n\nOnce launched, you can navigate to the @link:[cluster view](http://otoroshi.oto.tools:8091/bo/dashboard/cluster) { open=new }. The cluster is now configured, you can see the 3 instances and some health information on each instance.\n\nTo complete our installation, we want to spread the incoming requests across otoroshi worker instances. \n\nIn this tutorial, we will use `haproxy` as a TCP load balancer. 
If you don't have haproxy installed, you can use docker to run an haproxy instance as explained below.\n\nBut first, we need an haproxy configuration file named `haproxy.cfg` with the following content :\n\n```sh\nfrontend front_nodes_http\n bind *:8080\n mode tcp\n default_backend back_http_nodes\n timeout client 1m\n\nbackend back_http_nodes\n mode tcp\n balance roundrobin\n server node1 host.docker.internal:8092 # (1)\n server node2 host.docker.internal:8093 # (1)\n timeout connect 10s\n timeout server 1m\n```\n\nand run haproxy with this config file\n\nno docker\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #no_docker }\n\ndocker (on linux)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_linux }\n\ndocker (on macos)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_mac }\n\ndocker (on windows)\n: @@snip [run.sh](../snippets/cluster-run-ha.sh) { #docker_windows }\n\nThe last step is to create a route with a rule that adds a specific header value identifying the worker that handled the request.\n\nCreate this route, exposed on `http://api.oto.tools:xxxx`, which will forward all requests to the mirror `https://mirror.otoroshi.io`.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8091/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"myapi\",\n \"frontend\": {\n \"domains\": [\"api.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AdditionalHeadersIn\",\n \"config\": {\n \"headers\": {\n \"worker-name\": \"${config.otoroshi.cluster.worker.name}\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nOnce created, call the service twice. 
If everything is working, the header received by the backend service will alternate between `worker-1` and `worker-2`.\n\n```sh\ncurl 'http://api.oto.tools:8080'\n## Response headers\n{\n ...\n \"worker-name\": \"worker-2\"\n ...\n}\n```\n\nThis should output `worker-1`, then `worker-2`, etc. Well done, your load balancing is working and your cluster is set up correctly.\n\n\n"},{"name":"tailscale-integration.md","id":"/how-to-s/tailscale-integration.md","url":"/how-to-s/tailscale-integration.html","title":"Tailscale integration","content":"# Tailscale integration\n\n[Tailscale](https://tailscale.com/) is a VPN service that lets you create your own private network based on [Wireguard](https://www.wireguard.com/). Tailscale goes beyond a simple meshed Wireguard-based VPN and offers out of the box NAT traversal, third party identity provider integration, access control, MagicDNS and Let's Encrypt integration for the machines on your VPN.\n\nOtoroshi provides some plugins out of the box to work in a [Tailscale](https://tailscale.com/) environment.\n\nBy default, Otoroshi works out of the box when integrated in a `tailnet`, as you can contact other machines using their IP addresses. But we can go a little bit further.\n\n## tailnet configuration\n\nFirst, go to your tailnet settings on [tailscale.com](https://login.tailscale.com/admin/machines) and open the [DNS tab](https://login.tailscale.com/admin/dns). Here you can find \n\n* your tailnet name: the domain name of all your machines on your tailnet\n* MagicDNS: a way to address your machines by directly using their names\n* HTTPS Certificates: HTTPS certificates provisioning for all your machines\n\nTo use the Otoroshi Tailscale plugins, you must enable `MagicDNS` and `HTTPS Certificates`\n\n## Tailscale certificates integration\n\nYou can use Tailscale-generated Let's Encrypt certificates in Otoroshi by using the `Tailscale certificate fetcher job` in the plugins section of the danger zone. 
Once enabled, this job will fetch certificates for domains in `xxxx.ts.net` that belong to your tailnet. \n\nAs usual, the fetched certificates will be available in the [certificates page](http://otoroshi.oto.tools:8080/bo/dashboard/certificates) of otoroshi.\n\n## Tailscale targets integration\n\nThe following pair of plugins lets you contact Tailscale machines by name, even if there are multiple instances.\n\nWhen you register a machine on a tailnet, you have to provide a name for it, let's say `my-server`. This machine will be addressable in your tailnet with `my-server.tailxxx.ts.net`. But if you have multiple instances of the same server on several machines with the same `my-server` name, their DNS names on the tailnet will be `my-server.tailxxx.ts.net`, `my-server-1.tailxxx.ts.net`, `my-server-2.tailxxx.ts.net`, etc. If you want to use those names in an otoroshi backend, it could be tricky if the application has something like autoscaling enabled.\n\nIn that case, you can add the `Tailscale targets job` in the plugins section of the danger zone. Once enabled, this job will periodically fetch the available machines on the tailnet with their names and DNS names. Then, in a route, you can use the `Tailscale select target by name` plugin to tell otoroshi to load balance traffic between all machines that have the name specified in the plugin config, 
instead of their DNS name."},{"name":"tls-termination-using-own-certificates.md","id":"/how-to-s/tls-termination-using-own-certificates.md","url":"/how-to-s/tls-termination-using-own-certificates.html","title":"TLS termination using your own certificates","content":"# TLS termination using your own certificates\n\nThe goal of this tutorial is to expose a service via https using a certificate generated by openssl.\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\nTry to call the service.\n\n```sh\ncurl 'http://myservice.oto.tools:8080'\n```\n\nThis should output something like\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.opunmaif.io\",\n \"accept\": \"*/*\",\n \"user-agent\": \"curl/7.64.1\",\n \"x-forwarded-port\": \"443\",\n \"opun-proxied-host\": \"mirror.otoroshi.io\",\n \"otoroshi-request-id\": \"1463145856319359618\",\n \"otoroshi-proxied-host\": \"myservice.oto.tools:8080\",\n \"opun-gateway-request-id\": \"1463145856554240100\",\n \"x-forwarded-proto\": \"https\",\n },\n \"body\": \"\"\n}\n```\n\nLet's try to call the service in https.\n\n```sh\ncurl 'https://myservice.oto.tools:8443'\n```\n\nThis should output\n\n```sh\ncurl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to myservice.oto.tools:8443\n```\n\nTo fix it, we have to generate a certificate and import it in Otoroshi to match the domain `myservice.oto.tools`.\n\n> If you already had a certificate you can skip the next set of commands and directly import your certificate in Otoroshi\n\nWe will use openssl to generate a private key and a self-signed certificate.\n\n```sh\nopenssl genrsa -out myservice.key 4096\n# remove pass phrase\nopenssl rsa -in myservice.key -out myservice.key\n# generate the certificate authority cert\nopenssl req -new -x509 -sha256 -days 730 -key myservice.key -out myservice.cer -subj \"/CN=myservice.oto.tools\"\n```\n\nCheck the content of the certificate \n\n```sh\nopenssl x509 -in 
myservice.cer -text\n```\n\nThis should contain something like\n\n```sh\nCertificate:\n Data:\n Version: 1 (0x0)\n Serial Number: 9572962808320067790 (0x84d9fef455f188ce)\n Signature Algorithm: sha256WithRSAEncryption\n Issuer: CN=myservice.oto.tools\n Validity\n Not Before: Nov 23 14:25:55 2021 GMT\n Not After : Nov 23 14:25:55 2022 GMT\n Subject: CN=myservice.oto.tools\n Subject Public Key Info:\n Public Key Algorithm: rsaEncryption\n Public-Key: (4096 bit)\n Modulus:\n...\n```\n\nOnce generated, go back to Otoroshi and navigate to the certificates management page (`top right cog icon / SSL/TLS certificates` or at @link:[`/bo/dashboard/certificates`](http://otoroshi.oto.tools:8080/bo/dashboard/certificates)) and click on `Add item`.\n\nSet `myservice-certificate` as `name` and `description`.\n\nDrop the `myservice.cer` file or copy its content to the `Certificate full chain` field.\n\nDo the same for the `myservice.key` file in the `Certificate private key` field.\n\nSet your passphrase in the `private key password` field if you added one.\n\nLet's try the same call to the service.\n\n```sh\ncurl 'https://myservice.oto.tools:8443'\n```\n\nAn error should occur because the received server certificate is not trusted\n\n```sh\ncurl: (60) SSL certificate problem: self signed certificate\nMore details here: https://curl.haxx.se/docs/sslcerts.html\n\ncurl failed to verify the legitimacy of the server and therefore could not\nestablish a secure connection to it. 
To learn more about this situation and\nhow to fix it, please visit the web page mentioned above.\n```\n\nEnd this tutorial by trusting the certificate server \n\n```sh\ncurl 'https://myservice.oto.tools:8443' --cacert myservice.cer\n```\n\nThis should finally output\n\n```json\n{\n \"method\": \"GET\",\n \"path\": \"/\",\n \"headers\": {\n \"host\": \"mirror.opunmaif.io\",\n \"accept\": \"*/*\",\n \"user-agent\": \"curl/7.64.1\",\n \"x-forwarded-port\": \"443\",\n \"opun-proxied-host\": \"mirror.otoroshi.io\",\n \"otoroshi-request-id\": \"1463158439730479893\",\n \"otoroshi-proxied-host\": \"myservice.oto.tools:8443\",\n \"opun-gateway-request-id\": \"1463158439558515871\",\n \"x-forwarded-proto\": \"https\",\n \"sozu-id\": \"01FN6MGKSYZNJYHEMP4R5PJ4Q5\"\n },\n \"body\": \"\"\n}\n```\n\n"},{"name":"tls-using-lets-encrypt.md","id":"/how-to-s/tls-using-lets-encrypt.md","url":"/how-to-s/tls-using-lets-encrypt.html","title":"TLS termination using Let's Encrypt","content":"# TLS termination using Let's Encrypt\n\nAs you know, Otoroshi is capable of doing TLS termination for your services. You can import your own certificates, generate certificates from scratch and you can also use the @link:[ACME protocol](https://datatracker.ietf.org/doc/html/rfc8555) to generate certificates. One of the most popular service offering ACME certificates creation is @link:[Let's Encrypt](https://letsencrypt.org/).\n\n@@@ warning\nIn order to make this tutorial work, your otoroshi instance MUST be accessible from the internet in order to be reachable by Let's Encrypt ACME process. Also, the domain name used for the certificates MUST be configured to reach your otoroshi instance at your DNS provider level.\n@@@\n\n@@@ note\nthis tutorial can work with any ACME provider with the same rules. your otoroshi instance MUST be accessible by the ACME process. 
Also, the domain name used for the certificates MUST be configured to reach your otoroshi instance at your DNS provider level.\n@@@\n\n## Setup let's encrypt on otoroshi\n\nGo to the danger zone page by clicking on the [`cog icon / Danger Zone`](http://otoroshi.oto.tools:8080/bo/dashboard/dangerzone). Scroll to the `Let's Encrypt settings` section. Enable it, and specify the address of the ACME server (for production Let's Encrypt it's `acme://letsencrypt.org`, for testing, it's `acme://letsencrypt.org/staging`. Any ACME server address should work). You can also add one or more email addresses or contact urls that will be included in your Let's Encrypt account. You don't have to fill the `public/private key` inputs as they will be automatically generated on the first usage.\n\n## Creating let's encrypt certificate from FQDNs\n\nYou can go to the certificates page by clicking on the [`cog icon / SSL/TLS Certificates`](http://otoroshi.oto.tools:8080/bo/dashboard/certificates). Here, click on the `+ Let's Encrypt certificate` button. A popup will show up to ask you the FQDN that you want for your certificate. Once done, click on the `Create` button. A few moments later, you will be redirected to a brand new certificate generated by Let's encrypt. You can now enjoy accessing your service behind the FQDN with TLS.\n\n## Creating let's encrypt certificate from a service\n\nYou can go to any service page and enable the flag `Issue Let's Encrypt cert.`. Do not forget to save your service. 
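Once the certificate has been issued, you can export it from the certificates page as PEM and double-check its subject and validity window with openssl. The commands below generate a throwaway self-signed certificate as a stand-in (the `demo.oto.tools` name and file names are just examples); the inspection command is the same for a real Let's Encrypt certificate:

```shell
# Generate a stand-in self-signed certificate (replace with your exported PEM).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
  -days 30 -subj "/CN=demo.oto.tools" 2>/dev/null

# Print the subject and the validity window of the certificate.
openssl x509 -in demo.pem -noout -subject -dates
```

For a Let's Encrypt certificate, `notAfter` should be roughly 90 days out, which is why the renewal is handled automatically.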
A few moments later, the certificates will be available in the certificates page and you will be able to enjoy accessing your service with TLS.\n"},{"name":"wasm-manager-installation.md","id":"/how-to-s/wasm-manager-installation.md","url":"/how-to-s/wasm-manager-installation.html","title":"Deploy your own WASM Manager","content":"# Deploy your own WASM Manager\n\n@@@ div { .centered-img }\n\n@@@\n\n## Manager's configuration\n\nIn the @ref:[WASM tutorial](./wasm-usage.md) we used existing WASM files. These files have been generated with the WASM Manager solution provided by the Otoroshi team. \n\nThe wasm manager is a code editor in the browser that will help you to write and compile your plugin to WASM using Rust or Assembly Script. \nYou can install your own manager instance using a docker image.\n\n```sh\ndocker run -p 5001:5001 maif/otoroshi-wasm-manager\n```\n\nThis should download and run the latest version of the manager. Once launched, you can navigate to [http://localhost:5001](http://localhost:5001) (or any other binding port). \n\nThis should show an authentication error. The manager can run with or without authentication, and you can configure it using the `AUTH_MODE` environment variable (`AUTH` or `NO_AUTH` values).\n\nThe manager is configurable by environment variables. The manager uses an object storage (S3 compatible) as its storage solution. 
\nYou can configure your S3 with the four variables `S3_ACCESS_KEY_ID`, `S3_SECRET_ACCESS_KEY`, `S3_ENDPOINT` and `S3_BUCKET`.\n\nFeel free to change the following variables:\n\n\n| NAME | DEFAULT VALUE | DESCRIPTION |\n| ------------------------- | ------------------ | -------------------------------------------------------------------------- |\n| MANAGER_PORT | 5001 | The manager will be exposed on this port |\n| MANAGER_ALLOWED_DOMAINS | otoroshi.oto.tools | Array of origins, separated by comma, which is allowed to call the manager |\n| MANAGER_MAX_PARALLEL_JOBS | 2 | Number of parallel jobs to compile plugins |\n\nThe following variables are useful to bind the manager with Otoroshi and to run it behind (we will use them in the next section of this tutorial).\n\n| NAME | DEFAULT VALUE | DESCRIPTION |\n| ---------------------- | ----------------------- | ------------------------------------------------------ |\n| OTOROSHI_USER_HEADER | Otoroshi-User | Header used to extract the user from Otoroshi request |\n| OTOROSHI_TOKEN_SECRET | veryverysecret | the secret used to sign the user token |\n\n## Tutorial\n\n1. [Before you start](#before-you-start)\n2. [Deploy the manager using Docker](#deploy-the-manager-using-docker)\n3. [Create a route to expose and protect the manager with authentication](#create-a-route-to-expose-and-protect-the-manager-with-authentication)\n4. [Create a first validator plugin using the manager](#create-a-first-validator-plugin-using-the-manager)\n5. [Configure the danger zone of Otoroshi to bind Otoroshi and the manager](#configure-the-danger-zone-of-otoroshi-to-bind-otoroshi-and-the-manager)\n6. [Create a route using the generated wasm file](#create-a-route-using-the-generated-wasm-file)\n7. 
[Test your route](#test-your-route)\n\nAfter completing these steps you will have a running Otoroshi instance and your own WASM manager linked together.\n\n### Before you start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Deploy the manager using Docker\n\nLet's start by deploying an instance of S3. If you already have an instance you can skip the next section.\n\n```sh\ndocker network create manager-network\ndocker run --name s3Server -p 8000:8000 -e SCALITY_ACCESS_KEY_ID=access_key -e SCALITY_SECRET_ACCESS_KEY=secret --net manager-network scality/s3server \n```\n\nOnce launched, we can run a manager instance.\n\n```sh\ndocker run -d --net manager-network \\\n --name wasm-manager \\\n -p 5001:5001 \\\n -e \"MANAGER_PORT=5001\" \\\n -e \"AUTH_MODE=AUTH\" \\\n -e \"MANAGER_MAX_PARALLEL_JOBS=2\" \\\n -e \"MANAGER_ALLOWED_DOMAINS=otoroshi.oto.tools,wasm-manager.oto.tools,localhost:5001\" \\\n -e \"OTOROSHI_USER_HEADER=Otoroshi-User\" \\\n -e \"OTOROSHI_TOKEN_SECRET=veryverysecret\" \\\n -e \"S3_ACCESS_KEY_ID=access_key\" \\\n -e \"S3_SECRET_ACCESS_KEY=secret\" \\\n -e \"S3_FORCE_PATH_STYLE=true\" \\\n -e \"S3_ENDPOINT=http://host.docker.internal:8000\" \\\n -e \"S3_BUCKET=wasm-manager\" \\\n -e \"DOCKER_USAGE=true\" \\\n maif/otoroshi-wasm-manager\n```\n\nOnce launched, go to [http://localhost:5001](http://localhost:5001). If everything is working as intended, \nyou should see, at the bottom right of your screen, the following error\n\n```\nYou're not authorized to access to manager\n```\n\nThis error indicates that the manager could not authorize the request. \nActually, the manager expects to be reachable only through Otoroshi (this is the definition of the `AUTH_MODE=AUTH`). \nSo we need to create a route in Otoroshi to properly expose our manager to the rest of the world.\n\n### Create a route to expose and protect the manager with authentication\n\nWe are going to use the admin API of Otoroshi to create the route. 
The configuration of the route is:\n\n* `wasm-manager` as name\n* `wasm-manager.oto.tools` as exposed domain\n* `localhost:5001` as target without TLS option enabled\n\nWe need to add two more plugins to require the authentication from users and to pass the logged in user to the manager. \nThese plugins are named `Authentication` and `Otoroshi Info. token`. \nThe Authentication plugin will use an in-memory authentication with one default user (wasm@otoroshi.io/password). \nThe second plugin will be configured with the value of the `OTOROSHI_USER_HEADER` environment variable. \n\nLet's create the authentication module (if you are interested in how authentication module works, \nyou should read the other tutorials about How to secure an app). \nThe following command creates an in-memory authentication module with an user.\n\n```sh\ncurl -X POST \"http://otoroshi-api.oto.tools:8080/api/auths\" \\\n-u \"admin-api-apikey-id:admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"id\": \"wasm_manager_in_memory\",\n \"type\": \"basic\",\n \"name\": \"In memory authentication\",\n \"desc\": \"Group of static users\",\n \"users\": [\n {\n \"name\": \"User Otoroshi\",\n \"password\": \"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\n \"email\": \"wasm@otoroshi.io\"\n }\n ],\n \"sessionCookieValues\": {\n \"httpOnly\": true,\n \"secure\": false\n }\n}\nEOF\n```\n\nOnce created, you can create our route to expose the manager.\n\n```sh\ncurl -X POST \"http://otoroshi-api.oto.tools:8080/api/routes\" \\\n-H \"Content-type: application/json\" \\\n-u \"admin-api-apikey-id:admin-api-apikey-secret\" \\\n-d @- <<'EOF'\n{\n \"id\": \"wasm-manager\",\n \"name\": \"wasm-manager\",\n \"frontend\": {\n \"domains\": [\"wasm-manager.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"localhost\",\n \"port\": 5001,\n \"tls\": false\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n 
\"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"exclude\": [\n \"/plugins\",\n \"/wasm/.*\"\n ],\n \"config\": {\n \"pass_with_apikey\": false,\n \"auth_module\": null,\n \"module\": \"wasm_manager_in_memory\"\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"include\": [\n \"/plugins\",\n \"/wasm/.*\"\n ],\n \"config\": {}\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.OtoroshiInfos\",\n \"config\": {\n \"version\": \"Latest\",\n \"ttl\": 30,\n \"header_name\": \"Otoroshi-User\",\n \"algo\": {\n \"type\": \"HSAlgoSettings\",\n \"size\": 512,\n \"secret\": \"veryverysecret\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nTry to access to the manager with the new domain: http://wasm-manager.oto.tools:8080. \nThis should redirect you to the login page of Otoroshi. Enter the credentials of the user: wasm@otoroshi.io/password\nCongratulations, you now have a secure manager.\n\n### Create a first validator plugin using the manager\n\nIn the previous part, we secured the manager. Now, is the time to create your first simple plugin, written in Rust. \nThis plugin will apply a check on the request and ensure that the headers contains the key-value foo:bar.\n\n1. On the right top of the screen, click on the plus icon to create a new plugin\n2. Select the Rust language\n3. Call it `my-first-validator` and press the enter key\n4. Click on the new plugin called `my-first-validator`\n\nBefore continuing, let's explain the different files already present in your plugin. \n\n* `types.rs`: this file contains all Otoroshi structures that the plugin can receive and respond\n* `lib.rs`: this file is the core of your plugin. It must contain at least one **function** which will be called by Otoroshi when executing the plugin.\n* `Cargo.toml`: for each rust package, this file is called its manifest. It is written in the TOML format. 
\nIt contains metadata that is needed to compile the package. You can read more information about it [here](https://doc.rust-lang.org/cargo/reference/manifest.html)\n\nYou can write a plugin for different use cases in Otoroshi: validate access, transform requests or generate a target. \nDepending on the plugin type, \nyou need to change your plugin's context and response types accordingly.\n\nLet's take the example of creating a validator plugin. If we search in the types.rs file, we can find the corresponding \ntypes named: `WasmAccessValidatorContext` and `WasmAccessValidatorResponse`.\nThese types must be used in the declaration of the main **function** (named execute in our case).\n\n```rust\n... \npub fn execute(Json(context): Json<types::WasmAccessValidatorContext>) -> FnResult<Json<types::WasmAccessValidatorResponse>> {\n \n}\n```\n\nWith this code, we declare a function named `execute`, which takes a context of type WasmAccessValidatorContext as parameter, \nand which returns an object of type WasmAccessValidatorResponse. Now, let's add the check of the foo header.\n\n```rust\n... \npub fn execute(Json(context): Json<types::WasmAccessValidatorContext>) -> FnResult<Json<types::WasmAccessValidatorResponse>> {\n match context.request.headers.get(\"foo\") {\n Some(foo) => if foo == \"bar\" {\n Ok(Json(types::WasmAccessValidatorResponse { \n result: true,\n error: None\n }))\n } else {\n Ok(Json(types::WasmAccessValidatorResponse { \n result: false, \n error: Some(types::WasmAccessValidatorError { \n message: format!(\"{} is not authorized\", foo).to_owned(), \n status: 401\n }) \n }))\n },\n None => Ok(Json(types::WasmAccessValidatorResponse { \n result: false, \n error: Some(types::WasmAccessValidatorError { \n message: \"you're not authorized\".to_owned(), \n status: 401\n }) \n }))\n }\n}\n```\n\nFirst, we check whether the foo header is present; if it is missing, we return an object of type WasmAccessValidatorError.\nIf it is present, we continue by checking its value. 
In this example, we have used three types, already declared for you in the types.rs file:\n`WasmAccessValidatorResponse`, `WasmAccessValidatorError` and `WasmAccessValidatorContext`. \n\nAt this time, the content of your lib.rs file should be:\n\n```rust\nmod types;\n\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn execute(Json(context): Json<types::WasmAccessValidatorContext>) -> FnResult<Json<types::WasmAccessValidatorResponse>> {\n match context.request.headers.get(\"foo\") {\n Some(foo) => if foo == \"bar\" {\n Ok(Json(types::WasmAccessValidatorResponse { \n result: true,\n error: None\n }))\n } else {\n Ok(Json(types::WasmAccessValidatorResponse { \n result: false, \n error: Some(types::WasmAccessValidatorError { \n message: format!(\"{} is not authorized\", foo).to_owned(), \n status: 401\n }) \n }))\n },\n None => Ok(Json(types::WasmAccessValidatorResponse { \n result: false, \n error: Some(types::WasmAccessValidatorError { \n message: \"you're not authorized\".to_owned(), \n status: 401\n }) \n }))\n }\n}\n```\n\nLet's compile this plugin by clicking on the hammer icon at the right top of your screen. Once done, you can try your built plugin directly in the UI.\nClick on the play button at the right top of your screen, select your plugin and the correct type of the incoming fake context. \nOnce done, click on the run button at the bottom of your screen. This should output an error.\n\n```json\n{\n \"result\": false,\n \"error\": {\n \"message\": \"asd is not authorized\",\n \"status\": 401\n }\n}\n```\n\nLet's edit the fake input context by adding the expected foo header.\n\n```json\n{\n \"request\": {\n \"id\": 0,\n \"method\": \"\",\n \"headers\": {\n \"foo\": \"bar\"\n },\n \"cookies\"\n ...\n```\n\nResubmit the command. It should pass.\n\n### Configure the danger zone of Otoroshi to bind Otoroshi and the manager\n\nNow that we have our compiled plugin, we have to connect Otoroshi with the manager. 
Let's navigate to the danger zone, and add the following values in the WASM manager section:\n\n* `URL`: http://localhost:5001\n* `Apikey id`: admin-api-apikey-id\n* `Apikey secret`: admin-api-apikey-secret\n* `User(s)`: *\n\nThe User(s) property is used by the manager to filter the list of returned plugins (example: wasm@otoroshi.io will only return the list of plugins created by this user). \n\nDon't forget to save the configuration.\n\n### Create a route using the generated wasm file\n\nThe last step of our tutorial is to create the route using the validator. Let's create the route with the following parameters:\n\n```sh\ncurl -X POST \"http://otoroshi-api.oto.tools:8080/api/routes\" \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"id\": \"wasm-route\",\n \"name\": \"wasm-route\",\n \"frontend\": {\n \"domains\": [\"wasm-route.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"localhost\",\n \"port\": 5001,\n \"tls\": false\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.WasmAccessValidator\",\n \"enabled\": true,\n \"config\": {\n \"compiler_source\": \"my-first-validator\",\n \"functionName\": \"execute\"\n }\n }\n ]\n}\nEOF\n```\n\nYou can validate the creation by navigating to the [dashboard](http://otoroshi.oto.tools:8080/bo/dashboard/routes/wasm-route?tab=flow)\n\n### Test your route\n\nRun the two following commands. 
The first should show an unauthorized error and the second should conclude this tutorial.\n\n```sh\ncurl \"http://wasm-route.oto.tools:8080\"\n```\n\nand \n\n```sh\ncurl \"http://wasm-route.oto.tools:8080\" -H \"foo:bar\"\n```\n\nCongratulations, you have successfully written your first validator using your own manager.\n"},{"name":"wasm-usage.md","id":"/how-to-s/wasm-usage.md","url":"/how-to-s/wasm-usage.html","title":"Using wasm plugins","content":"# Using wasm plugins\n\nWebAssembly (WASM) is a simple machine model and executable format with an extensive specification. It is designed to be portable, compact, and execute at or near native speeds. Otoroshi already supports the execution of WASM files by providing different plugins that can be applied on routes. You can find more about those plugins @ref:[here](../topics/wasm-usage.md)\n\nTo simplify the process of WASM creation and usage, Otoroshi provides:\n\n- otoroshi ui integration: a full set of plugins that lets you pick which WASM function to run at any point in a route\n- otoroshi `wasm-manager`: a code editor in the browser that lets you write your plugin in `Rust`, `TinyGo`, `Javascript` or `Assembly Script` without having to think about compiling it to WASM (you can find a complete tutorial about it @ref:[here](../how-to-s/wasm-manager-installation.md))\n\n@@@ div { .centered-img }\n\n@@@\n\n## Tutorial\n\n1. [Before you start](#before-you-start)\n2. [Create the route with the plugin validator](#create-the-route-with-the-plugin-validator)\n3. [Test your validator](#test-your-validator)\n4. [Update the route by replacing the backend with a WASM file](#update-the-route-by-replacing-the-backend-with-a-wasm-file)\n5. 
[WASM backend test](#wasm-backend-test)\n\nAfter completing these steps you will have a route that uses WASM plugins written in Rust.\n\n## Before your start\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n## Create the route with the plugin validator\n\nFor this tutorial, we will start with an existing wasm file. The main function of this file will check the value of an http header to allow access or not. You can find this file at [https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/first-validator.wasm](https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/first-validator.wasm)\n\nThe main function of this validator, written in rust, should look like:\n\nvalidator.rs\n: @@snip [validator.rs](../snippets/wasm-manager/validator.rs) \n\nvalidator.js\n: @@snip [validator.js](../snippets/wasm-manager/validator.js) \n\nvalidator.ts\n: @@snip [validator.ts](../snippets/wasm-manager/validator.ts) \n\nvalidator.go\n: @@snip [validator.go](../snippets/wasm-manager/validator.go) \n\nThe plugin receives the request context from Otoroshi (the matching route, the api key if present, the headers, etc) as a `WasmAccessValidatorContext` object. \nThen it applies a check on the headers, and responds with an error or success depending on the content of the foo header. 
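As a rough sketch, the same check can be expressed in plain JavaScript (the names and object shapes below are illustrative, not the exact wasm-manager API):

```javascript
// Illustrative sketch of the validator logic, NOT the exact wasm-manager API:
// the function receives the access context (route, apikey, headers, ...)
// and answers with either a success or an error carrying an http status.
function execute(context) {
  const headers = (context.request && context.request.headers) || {};
  if (headers["foo"] === "bar") {
    // the expected header is present: allow the call
    return { result: true };
  }
  // otherwise deny access with a 401
  return {
    result: false,
    error: { message: "you're not authorized", status: 401 }
  };
}
```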
\nObviously, the previous snippet is an example and the editor allows you to write whatever you want as a check.\n\nLet's create a route that uses the previous wasm file as an access validator plugin:\n\n```sh\ncurl -X POST \"http://otoroshi-api.oto.tools:8080/api/routes\" \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"id\": \"demo-otoroshi\",\n \"name\": \"demo-otoroshi\",\n \"frontend\": {\n \"domains\": [\"demo-otoroshi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\",\n \"enabled\": true\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.WasmAccessValidator\",\n \"enabled\": true,\n \"config\": {\n \"source\": {\n \"kind\": \"http\",\n \"path\": \"https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/first-validator.wasm\",\n \"opts\": {}\n },\n \"memoryPages\": 4,\n \"functionName\": \"execute\"\n }\n }\n ]\n}\nEOF\n```\n\nThis request will apply the following process:\n\n* names the route *demo-otoroshi*\n* creates a frontend exposed on `demo-otoroshi.oto.tools`\n* forwards requests to one target, reachable at `mirror.otoroshi.io` using TLS on port 443\n* adds the *WasmAccessValidator* plugin to the route to validate access based on the foo header\n\nYou can validate the route creation by navigating to the [dashboard](http://otoroshi.oto.tools:8080/bo/dashboard/routes/demo-otoroshi?tab=flow)\n\n## Test your validator\n\n```shell\ncurl \"http://demo-otoroshi.oto.tools:8080\" -I\n```\n\nThis should output the following error:\n\n```\nHTTP/1.1 401 Unauthorized\n```\n\nLet's call the route again, adding the foo header with the bar value.\n\n```shell\ncurl \"http://demo-otoroshi.oto.tools:8080\" -H \"foo:bar\" -I\n```\n\nThis should output the successful 
message:\n\n```\nHTTP/1.1 200 OK\n```\n\n## Update the route by replacing the backend with a WASM file\n\nThe next step in this tutorial is to use a WASM file as the backend of the route. We will use an existing WASM file, available in our wasm demos repository on github. \nThe content of this plugin, called `wasm-target.wasm`, looks like:\n\ntarget.rs\n: @@snip [target.rs](../snippets/wasm-manager/target.rs) \n\ntarget.js\n: @@snip [target.js](../snippets/wasm-manager/target.js) \n\ntarget.ts\n: @@snip [target.ts](../snippets/wasm-manager/target.ts) \n\ntarget.go\n: @@snip [target.go](../snippets/wasm-manager/target.go) \n\nLet's explain this snippet. The purpose of this type of plugin is to return an HTTP response with an http status, a body and a headers map.\n\n1. Includes all public structures from the `types.rs` file. This file contains predefined Otoroshi structures that plugins can manipulate.\n2. Necessary imports. [Extism](https://extism.org/docs/overview)'s goal is to make all software programmable by providing a plug-in system. \n3. Creates a map of new headers that will be merged with incoming request headers.\n4. 
Creates the response object with the map of merged headers, a simple JSON body and a successful status code.\n\nThe file is downloadable [here](https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/wasm-target.wasm).\n\nLet's update the route using this wasm file.\n\n```sh\ncurl -X PUT \"http://otoroshi-api.oto.tools:8080/api/routes/demo-otoroshi\" \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"id\": \"demo-otoroshi\",\n \"name\": \"demo-otoroshi\",\n \"frontend\": {\n \"domains\": [\"demo-otoroshi.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\",\n \"enabled\": true\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.WasmAccessValidator\",\n \"enabled\": true,\n \"config\": {\n \"source\": {\n \"kind\": \"http\",\n \"path\": \"https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/first-validator.wasm\",\n \"opts\": {}\n },\n \"memoryPages\": 4,\n \"functionName\": \"execute\"\n }\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.WasmBackend\",\n \"enabled\": true,\n \"config\": {\n \"source\": {\n \"kind\": \"http\",\n \"path\": \"https://raw.githubusercontent.com/MAIF/otoroshi/master/demos/wasm/wasm-target.wasm\",\n \"opts\": {}\n },\n \"memoryPages\": 4,\n \"functionName\": \"execute\"\n }\n }\n ]\n}\nEOF\n```\n\nThe response should contain the updated route content.\n\n## WASM backend test\n\nLet's call our route.\n\n```sh\ncurl \"http://demo-otoroshi.oto.tools:8080\" -H \"foo:bar\" -H \"fifi: foo\" -v\n```\n\nThis should output:\n\n```\n* Trying 127.0.0.1:8080...\n* Connected to demo-otoroshi.oto.tools (127.0.0.1) port 8080 (#0)\n> GET / HTTP/1.1\n> Host: demo-otoroshi.oto.tools:8080\n> User-Agent: curl/7.79.1\n> Accept: */*\n> foo:bar\n> 
fifi:foo\n>\n* Mark bundle as not supporting multiuse\n< HTTP/1.1 200 OK\n< foo: bar\n< Host: demo-otoroshi.oto.tools:8080\n<\n* Closing connection 0\n{\"foo\": \"bar\"}\n```\n\nIn this response, we can find our headers sent in the curl command and those added by the wasm plugin.\n\n\n\n"},{"name":"working-with-eureka.md","id":"/how-to-s/working-with-eureka.md","url":"/how-to-s/working-with-eureka.html","title":"Working with Eureka","content":"# Working with Eureka\n\nEureka is a library from Spring Cloud Netflix that provides both service registration and service discovery.\nGenerally, the registered services are applications written with Spring, but Eureka also exposes a REST API. The main goal of Eureka is to allow clients to find and communicate with each other without hard-coding hostnames and ports.\nAll services are registered in an Eureka Server.\n\nTo work with Eureka, Otoroshi has three different plugins:\n\n* to expose its own Eureka Server instance\n* to discover an existing Eureka Server instance\n* to use an Eureka application as an Otoroshi target and take advantage of all Otoroshi client features (load-balancing, rate limiting, etc.)\n\nLet's split this tutorial into three parts. 
\n\n- Create a simple Spring application that we'll use as an Eureka Client\n- Deploy an implementation of the Otoroshi Eureka Server (using the `Eureka Instance` plugin), register eureka clients and expose them using the `Internal Eureka Server` plugin\n- Deploy a Netflix Eureka Server and use it in Otoroshi to discover apps using the `External Eureka Server` plugin.\n\n\nIn this tutorial: \n\n- [Create an Otoroshi route with the Internal Eureka Server plugin](#create-an-otoroshi-route-with-the-internal-eureka-server-plugin)\n- [Create a simple Eureka Client and register it](#create-a-simple-eureka-client-and-register-it)\n- [Connect to an external Eureka server](#connect-to-an-external-eureka-server)\n\n### Download Otoroshi\n\n@@include[initialize.md](../includes/initialize.md) { #initialize-otoroshi }\n\n### Create an Otoroshi route with the Internal Eureka Server plugin\n\n@@@ note\nWe'll assume that you have an Otoroshi instance exposed on port 8080 with the new Otoroshi engine enabled\n@@@\n\nLet's jump to the routes Otoroshi [view](http://otoroshi.oto.tools:8080/bo/dashboard/routes) and create a new route using the wizard button.\n\nEnter the following values for each step:\n\n1. An Eureka Server instance\n2. Choose the first choice: **BLANK ROUTE** and click on continue\n3. As exposed domain, set `eureka-server.oto.tools/eureka`\n4. As Target URL, set `http://foo.bar` (this value has no importance and will be skipped by the Eureka Instance plugin)\n5. Validate the creation\n\nOnce created, you can hide the tester view (which is displayed by default after each route creation) with the arrow at the top right of the screen.\nIn our case, we want to add a new plugin, called `Eureka Instance`, to our feed.\n\nInside the designer view:\n\n1. Search the `Eureka Instance` in the list of plugins.\n2. Add it to the feed by clicking on it\n3. 
Set the eviction timeout to 300 seconds (Otoroshi uses this configuration to automatically check whether an Eureka client is still up; otherwise Otoroshi evicts the eureka client from the registry)\n\nWell done, you have set up an Eureka Server. To check the content of an Eureka Server, you can navigate to this [link](http://otoroshi.oto.tools:8080/bo/dashboard/eureka-servers). At this point, no instances or applications are registered, so the registry is currently empty.\n\n### Create a simple Eureka Client and register it\n\n*This tutorial is not meant to teach you how to write a Spring application, and a newer version of this Spring code may exist.*\n\n\nFor this tutorial, we'll use the following code, which initiates an Eureka Client and defines a Spring REST controller with only one endpoint. This endpoint will return its own exposed port (this value will be useful to check that Otoroshi load balancing is working correctly across the multiple registered Eureka instances).\n\n\nLet's quickly create a Spring project using [Spring Initializer](https://start.spring.io/). 
You can use the previous link or directly click on the following link to get the form already filled with the needed dependencies.\n\n````bash\nhttps://start.spring.io/#!type=maven-project&language=java&platformVersion=2.7.3&packaging=jar&jvmVersion=17&groupId=otoroshi.io&artifactId=eureka-client&name=eureka-client&description=A%20simple%20eureka%20client&packageName=otoroshi.io.eureka-client&dependencies=cloud-eureka,web\n````\n\nFeel free to change the project metadata for your use case.\n\nOnce downloaded and uncompressed, let's go ahead and delete the application.properties file and create an application.yml (if you are more comfortable with an application.properties, keep it)\n\n````yaml\neureka:\n client:\n fetch-registry: false # disable the discovery services mechanism for the client\n serviceUrl:\n defaultZone: http://eureka-server.oto.tools:8080/eureka\n\nspring:\n application:\n name: foo_app\n\n````\n\n\nNow, let's define the simple REST controller to expose the client port.\n\nCreate a new file, called PortController.java, in the sources folder of your project with the following content.\n\n````java\npackage otoroshi.io.eurekaclient;\n\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.core.env.Environment;\nimport org.springframework.web.bind.annotation.GetMapping;\nimport org.springframework.web.bind.annotation.RestController;\n\n@RestController\npublic class PortController {\n\n @Autowired\n Environment environment;\n\n @GetMapping(\"/port\")\n public String index() {\n return environment.getProperty(\"local.server.port\");\n }\n}\n````\nThis controller is very simple: it exposes one endpoint, `/port`, which returns the port as a string. Our client is ready to run. 
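Before launching it, a quick aside on the eviction timeout we configured earlier: conceptually, the server keeps an instance in its registry only while heartbeats keep arriving within the timeout window. A minimal sketch of that idea (illustrative only, not Otoroshi's actual implementation):

```javascript
// Illustrative sketch, NOT Otoroshi's actual implementation: a registry keeps
// an instance only while its last heartbeat is younger than the timeout.
const EVICTION_TIMEOUT_MS = 300 * 1000; // the 300 seconds configured earlier

function evictStaleInstances(instances, nowMs) {
  // keep instances whose last heartbeat arrived within the timeout window
  return instances.filter(i => nowMs - i.lastHeartbeatMs <= EVICTION_TIMEOUT_MS);
}
```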
\n\nLet's launch it with the following command:\n\n````sh\nmvn spring-boot:run -Dspring-boot.run.arguments=--server.port=8085\n````\n\n@@@note\nThe port is not required but it will be useful when we deploy more than one instance later in the tutorial\n@@@\n\n\nOnce the command has run, you can navigate to the eureka server view in the Otoroshi UI. The dashboard should display one registered app and instance.\nIt should also display a timer for each application, which represents the elapsed time since the last received heartbeat.\n\nLet's define a new route to expose our registered eureka client.\n\n* Create a new route, named `Eureka client`, exposed on `http://eureka-client.oto.tools:8080` and targeting `http://foo.bar`\n* Search and add the `Internal Eureka server` plugin \n* Edit the plugin and choose your eureka server and your app (in our case, `Eureka Server` and `FOO_APP` respectively)\n* Save your route\n\nNow try to call the new route.\n\n````sh\ncurl 'http://eureka-client.oto.tools:8080/port'\n````\n\nIf everything is working, you should get the port 8085 as the response. The setup is working as expected, but we can improve it by scaling our eureka client.\n\nOpen a new tab in your terminal and run the following command.\n\n````sh\nmvn spring-boot:run -Dspring-boot.run.arguments=--server.port=8083\n````\n\nJust wait a few seconds and retry to call your new route.\n\n````sh\ncurl 'http://eureka-client.oto.tools:8080/port'\n$ 8083\ncurl 'http://eureka-client.oto.tools:8080/port'\n$ 8085\ncurl 'http://eureka-client.oto.tools:8080/port'\n$ 8085\ncurl 'http://eureka-client.oto.tools:8080/port'\n$ 8083\n````\n\nThe configuration is ready and the setup is working: Otoroshi uses all instances of your app and dispatches requests across them.\n\n### Connect to an external Eureka server\n\nOtoroshi can discover services by connecting to an Eureka Server.\n\nLet's create a route with an Eureka application as Otoroshi target:\n\n* Create a new blank 
API route\n* Search and add the `External Eureka Server` plugin\n* Set your eureka URL\n* Click on the `Fetch Services` button to discover the applications of the Eureka instance\n* In the selector that appears, choose the application to target\n* Once the frontend is configured, save your route and try to call it.\n\nWell done, you have exposed your Eureka application through the Otoroshi discovery services.\n\n"},{"name":"experimental.md","id":"/includes/experimental.md","url":"/includes/experimental.html","title":"@@@ warning","content":"@@@ warning\n\nthis feature is **EXPERIMENTAL** and might not work as expected.
\nIf you encounter any bugs, [please file an issue](https://github.com/MAIF/otoroshi/issues/new), it will help us a lot :)\n\n@@@\n"},{"name":"fetch-and-start.md","id":"/includes/fetch-and-start.md","url":"/includes/fetch-and-start.html","title":"","content":"\nIf you already have an up and running otoroshi instance, you can skip the following instructions\n\nLet's start by downloading the latest Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nthen you can start Otoroshi:\n\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nNow you can log into Otoroshi at @link:[http://otoroshi.oto.tools:8080](http://otoroshi.oto.tools:8080) { open=new } with `admin@otoroshi.io/password`\n"},{"name":"initialize.md","id":"/includes/initialize.md","url":"/includes/initialize.html","title":"","content":"\n\nIf you already have an up and running otoroshi instance, you can skip the following instructions\n\n\n@@@div { .instructions }\n\n
\nSet up an Otoroshi\n\n
\n\nLet's start by downloading the latest Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n```\n\nthen you can start Otoroshi:\n\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n```\n\nNow you can log into Otoroshi at http://otoroshi.oto.tools:8080 with `admin@otoroshi.io/password`\n\nCreate a new route, exposed on `http://myservice.oto.tools:8080`, which will forward all requests to the mirror `https://mirror.otoroshi.io`. Each call to this service will return the body and the headers received by the mirror.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"name\": \"my-service\",\n \"frontend\": {\n \"domains\": [\"myservice.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n }\n}\nEOF\n```\n\n\n@@@\n"},{"name":"index.md","id":"/index.md","url":"/index.html","title":"Otoroshi","content":"# Otoroshi\n\n**Otoroshi** is a layer of lightweight api management on top of a modern http reverse proxy written in Scala and developed by the MAIF OSS team that can handle all the calls to and between your microservices without a service locator and lets you change configuration dynamically at runtime.\n\n\n> *The Otoroshi is a large hairy monster that tends to lurk on the top of the torii gate in front of Shinto shrines. 
It's a hostile creature, but it is also said to be the guardian of the shrine and to leap down from the top of the gate to devour those who approach the shrine for only self-serving purposes.*\n\n@@@ div { .centered-img }\n[![Join the discord](https://img.shields.io/discord/1089571852940218538?color=f9b000&label=Community&logo=Discord&logoColor=f9b000)](https://discord.gg/dmbwZrfpcQ) [ ![Download](https://img.shields.io/github/release/MAIF/otoroshi.svg) ](https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar)\n@@@\n\n@@@ div { .centered-img }\n\n@@@\n\n## Installation\n\nYou can download the latest build of Otoroshi as a @ref:[fat jar](./install/get-otoroshi.md#from-jar-file), as a @ref:[zip package](./install/get-otoroshi.md#from-zip) or as a @ref:[docker image](./install/get-otoroshi.md#from-docker).\n\nYou can install and run Otoroshi with this little bash snippet\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\njava -jar otoroshi.jar\n```\n\nor using docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi:16.5.0-dev\n```\n\nnow open your browser to http://otoroshi.oto.tools:8080/, **log in with the credentials generated in the logs** and explore by yourself; if you want better instructions, just go to the @ref:[Quick Start](./getting-started.md) or directly to the @ref:[installation instructions](./install/get-otoroshi.md)\n\n## Documentation\n\n* @ref:[About Otoroshi](./about.md)\n* @ref:[Architecture](./architecture.md)\n* @ref:[Features](./features.md)\n* @ref:[Getting started](./getting-started.md)\n* @ref:[Install Otoroshi](./install/index.md)\n* @ref:[Main entities](./entities/index.md)\n* @ref:[Detailed topics](./topics/index.md)\n* @ref:[How to's](./how-to-s/index.md)\n* @ref:[Plugins](./plugins/index.md)\n* @ref:[Admin REST API](./api.md)\n* @ref:[Deploy to production](./deploy/index.md)\n* @ref:[Developing Otoroshi](./dev.md)\n\n## Discussion\n\nJoin the 
@link:[Otoroshi server](https://discord.gg/dmbwZrfpcQ) { open=new } Discord\n\n## Sources\n\nThe sources of Otoroshi are available on @link:[Github](https://github.com/MAIF/otoroshi) { open=new }.\n\n## Logo\n\nYou can find the official Otoroshi logo @link:[on GitHub](https://github.com/MAIF/otoroshi/blob/master/resources/otoroshi-logo.png) { open=new }. The Otoroshi logo has been created by François Galioto ([@fgalioto](https://twitter.com/fgalioto))\n\n## Changelog\n\nEvery release, along with the migration instructions, is documented on the @link:[Github Releases](https://github.com/MAIF/otoroshi/releases) { open=new } page. A condensed version of the changelog is available on @link:[github](https://github.com/MAIF/otoroshi/blob/master/CHANGELOG.md) { open=new }\n\n## Patrons\n\nThe work on Otoroshi was funded by MAIF with the help of the community.\n\n## Licence\n\nOtoroshi is Open Source and available under the @link:[Apache 2 License](https://opensource.org/licenses/Apache-2.0) { open=new }\n\n@@@ index\n\n* [About Otoroshi](./about.md)\n* [Architecture](./architecture.md)\n* [Features](./features.md)\n* [Getting started](./getting-started.md)\n* [Install Otoroshi](./install/index.md)\n* [Main entities](./entities/index.md)\n* [Detailed topics](./topics/index.md)\n* [How to's](./how-to-s/index.md)\n* [Plugins](./plugins/index.md)\n* [Admin REST API](./api.md)\n* [Deploy to production](./deploy/index.md)\n* [Developing Otoroshi](./dev.md)\n\n@@@\n\n"},{"name":"get-otoroshi.md","id":"/install/get-otoroshi.md","url":"/install/get-otoroshi.html","title":"Get Otoroshi","content":"# Get Otoroshi\n\nAll releases can be found on the releases page of the @link:[repository](https://github.com/MAIF/otoroshi/releases) { open=new }.\n\n## From zip\n\n```sh\n# Download the latest version\nwget https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi-16.5.0-dev.zip\nunzip ./otoroshi-16.5.0-dev.zip\ncd otoroshi-16.5.0-dev\n```\n\n## From jar file\n\n```sh\n# 
Download the latest version\nwget https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar\n```\n\n## From Docker\n\n```sh\n# Download the latest version\ndocker pull maif/otoroshi:16.5.0-dev-jdk11\n```\n\n## From Sources\n\nTo build Otoroshi from sources, just go to the @ref:[dev documentation](../dev.md)\n"},{"name":"index.md","id":"/install/index.md","url":"/install/index.html","title":"Install","content":"# Install\n\nIn this section, you will find information about how to install and run Otoroshi\n\n* @ref:[Get Otoroshi](./get-otoroshi.md)\n* @ref:[Setup Otoroshi](./setup-otoroshi.md)\n* @ref:[Run Otoroshi](./run-otoroshi.md)\n\n@@@ index\n\n* [Get Otoroshi](./get-otoroshi.md)\n* [Setup Otoroshi](./setup-otoroshi.md)\n* [Run Otoroshi](./run-otoroshi.md)\n\n@@@\n"},{"name":"run-otoroshi.md","id":"/install/run-otoroshi.md","url":"/install/run-otoroshi.html","title":"Run Otoroshi","content":"# Run Otoroshi\n\nNow you are ready to run Otoroshi. You can run the following command with some tweaks depending on the way you want to configure Otoroshi. 
If you want to pass a custom configuration file, use the `-Dconfig.file=/path/to/file.conf` flag in the following commands.\n\n## From .zip file\n\n```sh\ncd otoroshi-vx.x.x\n./bin/otoroshi\n```\n\n## From .jar file\n\nFor Java 11\n\n```sh\njava -jar otoroshi.jar\n```\n\nif you want to run the jar file on a JDK above JDK 11, you'll have to add the following flags\n\n```sh\njava \\\n --add-opens=java.base/javax.net.ssl=ALL-UNNAMED \\\n --add-opens=java.base/sun.net.www.protocol.file=ALL-UNNAMED \\\n --add-exports=java.base/sun.security.x509=ALL-UNNAMED \\\n --add-opens=java.base/sun.security.ssl=ALL-UNNAMED \\\n -Dlog4j2.formatMsgNoLookups=true \\\n -jar otoroshi.jar\n```\n\n## From docker\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi\n```\n\nYou can also pass useful args like:\n\n```sh\ndocker run -p \"8080:8080\" maif/otoroshi -Dconfig.file=/usr/app/otoroshi/conf/otoroshi.conf -Dlogger.file=/usr/app/otoroshi/conf/otoroshi.xml\n```\n\nIf you want to provide your own config file, you can read @ref:[the documentation about config files](./setup-otoroshi.md).\n\nYou can also provide some ENV variables using the `--env` flag to customize your Otoroshi instance.\n\nThe list of possible env variables is available @ref:[here](./setup-otoroshi.md).\n\nYou can use a volume to provide configuration like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd):/usr/app/otoroshi/conf\" maif/otoroshi\n```\n\nYou can also use a volume if you choose to use the `filedb` datastore like:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd)/filedb:/usr/app/otoroshi/filedb\" maif/otoroshi -Dotoroshi.storage=file\n```\n\nYou can also use a volume if you choose to use export files:\n\n```sh\ndocker run -p \"8080:8080\" -v \"$(pwd):/usr/app/otoroshi/imports\" maif/otoroshi -Dotoroshi.importFrom=/usr/app/otoroshi/imports/export.json\n```\n\n## Run examples\n\n```sh\n$ java \\\n -Xms2G \\\n -Xmx8G \\\n -Dhttp.port=8080 \\\n -Dotoroshi.importFrom=/home/user/otoroshi.json \\\n 
-Dconfig.file=/home/user/otoroshi.conf \\\n -jar ./otoroshi.jar\n\n[warn] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[warn] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[warn] otoroshi-env - Importing from: /home/user/otoroshi.json\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n```\n\nIf you choose to start Otoroshi without importing existing data, Otoroshi will create a new admin user and print the login details in the log. When you log into the admin dashboard, Otoroshi will ask you to create another account to avoid security issues.\n\n```sh\n$ java \\\n -Xms2G \\\n -Xmx8G \\\n -Dhttp.port=8080 \\\n -jar otoroshi.jar\n\n[warn] otoroshi-in-memory-datastores - Now using InMemory DataStores\n[warn] otoroshi-env - The main datastore seems to be empty, registering some basic services\n[warn] otoroshi-env - You can log into the Otoroshi admin console with the following credentials: admin@otoroshi.io / HHUsiF2UC3OPdmg0lGngEv3RrbIwWV5W\n[info] play.api.Play - Application started (Prod)\n[info] p.c.s.AkkaHttpServer - Listening for HTTP on /0:0:0:0:0:0:0:0:8080\n```\n"},{"name":"setup-otoroshi.md","id":"/install/setup-otoroshi.md","url":"/install/setup-otoroshi.html","title":"Setup Otoroshi","content":"# Setup Otoroshi\n\nIn this section we are going to configure Otoroshi before running it for the first time\n\n## Setup the database\n\nRight now, Otoroshi supports multiple datastores. You can choose one datastore over another depending on your use case.\n\n@@@div { .plugin .platform } \n
Redis
\n\n
Recommended
\n\nThe **redis** datastore is quite nice when you want to easily deploy several Otoroshi instances.\n\n\n\n@link:[Documentation](https://redis.io/topics/quickstart)\n@@@\n\n@@@div { .plugin .platform } \n
In memory
\n\nThe **in-memory** datastore is kind of interesting. It can be used for testing purposes, but it is also a good candidate for production because of its speed.\n\n\n\n@ref:[Start with](../getting-started.md)\n@@@\n\n@@@div { .plugin .platform } \n
Cassandra
\n\n
Clustering
\n\nExperimental support, should be used in cluster mode for leaders\n\n\n\n@link:[Documentation](https://cassandra.apache.org/doc/latest/cassandra/getting_started/installing.html)\n@@@\n\n@@@div { .plugin .platform } \n
Postgresql
\n\n
Clustering
\n\nOr any postgresql-compatible database, like cockroachdb for instance (experimental support, should be used in cluster mode for leaders)\n\n\n\n@link:[Documentation](https://www.postgresql.org/docs/10/tutorial-install.html)\n@@@\n\n@@@div { .plugin .platform } \n\n
FileDB
\n\nThe **filedb** datastore is pretty handy for testing purposes, but is not suitable for production usage.\n\n\n\n@@@\n\n\n@@@ div { .centered-img }\n\n@@@\n\nthe first thing to set up is what kind of datastore you want to use with the `otoroshi.storage` setting\n\n```conf\notoroshi {\n storage = \"inmemory\" # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql \n storage = ${?APP_STORAGE} # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql \n storage = ${?OTOROSHI_STORAGE} # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql \n}\n```\n\ndepending on the value you chose, you will be able to configure your datastore with the following configuration\n\ninmemory\n: @@snip [inmemory.conf](../snippets/datastores/inmemory.conf) \n\nfile\n: @@snip [file.conf](../snippets/datastores/file.conf) \n\nhttp\n: @@snip [http.conf](../snippets/datastores/http.conf) \n\ns3\n: @@snip [s3.conf](../snippets/datastores/s3.conf) \n\nredis\n: @@snip [lettuce.conf](../snippets/datastores/lettuce.conf) \n\npostgresql\n: @@snip [pg.conf](../snippets/datastores/pg.conf) \n\ncassandra\n: @@snip [cassandra.conf](../snippets/datastores/cassandra.conf) \n\n## Setup your hosts before running\n\nBy default, Otoroshi starts with domain `oto.tools` that automatically targets `127.0.0.1` with no changes to your `/etc/hosts` file. 
Of course you can change the domain value; in that case, you have to add the values to your `/etc/hosts` file according to the settings you put in the Otoroshi configuration, or define the right ip address at the DNS provider level\n\n* `otoroshi.domain` => `mydomain.org`\n* `otoroshi.backoffice.subdomain` => `otoroshi`\n* `otoroshi.privateapps.subdomain` => `privateapps`\n* `otoroshi.adminapi.exposedSubdomain` => `otoroshi-api`\n* `otoroshi.adminapi.targetSubdomain` => `otoroshi-admin-internal-api`\n\nfor instance if you want to change the default domain and use something like `otoroshi.mydomain.org`, then start otoroshi like \n\n```sh\njava -Dotoroshi.domain=mydomain.org -jar otoroshi.jar\n```\n\n@@@ warning\nOtoroshi cannot be accessed using `http://127.0.0.1:8080` or `http://localhost:8080` because Otoroshi uses Otoroshi to serve its own UI and API. When otoroshi starts with an empty database, it will create a service descriptor for that using `otoroshi.domain` and the settings listed on this page, serving the Otoroshi API and UI on `http://otoroshi-api.${otoroshi.domain}` and `http://otoroshi.${otoroshi.domain}`.\nOnce the descriptor is saved in database, if you want to change `otoroshi.domain`, you'll have to edit the descriptor in the database or restart Otoroshi with an empty database.\n@@@\n\n@@@ warning\nif your otoroshi instance runs behind a reverse proxy (L4 / L7) or inside a docker container where the exposed ports (that you will use to access otoroshi) are not the same as the ones configured in otoroshi (`http.port` and `https.port`), you'll have to configure the otoroshi exposed ports to avoid bad redirection URLs when using authentication modules and other otoroshi tools. 
To do that, just set the values of the exposed ports in `otoroshi.exposed-ports.http = $theExposedHttpPort` (OTOROSHI_EXPOSED_PORTS_HTTP) and `otoroshi.exposed-ports.https = $theExposedHttpsPort` (OTOROSHI_EXPOSED_PORTS_HTTPS)\n@@@\n\n## Setup your configuration file\n\nThere are a lot of things you can configure in Otoroshi. By default, Otoroshi provides a configuration that should be enough for testing purposes. But you'll likely need to update this configuration when you move into production.\n\nOn this page, any configuration property can be set at runtime using a `-D` flag when launching Otoroshi like \n\n```sh\njava -Dhttp.port=8080 -jar otoroshi.jar\n```\n\nor\n\n```sh\n./bin/otoroshi -Dhttp.port=8080 \n```\n\nif you want to define your own config file and use it on an otoroshi instance, use the following flag\n\n```sh\njava -Dconfig.file=/path/to/otoroshi.conf -jar otoroshi.jar\n``` \n\n### Example of a custom configuration file\n\n```conf\ninclude \"application.conf\"\n\nhttp.port = 8080\n\napp {\n storage = \"inmemory\"\n importFrom = \"./my-state.json\"\n env = \"prod\"\n domain = \"oto.tools\"\n rootScheme = \"http\"\n snowflake {\n seed = 0\n }\n events {\n maxSize = 1000\n }\n backoffice {\n subdomain = \"otoroshi\"\n session {\n exp = 86400000\n }\n }\n privateapps {\n subdomain = \"privateapps\"\n session {\n exp = 86400000\n }\n }\n adminapi {\n targetSubdomain = \"otoroshi-admin-internal-api\"\n exposedSubdomain = \"otoroshi-api\"\n defaultValues {\n backOfficeGroupId = \"admin-api-group\"\n backOfficeApiKeyClientId = \"admin-api-apikey-id\"\n backOfficeApiKeyClientSecret = \"admin-api-apikey-secret\"\n backOfficeServiceId = \"admin-api-service\"\n }\n }\n claim {\n sharedKey = \"mysecret\"\n }\n filedb {\n path = \"./filedb/state.ndjson\"\n }\n}\n\nplay.http {\n session {\n secure = false\n httpOnly = true\n maxAge = 2592000000\n domain = \".oto.tools\"\n cookieName = \"oto-sess\"\n }\n}\n```\n\n### Reference configuration\n\n@@snip 
[reference.conf](../snippets/reference.conf) \n\n### More config. options\n\nSee the default configuration at\n\n* @link:[Base configuration](https://github.com/MAIF/otoroshi/blob/master/otoroshi/conf/base.conf) { open=new }\n* @link:[Application configuration](https://github.com/MAIF/otoroshi/blob/master/otoroshi/conf/application.conf) { open=new }\n\n## Configuration with env. variables\n\nEvery property in the configuration file can be overridden by an environment variable if it declares an override like `${?ENV_VARIABLE}`.\n\n## Reference configuration for env. variables\n\n@@snip [reference-env.conf](../snippets/reference-env.conf) \n"},{"name":"built-in-legacy-plugins.md","id":"/plugins/built-in-legacy-plugins.md","url":"/plugins/built-in-legacy-plugins.html","title":"Built-in legacy plugins","content":"# Built-in legacy plugins\n\nOtoroshi provides some plugins out of the box. Here are the available plugins with their documentation and reference configuration\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.accesslog.AccessLog }\n\n## Access log (CLF)\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `AccessLog`\n\n### Description\n\nWith this plugin, any access to a service will be logged in CLF format.\n\nLog format is the following:\n\n`\"$service\" $clientAddress - \"$userId\" [$timestamp] \"$host $method $path $protocol\" \"$status $statusTxt\" $size $snowflake \"$to\" \"$referer\" \"$userAgent\" $http $duration $errorMsg`\n\nThe plugin accepts the following configuration\n\n```json\n{\n \"AccessLog\": {\n \"enabled\": true,\n \"statuses\": [], // list of status to enable logs, if none, log everything\n \"paths\": [], // list of paths to enable logs, if none, log everything\n \"methods\": [], // list of http methods to enable logs, if none, log everything\n \"identities\": [] // list of identities to enable logs, if none, log everything\n }\n}\n```\n\n\n\n### Default 
configuration\n\n```json\n{\n \"AccessLog\" : {\n \"enabled\" : true,\n \"statuses\" : [ ],\n \"paths\" : [ ],\n \"methods\" : [ ],\n \"identities\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.accesslog.AccessLogJson }\n\n## Access log (JSON)\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `AccessLog`\n\n### Description\n\nWith this plugin, any access to a service will be logged in json format.\n\nThe plugin accepts the following configuration\n\n```json\n{\n \"AccessLog\": {\n \"enabled\": true,\n \"statuses\": [], // list of status to enable logs, if none, log everything\n \"paths\": [], // list of paths to enable logs, if none, log everything\n \"methods\": [], // list of http methods to enable logs, if none, log everything\n \"identities\": [] // list of identities to enable logs, if none, log everything\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"AccessLog\" : {\n \"enabled\" : true,\n \"statuses\" : [ ],\n \"paths\" : [ ],\n \"methods\" : [ ],\n \"identities\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.accesslog.KafkaAccessLog }\n\n## Kafka access log\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `KafkaAccessLog`\n\n### Description\n\nWith this plugin, any access to a service will be logged as an event in a kafka topic.\n\nThe plugin accepts the following configuration\n\n```json\n{\n \"KafkaAccessLog\": {\n \"enabled\": true,\n \"topic\": \"otoroshi-access-log\",\n \"statuses\": [], // list of status to enable logs, if none, log everything\n \"paths\": [], // list of paths to enable logs, if none, log everything\n \"methods\": [], // list of http methods to enable logs, if none, log everything\n \"identities\": [] // list of identities to enable logs, if none, log everything\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n 
\"KafkaAccessLog\" : {\n \"enabled\" : true,\n \"topic\" : \"otoroshi-access-log\",\n \"statuses\" : [ ],\n \"paths\" : [ ],\n \"methods\" : [ ],\n \"identities\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.authcallers.BasicAuthCaller }\n\n## Basic Auth. caller\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `BasicAuthCaller`\n\n### Description\n\nThis plugin can be used to call api that are authenticated using basic auth.\n\nThis plugin accepts the following configuration\n\n{\n \"username\" : \"the_username\",\n \"password\" : \"the_password\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Basic %s\"\n}\n\n\n\n### Default configuration\n\n```json\n{\n \"username\" : \"the_username\",\n \"password\" : \"the_password\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Basic %s\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.authcallers.OAuth2Caller }\n\n## OAuth2 caller\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `OAuth2Caller`\n\n### Description\n\nThis plugin can be used to call api that are authenticated using OAuth2 client_credential/password flow.\nDo not forget to enable client retry to handle token generation on expire.\n\nThis plugin accepts the following configuration\n\n{\n \"kind\" : \"the oauth2 flow, can be 'client_credentials' or 'password'\",\n \"url\" : \"https://127.0.0.1:8080/oauth/token\",\n \"method\" : \"POST\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Bearer %s\",\n \"jsonPayload\" : false,\n \"clientId\" : \"the client_id\",\n \"clientSecret\" : \"the client_secret\",\n \"scope\" : \"an optional scope\",\n \"audience\" : \"an optional audience\",\n \"user\" : \"an optional username if using password flow\",\n \"password\" : \"an optional password if using password flow\",\n \"cacheTokenSeconds\" : \"the 
number of second to wait before asking for a new token\",\n \"tlsConfig\" : \"an optional TLS settings object\"\n}\n\n\n\n### Default configuration\n\n```json\n{\n \"kind\" : \"the oauth2 flow, can be 'client_credentials' or 'password'\",\n \"url\" : \"https://127.0.0.1:8080/oauth/token\",\n \"method\" : \"POST\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Bearer %s\",\n \"jsonPayload\" : false,\n \"clientId\" : \"the client_id\",\n \"clientSecret\" : \"the client_secret\",\n \"scope\" : \"an optional scope\",\n \"audience\" : \"an optional audience\",\n \"user\" : \"an optional username if using password flow\",\n \"password\" : \"an optional password if using password flow\",\n \"cacheTokenSeconds\" : \"the number of second to wait before asking for a new token\",\n \"tlsConfig\" : \"an optional TLS settings object\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.cache.ResponseCache }\n\n## Response Cache\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `ResponseCache`\n\n### Description\n\nThis plugin can cache responses from target services in the otoroshi datasstore\nIt also provides a debug UI at `/.well-known/otoroshi/bodylogger`.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"ResponseCache\": {\n \"enabled\": true, // enabled cache\n \"ttl\": 300000, // store it for some times (5 minutes by default)\n \"maxSize\": 5242880, // max body size (body will be cut after that)\n \"autoClean\": true, // cleanup older keys when all bigger than maxSize\n \"filter\": { // cache only for some status, method and paths\n \"statuses\": [],\n \"methods\": [],\n \"paths\": [],\n \"not\": {\n \"statuses\": [],\n \"methods\": [],\n \"paths\": []\n }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"ResponseCache\" : {\n \"enabled\" : true,\n \"ttl\" : 3600000,\n \"maxSize\" : 52428800,\n \"autoClean\" : true,\n \"filter\" : {\n 
\"statuses\" : [ ],\n \"methods\" : [ ],\n \"paths\" : [ ],\n \"not\" : {\n \"statuses\" : [ ],\n \"methods\" : [ ],\n \"paths\" : [ ]\n }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.clientcert.ClientCertChainHeader }\n\n## Client certificate header\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `ClientCertChain`\n\n### Description\n\nThis plugin pass client certificate informations to the target in headers.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"ClientCertChain\": {\n \"pem\": { // send client cert as PEM format in a header\n \"send\": false,\n \"header\": \"X-Client-Cert-Pem\"\n },\n \"dns\": { // send JSON array of DNs in a header\n \"send\": false,\n \"header\": \"X-Client-Cert-DNs\"\n },\n \"chain\": { // send JSON representation of client cert chain in a header\n \"send\": true,\n \"header\": \"X-Client-Cert-Chain\"\n },\n \"claims\": { // pass JSON representation of client cert chain in the otoroshi JWT token\n \"send\": false,\n \"name\": \"clientCertChain\"\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"ClientCertChain\" : {\n \"pem\" : {\n \"send\" : false,\n \"header\" : \"X-Client-Cert-Pem\"\n },\n \"dns\" : {\n \"send\" : false,\n \"header\" : \"X-Client-Cert-DNs\"\n },\n \"chain\" : {\n \"send\" : true,\n \"header\" : \"X-Client-Cert-Chain\"\n },\n \"claims\" : {\n \"send\" : false,\n \"name\" : \"clientCertChain\"\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.defer.DeferPlugin }\n\n## Defer Responses\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `DeferPlugin`\n\n### Description\n\nThis plugin will expect a `X-Defer` header or a `defer` query param and defer the response according to the value in milliseconds.\nThis plugin is some kind of inside joke as one a our customer ask us to make slower apis.\n\nThis plugin can 
accept the following configuration\n\n```json\n{\n \"DeferPlugin\": {\n \"defaultDefer\": 0 // default defer in millis\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"DeferPlugin\" : {\n \"defaultDefer\" : 0\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.discovery.DiscoverySelfRegistrationTransformer }\n\n## Self registration endpoints (service discovery)\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `DiscoverySelfRegistration`\n\n### Description\n\nThis plugin adds support for a self registration endpoint on a specific service.\n\nThis plugin accepts the following configuration:\n\n\n\n### Default configuration\n\n```json\n{\n \"DiscoverySelfRegistration\" : {\n \"hosts\" : [ ],\n \"targetTemplate\" : { },\n \"registrationTtl\" : 60000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.geoloc.GeolocationInfoEndpoint }\n\n## Geolocation endpoint\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: ``none``\n\n### Description\n\nThis plugin will expose current geolocation information on the following endpoint.\n\n`/.well-known/otoroshi/plugins/geolocation`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.geoloc.GeolocationInfoHeader }\n\n## Geolocation header\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `GeolocationInfoHeader`\n\n### Description\n\nThis plugin will send information extracted by the Geolocation details extractor to the target service in a header.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"GeolocationInfoHeader\": {\n \"headerName\": \"X-Geolocation-Info\" // header in which info will be sent\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"GeolocationInfoHeader\" : {\n \"headerName\" : \"X-Geolocation-Info\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin 
.plugin-hidden .plugin-kind-transformer #otoroshi.plugins.hmac.HMACCallerPlugin }\n\n## HMAC caller plugin\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `HMACCallerPlugin`\n\n### Description\n\nThis plugin can be used to call an API \"protected\" by an HMAC signature. It adds a signature computed with the secret configured on the plugin.\n The signature string will always be built from the content of the headers listed in the plugin configuration.\n\n\n\n### Default configuration\n\n```json\n{\n \"HMACCallerPlugin\" : {\n \"secret\" : \"my-defaut-secret\",\n \"algo\" : \"HMAC-SHA512\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.izanami.IzanamiCanary }\n\n## Izanami Canary Campaign\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `IzanamiCanary`\n\n### Description\n\nThis plugin allows you to perform canary testing based on an izanami experiment campaign (A/B test).\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"IzanamiCanary\" : {\n \"experimentId\" : \"foo:bar:qix\",\n \"configId\" : \"foo:bar:qix:config\",\n \"izanamiUrl\" : \"https://izanami.foo.bar\",\n \"izanamiClientId\" : \"client\",\n \"izanamiClientSecret\" : \"secret\",\n \"timeout\" : 5000,\n \"mtls\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"IzanamiCanary\" : {\n \"experimentId\" : \"foo:bar:qix\",\n \"configId\" : \"foo:bar:qix:config\",\n \"izanamiUrl\" : \"https://izanami.foo.bar\",\n \"izanamiClientId\" : \"client\",\n \"izanamiClientSecret\" : \"secret\",\n \"timeout\" : 5000,\n \"mtls\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.izanami.IzanamiProxy }\n\n## 
Izanami APIs Proxy\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `IzanamiProxy`\n\n### Description\n\nThis plugin exposes routes to proxy Izanami configuration and features tree APIs.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"IzanamiProxy\" : {\n \"path\" : \"/api/izanami\",\n \"featurePattern\" : \"*\",\n \"configPattern\" : \"*\",\n \"autoContext\" : false,\n \"featuresEnabled\" : true,\n \"featuresWithContextEnabled\" : true,\n \"configurationEnabled\" : false,\n \"izanamiUrl\" : \"https://izanami.foo.bar\",\n \"izanamiClientId\" : \"client\",\n \"izanamiClientSecret\" : \"secret\",\n \"timeout\" : 5000\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"IzanamiProxy\" : {\n \"path\" : \"/api/izanami\",\n \"featurePattern\" : \"*\",\n \"configPattern\" : \"*\",\n \"autoContext\" : false,\n \"featuresEnabled\" : true,\n \"featuresWithContextEnabled\" : true,\n \"configurationEnabled\" : false,\n \"izanamiUrl\" : \"https://izanami.foo.bar\",\n \"izanamiClientId\" : \"client\",\n \"izanamiClientSecret\" : \"secret\",\n \"timeout\" : 5000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.jq.JqBodyTransformer }\n\n## JQ bodies transformer\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `JqBodyTransformer`\n\n### Description\n\nThis plugin let you transform JSON bodies (in requests and responses) using [JQ filters](https://stedolan.github.io/jq/manual/#Basicfilters).\n\nSome JSON variables are accessible by default :\n\n * `$url`: the request url\n * `$path`: the request path\n * `$domain`: the request domain\n * `$method`: the request method\n * `$headers`: the current request headers (with name in lowercase)\n * `$queryParams`: the current request query params\n * `$otoToken`: the otoroshi protocol token (if one)\n * `$inToken`: the first matched JWT token as is (from verifiers, if one)\n * `$token`: the first matched 
JWT token as is (from verifiers, if one)\n * `$user`: the current user (if one)\n * `$apikey`: the current apikey (if one)\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"JqBodyTransformer\" : {\n \"request\" : {\n \"filter\" : \".\",\n \"included\" : [ ],\n \"excluded\" : [ ]\n },\n \"response\" : {\n \"filter\" : \".\",\n \"included\" : [ ],\n \"excluded\" : [ ]\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"JqBodyTransformer\" : {\n \"request\" : {\n \"filter\" : \".\",\n \"included\" : [ ],\n \"excluded\" : [ ]\n },\n \"response\" : {\n \"filter\" : \".\",\n \"included\" : [ ],\n \"excluded\" : [ ]\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.jsoup.HtmlPatcher }\n\n## Html Patcher\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `HtmlPatcher`\n\n### Description\n\nThis plugin can inject elements in html pages (in the body or in the head) returned by the service\n\n\n\n### Default configuration\n\n```json\n{\n \"HtmlPatcher\" : {\n \"appendHead\" : [ ],\n \"appendBody\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.log4j.Log4ShellFilter }\n\n## Log4Shell mitigation plugin\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `Log4ShellFilter`\n\n### Description\n\nThis plugin tries to detect Log4Shell attacks in requests and blocks them.\n\nThis plugin can accept the following configuration\n\n```javascript\n{\n \"Log4ShellFilter\": {\n \"status\": 200, // the status sent back when an attack expression is found\n \"body\": \"\", // the body sent back when an attack expression is found\n \"parseBody\": false // enables request body parsing to find attack expressions\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"Log4ShellFilter\" : {\n \"status\" : 200,\n \"body\" : \"\",\n \"parseBody\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { 
.plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.loggers.BodyLogger }\n\n## Body logger\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `BodyLogger`\n\n### Description\n\nThis plugin can log bodies present in requests and responses. It can just log them, store them in the redis store with a TTL, or send them to analytics.\nIt also provides a debug UI at `/.well-known/otoroshi/bodylogger`.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"BodyLogger\": {\n \"enabled\": true, // enabled logging\n \"log\": true, // just log it\n \"store\": false, // store bodies in datastore\n \"ttl\": 300000, // store it for some time (5 minutes by default)\n \"sendToAnalytics\": false, // send bodies to analytics\n \"maxSize\": 5242880, // max body size (body will be cut after that)\n \"password\": \"password\", // password for the ui, if none, it's public\n \"filter\": { // log only for some statuses, methods and paths\n \"statuses\": [],\n \"methods\": [],\n \"paths\": [],\n \"not\": {\n \"statuses\": [],\n \"methods\": [],\n \"paths\": []\n }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"BodyLogger\" : {\n \"enabled\" : true,\n \"log\" : true,\n \"store\" : false,\n \"ttl\" : 300000,\n \"sendToAnalytics\" : false,\n \"maxSize\" : 5242880,\n \"password\" : \"password\",\n \"filter\" : {\n \"statuses\" : [ ],\n \"methods\" : [ ],\n \"paths\" : [ ],\n \"not\" : {\n \"statuses\" : [ ],\n \"methods\" : [ ],\n \"paths\" : [ ]\n }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.mirror.MirroringPlugin }\n\n## Mirroring plugin\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `MirroringPlugin`\n\n### Description\n\nThis plugin will mirror every request to other targets\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"MirroringPlugin\": {\n \"enabled\": true, // enabled mirroring\n \"to\": \"https://foo.bar.dev\" // the url of the service to mirror\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"MirroringPlugin\" : {\n \"enabled\" : true,\n \"to\" : \"https://foo.bar.dev\",\n \"captureResponse\" : false,\n \"generateEvents\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.oauth1.OAuth1CallerPlugin }\n\n## OAuth1 caller\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `OAuth1Caller`\n\n### Description\n\nThis plugin can be used to call APIs that are authenticated using OAuth1.\n The consumer key, consumer secret, OAuth token and OAuth token secret can be passed through the metadata of an api key\n or via the configuration of this plugin.\n\n\n\n### Default configuration\n\n```json\n{\n \"OAuth1Caller\" : {\n \"algo\" : \"HmacSHA512\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.oidc.OIDCHeaders }\n\n## OIDC headers\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `OIDCHeaders`\n\n### Description\n\nThis plugin injects headers containing tokens and profile from current OIDC provider.\n\n\n\n### Default configuration\n\n```json\n{\n \"OIDCHeaders\" : {\n \"profile\" : {\n \"send\" : true,\n \"headerName\" : \"X-OIDC-User\"\n },\n \"idtoken\" : {\n \"send\" : false,\n \"name\" : \"id_token\",\n \"headerName\" : \"X-OIDC-Id-Token\",\n \"jwt\" : true\n },\n \"accesstoken\" : {\n \"send\" : false,\n \"name\" : \"access_token\",\n \"headerName\" : \"X-OIDC-Access-Token\",\n \"jwt\" : true\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.security.SecurityTxt }\n\n## Security Txt\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `SecurityTxt`\n\n### Description\n\nThis plugin exposes a special route `/.well-known/security.txt` as proposed at 
[https://securitytxt.org/](https://securitytxt.org/).\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"SecurityTxt\": {\n \"Contact\": \"contact@foo.bar\", // mandatory, a link or e-mail address for people to contact you about security issues\n \"Encryption\": \"http://url-to-public-key\", // optional, a link to a key which security researchers should use to securely talk to you\n \"Acknowledgments\": \"http://url\", // optional, a link to a web page where you say thank you to security researchers who have helped you\n \"Preferred-Languages\": \"en, fr, es\", // optional\n \"Policy\": \"http://url\", // optional, a link to a policy detailing what security researchers should do when searching for or reporting security issues\n \"Hiring\": \"http://url\", // optional, a link to any security-related job openings in your organisation\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"SecurityTxt\" : {\n \"Contact\" : \"contact@foo.bar\",\n \"Encryption\" : \"https://...\",\n \"Acknowledgments\" : \"https://...\",\n \"Preferred-Languages\" : \"en, fr\",\n \"Policy\" : \"https://...\",\n \"Hiring\" : \"https://...\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.static.StaticResponse }\n\n## Static Response\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `StaticResponse`\n\n### Description\n\nThis plugin returns a static response for any request\n\n\n\n### Default configuration\n\n```json\n{\n \"StaticResponse\" : {\n \"status\" : 200,\n \"headers\" : {\n \"Content-Type\" : \"application/json\"\n },\n \"body\" : \"{\\\"message\\\":\\\"hello world!\\\"}\",\n \"bodyBase64\" : null\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.useragent.UserAgentInfoEndpoint }\n\n## User-Agent endpoint\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: ``none``\n\n### Description\n\nThis 
plugin will expose current user-agent information on the following endpoint.\n\n`/.well-known/otoroshi/plugins/user-agent`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.useragent.UserAgentInfoHeader }\n\n## User-Agent header\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `UserAgentInfoHeader`\n\n### Description\n\nThis plugin will send information extracted by the User-Agent details extractor to the target service in a header.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"UserAgentInfoHeader\": {\n \"headerName\": \"X-User-Agent-Info\" // header in which info will be sent\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"UserAgentInfoHeader\" : {\n \"headerName\" : \"X-User-Agent-Info\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-transformer #otoroshi.plugins.workflow.WorkflowEndpoint }\n\n## [DEPRECATED] Workflow endpoint\n\n\n\n### Infos\n\n* plugin type: `transformer`\n* configuration root: `WorkflowEndpoint`\n\n### Description\n\nThis plugin runs a workflow and returns the response\n\n\n\n### Default configuration\n\n```json\n{\n \"WorkflowEndpoint\" : {\n \"workflow\" : { }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.biscuit.BiscuitValidator }\n\n## Biscuit token validator\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: ``none``\n\n### Description\n\nThis plugin validates a Biscuit token.\n\n\n\n### Default configuration\n\n```json\n{\n \"publicKey\" : \"xxxxxx\",\n \"checks\" : [ ],\n \"facts\" : [ ],\n \"resources\" : [ ],\n \"rules\" : [ ],\n \"revocation_ids\" : [ ],\n \"enforce\" : false,\n \"extractor\" : {\n \"type\" : \"header\",\n \"name\" : \"Authorization\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.clientcert.HasClientCertMatchingApikeyValidator }\n\n## 
Client Certificate + Api Key only\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: ``none``\n\n### Description\n\nCheck if a client certificate is present in the request and that the apikey used matches the client certificate.\nYou can set the client cert. DN in an apikey metadata named `allowed-client-cert-dn`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.clientcert.HasClientCertMatchingHttpValidator }\n\n## Client certificate matching (over http)\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `HasClientCertMatchingHttpValidator`\n\n### Description\n\nCheck if the client certificate matches the following configuration\n\nThe expected response from the http service is\n\n```json\n{\n \"serialNumbers\": [], // allowed certificate serial numbers\n \"subjectDNs\": [], // allowed certificate DNs\n \"issuerDNs\": [], // allowed certificate issuer DNs\n \"regexSubjectDNs\": [], // allowed certificate DNs matching regex\n \"regexIssuerDNs\": [] // allowed certificate issuer DNs matching regex\n}\n```\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"HasClientCertMatchingValidator\": {\n \"url\": \"...\", // url for the call\n \"headers\": {}, // http headers for the call\n \"ttl\": 600000, // cache ttl\n \"mtlsConfig\": {\n \"certId\": \"xxxxx\",\n \"mtls\": false,\n \"loose\": false\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"HasClientCertMatchingHttpValidator\" : {\n \"url\" : \"http://foo.bar\",\n \"ttl\" : 600000,\n \"headers\" : { },\n \"mtlsConfig\" : {\n \"certId\" : \"...\",\n \"mtls\" : false,\n \"loose\" : false\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.clientcert.HasClientCertMatchingValidator }\n\n## Client certificate matching\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `HasClientCertMatchingValidator`\n\n### Description\n\nCheck if the client certificate matches the following configuration\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"HasClientCertMatchingValidator\": {\n \"serialNumbers\": [], // allowed certificate serial numbers\n \"subjectDNs\": [], // allowed certificate DNs\n \"issuerDNs\": [], // allowed certificate issuer DNs\n \"regexSubjectDNs\": [], // allowed certificate DNs matching regex\n \"regexIssuerDNs\": [] // allowed certificate issuer DNs matching regex\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"HasClientCertMatchingValidator\" : {\n \"serialNumbers\" : [ ],\n \"subjectDNs\" : [ ],\n \"issuerDNs\" : [ ],\n \"regexSubjectDNs\" : [ ],\n \"regexIssuerDNs\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.clientcert.HasClientCertValidator }\n\n## Client Certificate Only\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: ``none``\n\n### Description\n\nCheck if a client certificate is present in the request\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.hmac.HMACValidator }\n\n## HMAC access validator\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `HMACAccessValidator`\n\n### Description\n\nThis plugin can be used to check if an HMAC signature is present and valid in the Authorization header.\n\n\n\n### Default configuration\n\n```json\n{\n \"HMACAccessValidator\" : {\n \"secret\" : \"\"\n }\n}\n```\n\n\n\n### Documentation\n\n\n The HMAC signature needs to be set on the `Authorization` or `Proxy-Authorization` header.\n The format of this header should be: `hmac algorithm=\"\", headers=\"
\", signature=\"\"`\n As example, a simple nodeJS call with the expected header\n ```js\n const crypto = require('crypto');\n const fetch = require('node-fetch');\n\n const date = new Date()\n const secret = \"my-secret\" // equal to the api key secret by default\n\n const algo = \"sha512\"\n const signature = crypto.createHmac(algo, secret)\n .update(date.getTime().toString())\n .digest('base64');\n\n fetch('http://myservice.oto.tools:9999/api/test', {\n headers: {\n \"Otoroshi-Client-Id\": \"my-id\",\n \"Otoroshi-Client-Secret\": \"my-secret\",\n \"Date\": date.getTime().toString(),\n \"Authorization\": `hmac algorithm=\"hmac-${algo}\", headers=\"Date\", signature=\"${signature}\"`,\n \"Accept\": \"application/json\"\n }\n })\n .then(r => r.json())\n .then(console.log)\n ```\n In this example, we have an Otoroshi service deployed on http://myservice.oto.tools:9999/api/test, protected by api keys.\n The secret used is the secret of the api key (by default, but you can change it and define a secret on the plugin configuration).\n We send the base64 encoded date of the day, signed by the secret, in the Authorization header. 
We specify the headers signed and the type of algorithm used.\n You can sign more than one header, but you have to list them in the headers field (each one separated by a space, e.g. headers=\"Date KeyId\").\n The algorithm used can be HMAC-SHA1, HMAC-SHA256, HMAC-SHA384 or HMAC-SHA512.\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.oidc.OIDCAccessTokenValidator }\n\n## OIDC access_token validator\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `OIDCAccessTokenValidator`\n\n### Description\n\nThis plugin will use the third party apikey configuration and apply it while keeping the apikey mechanism of otoroshi.\nUse it to combine apikey validation and OIDC access_token validation.\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"OIDCAccessTokenValidator\": {\n \"enabled\": true,\n \"atLeastOne\": false,\n // config is optional and can be either an object config or an array of objects\n \"config\": {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"OIDCAccessTokenValidator\" : {\n \"enabled\" : true,\n \"atLeastOne\" : false,\n \"config\" : {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n 
\"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.quotas.ServiceQuotas }\n\n## Public quotas\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `ServiceQuotas`\n\n### Description\n\nThis plugin will enforce public quotas on the current service\n\n\n\n\n\n\n\n### Default configuration\n\n```json\n{\n \"ServiceQuotas\" : {\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-validator #otoroshi.plugins.users.HasAllowedUsersValidator }\n\n## Allowed users only\n\n\n\n### Infos\n\n* plugin type: `validator`\n* configuration root: `HasAllowedUsersValidator`\n\n### Description\n\nThis plugin only let allowed users pass\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"HasAllowedUsersValidator\": {\n \"usernames\": [], // allowed usernames\n \"emails\": [], // allowed user email addresses\n \"emailDomains\": [], // allowed user email domains\n \"metadataMatch\": [], // json path expressions to match against user metadata. passes if one match\n \"metadataNotMatch\": [], // json path expressions to match against user metadata. passes if none match\n \"profileMatch\": [], // json path expressions to match against user profile. passes if one match\n \"profileNotMatch\": [], // json path expressions to match against user profile. 
passes if none match\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"HasAllowedUsersValidator\" : {\n \"usernames\" : [ ],\n \"emails\" : [ ],\n \"emailDomains\" : [ ],\n \"metadataMatch\" : [ ],\n \"metadataNotMatch\" : [ ],\n \"profileMatch\" : [ ],\n \"profileNotMatch\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.apikeys.ApikeyAuthModule }\n\n## Apikey auth module\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `ApikeyAuthModule`\n\n### Description\n\nThis plugin adds basic auth on a service, where credentials are valid apikeys of the current service.\n\n\n\n### Default configuration\n\n```json\n{\n \"ApikeyAuthModule\" : {\n \"realm\" : \"apikey-auth-module-realm\",\n \"noneTagIn\" : [ ],\n \"oneTagIn\" : [ ],\n \"allTagsIn\" : [ ],\n \"noneMetaIn\" : [ ],\n \"oneMetaIn\" : [ ],\n \"allMetaIn\" : [ ],\n \"noneMetaKeysIn\" : [ ],\n \"oneMetaKeyIn\" : [ ],\n \"allMetaKeysIn\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.apikeys.CertificateAsApikey }\n\n## Client certificate as apikey\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `CertificateAsApikey`\n\n### Description\n\nThis plugin uses a client certificate as an apikey. 
The apikey will be stored for classic apikey usage\n\n\n\n### Default configuration\n\n```json\n{\n \"CertificateAsApikey\" : {\n \"readOnly\" : false,\n \"allowClientIdOnly\" : false,\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"constrainedServicesOnly\" : false,\n \"tags\" : [ ],\n \"metadata\" : { }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.apikeys.ClientCredentialFlowExtractor }\n\n## Client Credential Flow ApiKey extractor\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: ``none``\n\n### Description\n\nThis plugin can extract an apikey from an opaque access_token generated by the `ClientCredentialFlow` plugin\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.biscuit.BiscuitExtractor }\n\n## Apikey from Biscuit token extractor\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: ``none``\n\n### Description\n\nThis plugin extracts an apikey from a Biscuit token where the biscuit has an #authority fact 'client_id' containing the\napikey client_id and an #authority fact 'client_sign' that is the HMAC256 signature of the apikey client_id with the apikey client_secret\n\n\n\n### Default configuration\n\n```json\n{\n \"publicKey\" : \"xxxxxx\",\n \"checks\" : [ ],\n \"facts\" : [ ],\n \"resources\" : [ ],\n \"rules\" : [ ],\n \"revocation_ids\" : [ ],\n \"enforce\" : false,\n \"extractor\" : {\n \"type\" : \"header\",\n \"name\" : \"Authorization\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.discovery.DiscoveryTargetsSelector }\n\n## Service discovery target selector (service discovery)\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `DiscoverySelfRegistration`\n\n### Description\n\nThis plugin selects a target in the pool of discovered targets for this service.\nUse it in combination with either 
`DiscoverySelfRegistrationSink` or `DiscoverySelfRegistrationTransformer` to make it work using the `self registration` pattern.\nOr use an implementation of `DiscoveryJob` for the `third party registration pattern`.\n\nThis plugin accepts the following configuration:\n\n\n\n### Default configuration\n\n```json\n{\n \"DiscoverySelfRegistration\" : {\n \"hosts\" : [ ],\n \"targetTemplate\" : { },\n \"registrationTtl\" : 60000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.geoloc.IpStackGeolocationInfoExtractor }\n\n## Geolocation details extractor (using IpStack api)\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `GeolocationInfo`\n\n### Description\n\nThis plugin extracts geolocation information from the ip address using the [IpStack dbs](https://ipstack.com/).\nThe information is stored in plugin attrs for other plugins to use\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"GeolocationInfo\": {\n \"apikey\": \"xxxxxxx\",\n \"timeout\": 2000, // timeout in ms\n \"log\": false // will log geolocation details\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"GeolocationInfo\" : {\n \"apikey\" : \"xxxxxxx\",\n \"timeout\" : 2000,\n \"log\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.geoloc.MaxMindGeolocationInfoExtractor }\n\n## Geolocation details extractor (using Maxmind db)\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `GeolocationInfo`\n\n### Description\n\nThis plugin extracts geolocation information from the ip address using the [Maxmind dbs](https://www.maxmind.com/en/geoip2-databases).\nThe information is stored in plugin attrs for other plugins to use\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"GeolocationInfo\": {\n \"path\": \"/foo/bar/cities.mmdb\", // file path, can be \"global\"\n \"log\": false // will log geolocation 
details\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"GeolocationInfo\" : {\n \"path\" : \"global\",\n \"log\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.jwt.JwtUserExtractor }\n\n## Jwt user extractor\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `JwtUserExtractor`\n\n### Description\n\nThis plugin extracts a user from a JWT token\n\n\n\n### Default configuration\n\n```json\n{\n \"JwtUserExtractor\" : {\n \"verifier\" : \"\",\n \"strict\" : true,\n \"namePath\" : \"name\",\n \"emailPath\" : \"email\",\n \"metaPath\" : null\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.oidc.OIDCAccessTokenAsApikey }\n\n## OIDC access_token as apikey\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `OIDCAccessTokenAsApikey`\n\n### Description\n\nThis plugin will use the third party apikey configuration to generate an apikey\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"OIDCAccessTokenAsApikey\": {\n \"enabled\": true,\n \"atLeastOne\": false,\n // config is optional and can be either an object config or an array of objects\n \"config\": {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n}\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"OIDCAccessTokenAsApikey\" : {\n \"enabled\" : true,\n \"atLeastOne\" : false,\n \"config\" : {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : 
\"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-preroute #otoroshi.plugins.useragent.UserAgentExtractor }\n\n## User-Agent details extractor\n\n\n\n### Infos\n\n* plugin type: `preroute`\n* configuration root: `UserAgentInfo`\n\n### Description\n\nThis plugin extract informations from User-Agent header such as browsser version, OS version, etc.\nThe informations are store in plugins attrs for other plugins to use\n\nThis plugin can accept the following configuration\n\n```json\n{\n \"UserAgentInfo\": {\n \"log\": false // will log user-agent details\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"UserAgentInfo\" : {\n \"log\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-sink #otoroshi.plugins.apikeys.ClientCredentialService }\n\n## Client Credential Service\n\n\n\n### Infos\n\n* plugin type: `sink`\n* configuration root: `ClientCredentialService`\n\n### Description\n\nThis plugin add an an oauth client credentials service (`https://unhandleddomain/.well-known/otoroshi/oauth/token`) to create an access_token given a client id and secret.\n\n```json\n{\n \"ClientCredentialService\" : {\n \"domain\" : \"*\",\n \"expiration\" : 3600000,\n \"defaultKeyPair\" : \"otoroshi-jwt-signing\",\n \"secure\" : true\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"ClientCredentialService\" : {\n \"domain\" : \"*\",\n \"expiration\" : 3600000,\n \"defaultKeyPair\" : \"otoroshi-jwt-signing\",\n \"secure\" : true\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-sink #otoroshi.plugins.discovery.DiscoverySelfRegistrationSink }\n\n## Global 
self registration endpoints (service discovery)\n\n\n\n### Infos\n\n* plugin type: `sink`\n* configuration root: `DiscoverySelfRegistration`\n\n### Description\n\nThis plugin adds support for a self registration endpoint on specific hostnames.\n\nThis plugin accepts the following configuration:\n\n\n\n### Default configuration\n\n```json\n{\n \"DiscoverySelfRegistration\" : {\n \"hosts\" : [ ],\n \"targetTemplate\" : { },\n \"registrationTtl\" : 60000\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-sink #otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator }\n\n## Kubernetes admission validator webhook\n\n\n\n### Infos\n\n* plugin type: `sink`\n* configuration root: ``none``\n\n### Description\n\nThis plugin exposes a webhook to kubernetes to handle manifest validation\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-sink #otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector }\n\n## Kubernetes sidecar injector webhook\n\n\n\n### Infos\n\n* plugin type: `sink`\n* configuration root: ``none``\n\n### Description\n\nThis plugin exposes a webhook to kubernetes to inject otoroshi-sidecar in pods\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.jobs.StateExporter }\n\n## Otoroshi state exporter job\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `StateExporter`\n\n### Description\n\nThis job sends an event containing the full otoroshi export every n seconds\n\n\n\n### Default configuration\n\n```json\n{\n \"StateExporter\" : {\n \"every_sec\" : 3600,\n \"format\" : \"json\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.next.plugins.TailscaleCertificatesFetcherJob }\n\n## Tailscale certificate fetcher job\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: ``none``\n\n### Description\n\nThis job will fetch certificates from the Tailscale ACME provider\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div 
{ .plugin .plugin-hidden .plugin-kind-job #otoroshi.next.plugins.TailscaleTargetsJob }\n\n## Tailscale targets job\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: ``none``\n\n### Description\n\nThis job aggregates possible online Tailscale targets\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.plugins.jobs.kubernetes.KubernetesIngressControllerJob }\n\n## Kubernetes Ingress Controller\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `KubernetesConfig`\n\n### Description\n\nThis plugin enables Otoroshi as an Ingress Controller\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : 
\"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n 
\"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob }\n\n## Kubernetes Otoroshi CRDs Controller\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `KubernetesConfig`\n\n### Description\n\nThis plugin enables Otoroshi CRDs Controller\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n 
\"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" 
: false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.plugins.jobs.kubernetes.KubernetesToOtoroshiCertSyncJob }\n\n## Kubernetes to Otoroshi certs. 
synchronizer\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `KubernetesConfig`\n\n### Description\n\nThis plugin syncs. TLS secrets from Kubernetes to Otoroshi\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n 
\"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n 
\"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-job #otoroshi.plugins.jobs.kubernetes.OtoroshiToKubernetesCertSyncJob }\n\n## Otoroshi certs. to Kubernetes secrets synchronizer\n\n\n\n### Infos\n\n* plugin type: `job`\n* configuration root: `KubernetesConfig`\n\n### Description\n\nThis plugin syncs. Otoroshi certs to Kubernetes TLS secrets\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n \"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 
60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"KubernetesConfig\" : {\n \"endpoint\" : \"https://kube.cluster.dev\",\n \"token\" : \"xxx\",\n \"userPassword\" : \"user:password\",\n \"caCert\" : \"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt\",\n \"trust\" : false,\n \"namespaces\" : [ \"*\" ],\n \"labels\" : { },\n \"namespacesLabels\" : { },\n \"ingressClasses\" : [ \"otoroshi\" ],\n \"defaultGroup\" : \"default\",\n \"ingresses\" : true,\n \"crds\" : true,\n \"coreDnsIntegration\" : false,\n \"coreDnsIntegrationDryRun\" : false,\n \"coreDnsAzure\" : false,\n \"kubeLeader\" : false,\n \"restartDependantDeployments\" : true,\n \"useProxyState\" : false,\n \"watch\" : true,\n \"syncDaikokuApikeysOnly\" : false,\n \"kubeSystemNamespace\" : \"kube-system\",\n \"coreDnsConfigMapName\" : \"coredns\",\n \"coreDnsDeploymentName\" : \"coredns\",\n \"corednsPort\" : 53,\n 
\"otoroshiServiceName\" : \"otoroshi-service\",\n \"otoroshiNamespace\" : \"otoroshi\",\n \"clusterDomain\" : \"cluster.local\",\n \"syncIntervalSeconds\" : 60,\n \"coreDnsEnv\" : null,\n \"watchTimeoutSeconds\" : 60,\n \"watchGracePeriodSeconds\" : 5,\n \"mutatingWebhookName\" : \"otoroshi-admission-webhook-injector\",\n \"validatingWebhookName\" : \"otoroshi-admission-webhook-validation\",\n \"meshDomain\" : \"otoroshi.mesh\",\n \"openshiftDnsOperatorIntegration\" : false,\n \"openshiftDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"openshiftDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"openshiftDnsOperatorCoreDnsPort\" : 5353,\n \"kubeDnsOperatorIntegration\" : false,\n \"kubeDnsOperatorCoreDnsNamespace\" : \"otoroshi\",\n \"kubeDnsOperatorCoreDnsName\" : \"otoroshi-dns\",\n \"kubeDnsOperatorCoreDnsPort\" : 5353,\n \"connectionTimeout\" : 5000,\n \"idleTimeout\" : 30000,\n \"callAndStreamTimeout\" : 30000,\n \"templates\" : {\n \"service-group\" : { },\n \"service-descriptor\" : { },\n \"apikeys\" : { },\n \"global-config\" : { },\n \"jwt-verifier\" : { },\n \"tcp-service\" : { },\n \"certificate\" : { },\n \"auth-module\" : { },\n \"script\" : { },\n \"data-exporters\" : { },\n \"organizations\" : { },\n \"teams\" : { },\n \"admins\" : { },\n \"webhooks\" : { }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-request-handler #otoroshi.next.proxy.ProxyEngine }\n\n## Otoroshi next proxy engine (experimental)\n\n\n\n### Infos\n\n* plugin type: `request-handler`\n* configuration root: `NextGenProxyEngine`\n\n### Description\n\nThis plugin holds the next generation otoroshi proxy engine implementation. 
This engine is **experimental** and may not work as expected!\n\nYou can activate this plugin only on some domain names so you can easily A/B test the new engine.\nThe new proxy engine is designed to be more reactive and more efficient generally.\nIt is also designed to be very efficient on path routing where it wasn't the old engine's strong suit.\n\nThe idea is to only rely on plugins to work and avoid losing time with features that are not used in service descriptors.\nAn automated conversion happens for every service descriptor. If the exposed domain is handled by this plugin, it will be served by this plugin.\nThis plugin introduces new entities that will replace (one day maybe) service descriptors:\n\n - `routes`: a unique routing rule based on hostname, path, method and headers that will execute a bunch of plugins\n - `route-compositions`: multiple routing rules based on hostname, path, method and headers that will execute the same list of plugins\n - `backends`: a list of targets to contact a backend\n\nas an example, let's say you want to use the new engine on your service exposed on `api.foo.bar/api`.\nTo do that, just add the plugin in the `global plugins` section of the danger zone, inject the default configuration,\nenable it and in `domains` add the value `api.foo.bar` (it is possible to use `*.foo.bar` if that's what you want to do).\nThe next time a request hits the `api.foo.bar` domain, the new engine will handle it instead of the old one.\n\n\n\n### Default configuration\n\n```json\n{\n \"NextGenProxyEngine\" : {\n \"enabled\" : true,\n \"domains\" : [ \"*\" ],\n \"deny_domains\" : [ ],\n \"reporting\" : true,\n \"merge_sync_steps\" : true,\n \"export_reporting\" : false,\n \"apply_legacy_checks\" : true,\n \"debug\" : false,\n \"capture\" : false,\n \"captureMaxEntitySize\" : 4194304,\n \"debug_headers\" : false,\n \"routing_strategy\" : \"tree\"\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .plugin .plugin-hidden .plugin-kind-request-handler 
#otoroshi.script.ForwardTrafficHandler }\n\n## Forward traffic\n\n\n\n### Infos\n\n* plugin type: `request-handler`\n* configuration root: `ForwardTrafficHandler`\n\n### Description\n\nThis plugin can be used to forward raw traffic to a URL without passing through otoroshi routing\n\n\n\n### Default configuration\n\n```json\n{\n \"ForwardTrafficHandler\" : {\n \"domains\" : {\n \"my.domain.tld\" : {\n \"baseUrl\" : \"https://my.otherdomain.tld\",\n \"secret\" : \"jwt signing secret\",\n \"service\" : {\n \"id\" : \"service id for analytics\",\n \"name\" : \"service name for analytics\"\n }\n }\n }\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n\n\n"},{"name":"built-in-plugins.md","id":"/plugins/built-in-plugins.md","url":"/plugins/built-in-plugins.html","title":"Built-in plugins","content":"# Built-in plugins\n\nOtoroshi next provides some plugins out of the box. Here are the available plugins with their documentation and reference configuration\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.AdditionalHeadersIn }\n\n## Additional headers in\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.AdditionalHeadersIn`\n\n### Description\n\nThis plugin adds headers in the incoming otoroshi request\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.AdditionalHeadersOut }\n\n## Additional headers out\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.AdditionalHeadersOut`\n\n### Description\n\nThis plugin adds headers in the otoroshi response\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.AllowHttpMethods }\n\n## Allowed HTTP methods\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin 
reference\n\n`cp:otoroshi.next.plugins.AllowHttpMethods`\n\n### Description\n\nThis plugin verifies the current request only uses allowed http methods\n\n\n\n### Default configuration\n\n```json\n{\n \"allowed\" : [ ],\n \"forbidden\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ApikeyAuthModule }\n\n## Apikey auth module\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ApikeyAuthModule`\n\n### Description\n\nThis plugin adds basic auth on a service where credentials are valid apikeys of the current service.\n\n\n\n### Default configuration\n\n```json\n{\n \"realm\" : \"apikey-auth-module-realm\",\n \"matcher\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ApikeyCalls }\n\n## Apikeys\n\n### Defined on steps\n\n - `MatchRoute`\n - `ValidateAccess`\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ApikeyCalls`\n\n### Description\n\nThis plugin expects to find an apikey to allow the request to pass\n\n\n\n### Default configuration\n\n```json\n{\n \"extractors\" : {\n \"basic\" : {\n \"enabled\" : true,\n \"header_name\" : null,\n \"query_name\" : null\n },\n \"custom_headers\" : {\n \"enabled\" : true,\n \"client_id_header_name\" : null,\n \"client_secret_header_name\" : null\n },\n \"client_id\" : {\n \"enabled\" : true,\n \"header_name\" : null,\n \"query_name\" : null\n },\n \"jwt\" : {\n \"enabled\" : true,\n \"secret_signed\" : true,\n \"keypair_signed\" : true,\n \"include_request_attrs\" : false,\n \"max_jwt_lifespan_sec\" : null,\n \"header_name\" : null,\n \"query_name\" : null,\n \"cookie_name\" : null\n }\n },\n \"routing\" : {\n \"enabled\" : false\n },\n \"validate\" : true,\n \"mandatory\" : true,\n \"pass_with_user\" : false,\n \"wipe_backend_request\" : true,\n \"update_quotas\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl 
#otoroshi.next.plugins.ApikeyQuotas }\n\n## Apikey quotas\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ApikeyQuotas`\n\n### Description\n\nIncrements quotas for the current apikey. Useful when 'legacy checks' are disabled on a service/globally or when apikeys are extracted in a custom fashion.\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.AuthModule }\n\n## Authentication\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.AuthModule`\n\n### Description\n\nThis plugin applies an authentication module\n\n\n\n### Default configuration\n\n```json\n{\n \"pass_with_apikey\" : false,\n \"auth_module\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.BasicAuthCaller }\n\n## Basic Auth. caller\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.BasicAuthCaller`\n\n### Description\n\nThis plugin can be used to call apis that are authenticated using basic auth.\n\n\n\n### Default configuration\n\n```json\n{\n \"username\" : null,\n \"passaword\" : null,\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Basic %s\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.BuildMode }\n\n## Build mode\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.BuildMode`\n\n### Description\n\nThis plugin displays a build page\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.CanaryMode }\n\n## Canary mode\n\n### Defined on steps\n\n - `PreRoute`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.CanaryMode`\n\n### Description\n\nThis plugin can split a portion of the traffic to canary backends\n\n\n\n### Default configuration\n\n```json\n{\n \"traffic\" : 0.2,\n \"targets\" : [ ],\n \"root\" : 
\"/\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ContextValidation }\n\n## Context validator\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ContextValidation`\n\n### Description\n\nThis plugin validates the current context using JSONPath validators.\n\nThis plugin let you configure a list of validators that will check if the current call can pass.\nA validator is composed of a [JSONPath](https://goessner.net/articles/JsonPath/) that will tell what to check and a value that is the expected value.\nThe JSONPath will be applied on a document that will look like\n\n```js\n{\n \"snowflake\" : \"1516772930422308903\",\n \"apikey\" : { // current apikey\n \"clientId\" : \"vrmElDerycXrofar\",\n \"clientName\" : \"default-apikey\",\n \"metadata\" : {\n \"foo\" : \"bar\"\n },\n \"tags\" : [ ]\n },\n \"user\" : null, // current user\n \"request\" : {\n \"id\" : 1,\n \"method\" : \"GET\",\n \"headers\" : {\n \"Host\" : \"ctx-validation-next-gen.oto.tools:9999\",\n \"Accept\" : \"*/*\",\n \"User-Agent\" : \"curl/7.64.1\",\n \"Authorization\" : \"Basic dnJtRWxEZXJ5Y1hyb2ZhcjpvdDdOSTkyVGI2Q2J4bWVMYU9UNzJxamdCU2JlRHNLbkxtY1FBcXBjVjZTejh0Z3I1b2RUOHAzYjB5SEVNRzhZ\",\n \"Remote-Address\" : \"127.0.0.1:58929\",\n \"Timeout-Access\" : \"\",\n \"Raw-Request-URI\" : \"/foo\",\n \"Tls-Session-Info\" : \"Session(1650461821330|SSL_NULL_WITH_NULL_NULL)\"\n },\n \"cookies\" : [ ],\n \"tls\" : false,\n \"uri\" : \"/foo\",\n \"path\" : \"/foo\",\n \"version\" : \"HTTP/1.1\",\n \"has_body\" : false,\n \"remote\" : \"127.0.0.1\",\n \"client_cert_chain\" : null\n },\n \"config\" : {\n \"validators\" : [ {\n \"path\" : \"$.apikey.metadata.foo\",\n \"value\" : \"bar\"\n } ]\n },\n \"global_config\" : { ... 
}, // global config\n \"attrs\" : {\n \"otoroshi.core.SnowFlake\" : \"1516772930422308903\",\n \"otoroshi.core.ElCtx\" : {\n \"requestId\" : \"1516772930422308903\",\n \"requestSnowflake\" : \"1516772930422308903\",\n \"requestTimestamp\" : \"2022-04-20T15:37:01.548+02:00\"\n },\n \"otoroshi.next.core.Report\" : \"otoroshi.next.proxy.NgExecutionReport@277b44e2\",\n \"otoroshi.core.RequestStart\" : 1650461821545,\n \"otoroshi.core.RequestWebsocket\" : false,\n \"otoroshi.core.RequestCounterOut\" : 0,\n \"otoroshi.core.RemainingQuotas\" : {\n \"authorizedCallsPerSec\" : 10000000,\n \"currentCallsPerSec\" : 0,\n \"remainingCallsPerSec\" : 10000000,\n \"authorizedCallsPerDay\" : 10000000,\n \"currentCallsPerDay\" : 2,\n \"remainingCallsPerDay\" : 9999998,\n \"authorizedCallsPerMonth\" : 10000000,\n \"currentCallsPerMonth\" : 269,\n \"remainingCallsPerMonth\" : 9999731\n },\n \"otoroshi.next.core.MatchedRoutes\" : \"MutableList(route_022825450-e97d-42ed-8e22-b23342c1c7c8)\",\n \"otoroshi.core.RequestNumber\" : 1,\n \"otoroshi.next.core.Route\" : { ... }, // current route as json\n \"otoroshi.core.RequestTimestamp\" : \"2022-04-20T15:37:01.548+02:00\",\n \"otoroshi.core.ApiKey\" : { ... }, // current apikey as json\n \"otoroshi.core.User\" : { ... }, // current user as json\n \"otoroshi.core.RequestCounterIn\" : 0\n },\n \"route\" : { ... 
},\n \"token\" : null // current valid jwt token if one\n}\n```\n\nthe expected value support some syntax tricks like\n\n* `Not(value)` on a string to check if the current value does not equals another value\n* `Regex(regex)` on a string to check if the current value matches the regex\n* `RegexNot(regex)` on a string to check if the current value does not matches the regex\n* `Wildcard(*value*)` on a string to check if the current value matches the value with wildcards\n* `WildcardNot(*value*)` on a string to check if the current value does not matches the value with wildcards\n* `Contains(value)` on a string to check if the current value contains a value\n* `ContainsNot(value)` on a string to check if the current value does not contains a value\n* `Contains(Regex(regex))` on an array to check if one of the item of the array matches the regex\n* `ContainsNot(Regex(regex))` on an array to check if one of the item of the array does not matches the regex\n* `Contains(Wildcard(*value*))` on an array to check if one of the item of the array matches the wildcard value\n* `ContainsNot(Wildcard(*value*))` on an array to check if one of the item of the array does not matches the wildcard value\n* `Contains(value)` on an array to check if the array contains a value\n* `ContainsNot(value)` on an array to check if the array does not contains a value\n\nfor instance to check if the current apikey has a metadata name `foo` with a value containing `bar`, you can write the following validator\n\n```js\n{\n \"path\": \"$.apikey.metadata.foo\",\n \"value\": \"Contains(bar)\"\n}\n```\n\n\n\n### Default configuration\n\n```json\n{\n \"validators\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.Cors }\n\n## CORS\n\n### Defined on steps\n\n - `PreRoute`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.Cors`\n\n### Description\n\nThis plugin applies CORS rules\n\n\n\n### Default configuration\n\n```json\n{\n 
\"allow_origin\" : \"*\",\n \"expose_headers\" : [ ],\n \"allow_headers\" : [ ],\n \"allow_methods\" : [ ],\n \"excluded_patterns\" : [ ],\n \"max_age\" : null,\n \"allow_credentials\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.DisableHttp10 }\n\n## Disable HTTP/1.0\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.DisableHttp10`\n\n### Description\n\nThis plugin forbids HTTP/1.0 requests\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.EndlessHttpResponse }\n\n## Endless HTTP responses\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.EndlessHttpResponse`\n\n### Description\n\nThis plugin returns 128 Gb of 0 to the ip addresses is in the list\n\n\n\n### Default configuration\n\n```json\n{\n \"finger\" : false,\n \"addresses\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.EurekaServerSink }\n\n## Eureka instance\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.EurekaServerSink`\n\n### Description\n\nEureka plugin description\n\n\n\n### Default configuration\n\n```json\n{\n \"evictionTimeout\" : 300\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.EurekaTarget }\n\n## Internal Eureka target\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.EurekaTarget`\n\n### Description\n\nThis plugin can be used to used a target that come from an internal Eureka server.\n If you want to use a target which it locate outside of Otoroshi, you must use the External Eureka Server.\n\n\n\n### Default configuration\n\n```json\n{\n \"eureka_server\" : null,\n \"eureka_app\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ExternalEurekaTarget }\n\n## External Eureka 
target\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ExternalEurekaTarget`\n\n### Description\n\nThis plugin can be used to use a target that comes from an external Eureka server.\n If you want to use a target that is directly exposed by an implementation of Eureka by Otoroshi,\n you must use the Internal Eureka Server.\n\n\n\n### Default configuration\n\n```json\n{\n \"eureka_server\" : null,\n \"eureka_app\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ForceHttpsTraffic }\n\n## Force HTTPS traffic\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ForceHttpsTraffic`\n\n### Description\n\nThis plugin verifies the current request uses HTTPS\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GlobalMaintenanceMode }\n\n## Global Maintenance mode\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GlobalMaintenanceMode`\n\n### Description\n\nThis plugin displays a maintenance page for every service. Useful when 'legacy checks' are disabled on a service/globally\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GlobalPerIpAddressThrottling }\n\n## Global per ip address throttling \n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GlobalPerIpAddressThrottling`\n\n### Description\n\nEnforce global per ip address throttling. Useful when 'legacy checks' are disabled on a service/globally\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GlobalThrottling }\n\n## Global throttling \n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GlobalThrottling`\n\n### Description\n\nEnforce global throttling. 
Useful when 'legacy checks' are disabled on a service/globally\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GraphQLBackend }\n\n## GraphQL Composer\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GraphQLBackend`\n\n### Description\n\nThis plugin exposes a GraphQL API that you can compose with whatever you want\n\n\n\n### Default configuration\n\n```json\n{\n \"schema\" : \"\\n type User {\\n name: String!\\n firstname: String!\\n }\\n\\n type Query {\\n users: [User] @json(data: \\\"[{ \\\\\\\"firstname\\\\\\\": \\\\\\\"Foo\\\\\\\", \\\\\\\"name\\\\\\\": \\\\\\\"Bar\\\\\\\" }, { \\\\\\\"firstname\\\\\\\": \\\\\\\"Bar\\\\\\\", \\\\\\\"name\\\\\\\": \\\\\\\"Foo\\\\\\\" }]\\\")\\n }\\n \",\n \"permissions\" : [ ],\n \"initial_data\" : null,\n \"max_depth\" : 15\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GraphQLProxy }\n\n## GraphQL Proxy\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GraphQLProxy`\n\n### Description\n\nThis plugin can apply validations (query, schema, max depth, max complexity) on graphql endpoints\n\n\n\n### Default configuration\n\n```json\n{\n \"endpoint\" : \"https://countries.trevorblades.com/graphql\",\n \"schema\" : null,\n \"max_depth\" : 50,\n \"max_complexity\" : 50000,\n \"path\" : \"/graphql\",\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GraphQLQuery }\n\n## GraphQL Query to REST\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GraphQLQuery`\n\n### Description\n\nThis plugin can be used to call GraphQL query endpoints and expose it as a REST endpoint\n\n\n\n### Default configuration\n\n```json\n{\n \"url\" : \"https://some.graphql/endpoint\",\n \"headers\" : { },\n \"method\" : \"POST\",\n \"query\" : \"{\\n\\n}\",\n \"timeout\" : 60000,\n 
\"response_path\" : null,\n \"response_filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.GzipResponseCompressor }\n\n## Gzip compression\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.GzipResponseCompressor`\n\n### Description\n\nThis plugin can compress responses using gzip\n\n\n\n### Default configuration\n\n```json\n{\n \"excluded_patterns\" : [ ],\n \"allowed_list\" : [ \"text/*\", \"application/javascript\", \"application/json\" ],\n \"blocked_list\" : [ ],\n \"buffer_size\" : 8192,\n \"chunked_threshold\" : 102400,\n \"compression_level\" : 5\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.HMACCaller }\n\n## HMAC caller plugin\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.HMACCaller`\n\n### Description\n\nThis plugin can be used to call a \"protected\" api by an HMAC signature. 
It will add a signature with the secret configured on the plugin.\n The signature string will always be the content of the headers listed in the plugin configuration.\n\n\n\n### Default configuration\n\n```json\n{\n \"secret\" : null,\n \"algo\" : \"HMAC-SHA512\",\n \"authorizationHeader\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.HMACValidator }\n\n## HMAC access validator\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.HMACValidator`\n\n### Description\n\nThis plugin can be used to check if an HMAC signature is present and valid in the Authorization header.\n\n\n\n### Default configuration\n\n```json\n{\n \"secret\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.HeadersValidation }\n\n## Headers validation\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.HeadersValidation`\n\n### Description\n\nThis plugin validates the values of incoming request headers\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.Http3Switch }\n\n## Http3 traffic switch\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.Http3Switch`\n\n### Description\n\nThis plugin injects an additional alt-svc header to switch to the http3 server\n\n\n\n### Default configuration\n\n```json\n{\n \"ma\" : 3600\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.IpAddressAllowedList }\n\n## IP allowed list\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.IpAddressAllowedList`\n\n### Description\n\nThis plugin verifies the current request ip address is in the allowed list\n\n\n\n### Default configuration\n\n```json\n{\n \"addresses\" : [ 
]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.IpAddressBlockList }\n\n## IP block list\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.IpAddressBlockList`\n\n### Description\n\nThis plugin verifies the current request ip address is not in the blocked list\n\n\n\n### Default configuration\n\n```json\n{\n \"addresses\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JQ }\n\n## JQ\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JQ`\n\n### Description\n\nThis plugin lets you transform JSON bodies (in requests and responses) using [JQ filters](https://stedolan.github.io/jq/manual/#Basicfilters).\n\n\n\n### Default configuration\n\n```json\n{\n \"request\" : \".\",\n \"response\" : \"\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JQRequest }\n\n## JQ transform request\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JQRequest`\n\n### Description\n\nThis plugin lets you transform the request JSON body using [JQ filters](https://stedolan.github.io/jq/manual/#Basicfilters).\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : \".\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JQResponse }\n\n## JQ transform response\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JQResponse`\n\n### Description\n\nThis plugin lets you transform the JSON response using [JQ filters](https://stedolan.github.io/jq/manual/#Basicfilters).\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : \".\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JsonToXmlRequest }\n\n## request body json-to-xml\n\n### Defined on steps\n\n - 
`TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JsonToXmlRequest`\n\n### Description\n\nThis plugin transforms the incoming request body from json to xml and may apply a jq transformation\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JsonToXmlResponse }\n\n## response body json-to-xml\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JsonToXmlResponse`\n\n### Description\n\nThis plugin transforms the response body from json to xml and may apply a jq transformation\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JwtSigner }\n\n## Jwt signer\n\n### Defined on steps\n\n - `ValidateAccess`\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JwtSigner`\n\n### Description\n\nThis plugin can only generate tokens\n\n\n\n### Default configuration\n\n```json\n{\n \"verifier\" : null,\n \"replace_if_present\" : true,\n \"fail_if_present\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JwtVerification }\n\n## Jwt verifiers\n\n### Defined on steps\n\n - `ValidateAccess`\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JwtVerification`\n\n### Description\n\nThis plugin verifies the current request with one or more jwt verifiers\n\n\n\n### Default configuration\n\n```json\n{\n \"verifiers\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.JwtVerificationOnly }\n\n## Jwt verification only\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.JwtVerificationOnly`\n\n### Description\n\nThis plugin verifies the current request with one jwt verifier\n\n\n\n### Default configuration\n\n```json\n{\n \"verifier\" 
: null,\n \"fail_if_absent\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.MaintenanceMode }\n\n## Maintenance mode\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.MaintenanceMode`\n\n### Description\n\nThis plugin displays a maintenance page\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.MissingHeadersIn }\n\n## Missing headers in\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.MissingHeadersIn`\n\n### Description\n\nThis plugin adds headers (if missing) in the incoming otoroshi request\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.MissingHeadersOut }\n\n## Missing headers out\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.MissingHeadersOut`\n\n### Description\n\nThis plugin adds headers (if missing) in the otoroshi response\n\n\n\n### Default configuration\n\n```json\n{\n \"headers\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.MockResponses }\n\n## Mock Responses\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.MockResponses`\n\n### Description\n\nThis plugin returns mock responses\n\n\n\n### Default configuration\n\n```json\n{\n \"responses\" : [ ],\n \"pass_through\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgAuthModuleExpectedUser }\n\n## User logged in expected\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgAuthModuleExpectedUser`\n\n### Description\n\nThis plugin enforces that a user from any auth. 
module is logged in\n\n\n\n### Default configuration\n\n```json\n{\n \"only_from\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgAuthModuleUserExtractor }\n\n## User extraction from auth. module\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgAuthModuleUserExtractor`\n\n### Description\n\nThis plugin extracts users from an authentication module without enforcing login\n\n\n\n### Default configuration\n\n```json\n{\n \"auth_module\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgBiscuitExtractor }\n\n## Apikey from Biscuit token extractor\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgBiscuitExtractor`\n\n### Description\n\nThis plugin extracts an apikey from a Biscuit token where the biscuit has an #authority fact 'client_id' containing\nthe apikey client_id and an #authority fact 'client_sign' that is the HMAC256 signature of the apikey client_id with the apikey client_secret\n\n\n\n### Default configuration\n\n```json\n{\n \"public_key\" : null,\n \"checks\" : [ ],\n \"facts\" : [ ],\n \"resources\" : [ ],\n \"rules\" : [ ],\n \"revocation_ids\" : [ ],\n \"extractor\" : {\n \"name\" : \"Authorization\",\n \"type\" : \"header\"\n },\n \"enforce\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgBiscuitValidator }\n\n## Biscuit token validator\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgBiscuitValidator`\n\n### Description\n\nThis plugin validates a Biscuit token\n\n\n\n### Default configuration\n\n```json\n{\n \"public_key\" : null,\n \"checks\" : [ ],\n \"facts\" : [ ],\n \"resources\" : [ ],\n \"rules\" : [ ],\n \"revocation_ids\" : [ ],\n \"extractor\" : {\n \"name\" : \"Authorization\",\n \"type\" : \"header\"\n },\n \"enforce\" : 
false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgCertificateAsApikey }\n\n## Client certificate as apikey\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgCertificateAsApikey`\n\n### Description\n\nThis plugin uses the client certificate as an apikey. The apikey will be stored for classic apikey usage\n\n\n\n### Default configuration\n\n```json\n{\n \"read_only\" : false,\n \"allow_client_id_only\" : false,\n \"throttling_quota\" : 100,\n \"daily_quota\" : 10000000,\n \"monthly_quota\" : 10000000,\n \"constrained_services_only\" : false,\n \"tags\" : [ ],\n \"metadata\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgClientCertChainHeader }\n\n## Client certificate header\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgClientCertChainHeader`\n\n### Description\n\nThis plugin passes client certificate information to the target in headers\n\n\n\n### Default configuration\n\n```json\n{\n \"send_pem\" : false,\n \"pem_header_name\" : \"X-Client-Cert-Pem\",\n \"send_dns\" : false,\n \"dns_header_name\" : \"X-Client-Cert-DNs\",\n \"send_chain\" : false,\n \"chain_header_name\" : \"X-Client-Cert-Chain\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgClientCredentials }\n\n## Client Credential Service\n\n### Defined on steps\n\n - `Sink`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgClientCredentials`\n\n### Description\n\nThis plugin adds an oauth client credentials service (`https://unhandleddomain/.well-known/otoroshi/oauth/token`) to create an access_token given a client id and secret\n\n\n\n### Default configuration\n\n```json\n{\n \"expiration\" : 3600000,\n \"default_key_pair\" : \"otoroshi-jwt-signing\",\n \"domain\" : \"*\",\n \"secure\" : true,\n \"biscuit\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { 
.ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgDefaultRequestBody }\n\n## Default request body\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDefaultRequestBody`\n\n### Description\n\nThis plugin adds a default request body if none is specified\n\n\n\n### Default configuration\n\n```json\n{\n \"bodyBinary\" : \"\",\n \"contentType\" : \"text/plain\",\n \"contentEncoding\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgDeferPlugin }\n\n## Defer Responses\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDeferPlugin`\n\n### Description\n\nThis plugin will expect an `X-Defer` header or a `defer` query param and defer the response according to the value in milliseconds.\nThis plugin is some kind of inside joke, as one of our customers asked us to make slower apis.\n\n\n\n### Default configuration\n\n```json\n{\n \"duration\" : 0\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgDiscoverySelfRegistrationSink }\n\n## Global self registration endpoints (service discovery)\n\n### Defined on steps\n\n - `Sink`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDiscoverySelfRegistrationSink`\n\n### Description\n\nThis plugin adds support for a self registration endpoint on specific hostnames\n\n\n\n### Default configuration\n\n```json\n{ }\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgDiscoverySelfRegistrationTransformer }\n\n## Self registration endpoints (service discovery)\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDiscoverySelfRegistrationTransformer`\n\n### Description\n\nThis plugin adds support for a self registration endpoint on a specific service\n\n\n\n### Default configuration\n\n```json\n{ }\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl 
#otoroshi.next.plugins.NgDiscoveryTargetsSelector }\n\n## Service discovery target selector (service discovery)\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgDiscoveryTargetsSelector`\n\n### Description\n\nThis plugin selects a target in the pool of discovered targets for this service.\nUse in combination with either `DiscoverySelfRegistrationSink` or `DiscoverySelfRegistrationTransformer` to make it work using the `self registration` pattern.\nOr use an implementation of `DiscoveryJob` for the `third party registration pattern`.\n\n\n\n### Default configuration\n\n```json\n{ }\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgErrorRewriter }\n\n## Error response rewrite\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgErrorRewriter`\n\n### Description\n\nThis plugin catches http responses with specific statuses and rewrites the response\n\n\n\n### Default configuration\n\n```json\n{\n \"ranges\" : [ {\n \"from\" : 500,\n \"to\" : 599\n } ],\n \"templates\" : {\n \"default\" : \"\\n \\n

An error occurred with id: ${error_id}\\n\\nplease contact your administrator with this error id!
\\n \\n\"\n },\n \"log\" : true,\n \"export\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgGeolocationInfoEndpoint }\n\n## Geolocation endpoint\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgGeolocationInfoEndpoint`\n\n### Description\n\nThis plugin will expose current geolocation informations on the following endpoint `/.well-known/otoroshi/plugins/geolocation`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgGeolocationInfoHeader }\n\n## Geolocation header\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgGeolocationInfoHeader`\n\n### Description\n\nThis plugin will send informations extracted by the Geolocation details extractor to the target service in a header.\n\n\n\n### Default configuration\n\n```json\n{\n \"header_name\" : \"X-User-Agent-Info\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasAllowedUsersValidator }\n\n## Allowed users only\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasAllowedUsersValidator`\n\n### Description\n\nThis plugin only let allowed users pass\n\n\n\n### Default configuration\n\n```json\n{\n \"usernames\" : [ ],\n \"emails\" : [ ],\n \"email_domains\" : [ ],\n \"metadata_match\" : [ ],\n \"metadata_not_match\" : [ ],\n \"profile_match\" : [ ],\n \"profile_not_match\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasClientCertMatchingApikeyValidator }\n\n## Client Certificate + Api Key only\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasClientCertMatchingApikeyValidator`\n\n### Description\n\nCheck if a client certificate is present in the request and that the apikey used matches the client certificate.\nYou can set 
the client cert. DN in an apikey metadata named `allowed-client-cert-dn`\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasClientCertMatchingHttpValidator }\n\n## Client certificate matching (over http)\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasClientCertMatchingHttpValidator`\n\n### Description\n\nCheck if the client certificate matches the following values, fetched from an http endpoint\n\n\n\n### Default configuration\n\n```json\n{\n \"serial_numbers\" : [ ],\n \"subject_dns\" : [ ],\n \"issuer_dns\" : [ ],\n \"regex_subject_dns\" : [ ],\n \"regex_issuer_dns\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasClientCertMatchingValidator }\n\n## Client certificate matching\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasClientCertMatchingValidator`\n\n### Description\n\nCheck if the client certificate matches the following configuration\n\n\n\n### Default configuration\n\n```json\n{\n \"serial_numbers\" : [ ],\n \"subject_dns\" : [ ],\n \"issuer_dns\" : [ ],\n \"regex_subject_dns\" : [ ],\n \"regex_issuer_dns\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHasClientCertValidator }\n\n## Client Certificate Only\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHasClientCertValidator`\n\n### Description\n\nCheck if a client certificate is present in the request\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHtmlPatcher }\n\n## Html Patcher\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHtmlPatcher`\n\n### Description\n\nThis plugin can inject elements in html pages (in the body or in the head) returned by the service\n\n\n\n### Default configuration\n\n```json\n{\n 
\"append_head\" : [ ],\n \"append_body\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgHttpClientCache }\n\n## HTTP Client Cache\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgHttpClientCache`\n\n### Description\n\nThis plugin add cache headers to responses\n\n\n\n### Default configuration\n\n```json\n{\n \"max_age_seconds\" : 86400,\n \"methods\" : [ \"GET\" ],\n \"status\" : [ 200 ],\n \"mime_types\" : [ \"text/html\" ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgIpStackGeolocationInfoExtractor }\n\n## Geolocation details extractor (using IpStack api)\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgIpStackGeolocationInfoExtractor`\n\n### Description\n\nThis plugin extract geolocation informations from ip address using the [IpStack dbs](https://ipstack.com/).\nThe informations are store in plugins attrs for other plugins to use\n\n\n\n### Default configuration\n\n```json\n{\n \"apikey\" : null,\n \"timeout\" : 2000,\n \"log\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgIzanamiV1Canary }\n\n## Izanami V1 Canary Campaign\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgIzanamiV1Canary`\n\n### Description\n\nThis plugin allow you to perform canary testing based on an izanami experiment campaign (A/B test)\n\n\n\n### Default configuration\n\n```json\n{\n \"experiment_id\" : \"foo:bar:qix\",\n \"config_id\" : \"foo:bar:qix:config\",\n \"izanami_url\" : \"https://izanami.foo.bar\",\n \"tls\" : {\n \"certs\" : [ ],\n \"trusted_certs\" : [ ],\n \"enabled\" : false,\n \"loose\" : false,\n \"trust_all\" : false\n },\n \"client_id\" : \"client\",\n \"client_secret\" : \"secret\",\n \"timeout\" : 5000,\n \"route_config\" : 
null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgIzanamiV1Proxy }\n\n## Izanami v1 APIs Proxy\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgIzanamiV1Proxy`\n\n### Description\n\nThis plugin exposes routes to proxy Izanami configuration and features tree APIs\n\n\n\n### Default configuration\n\n```json\n{\n \"path\" : \"/api/izanami\",\n \"feature_pattern\" : \"*\",\n \"config_pattern\" : \"*\",\n \"auto_context\" : false,\n \"features_enabled\" : true,\n \"features_with_context_enabled\" : true,\n \"configuration_enabled\" : false,\n \"tls\" : {\n \"certs\" : [ ],\n \"trusted_certs\" : [ ],\n \"enabled\" : false,\n \"loose\" : false,\n \"trust_all\" : false\n },\n \"izanami_url\" : \"https://izanami.foo.bar\",\n \"client_id\" : \"client\",\n \"client_secret\" : \"secret\",\n \"timeout\" : 500\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgJwtUserExtractor }\n\n## Jwt user extractor\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgJwtUserExtractor`\n\n### Description\n\nThis plugin extracts a user from a JWT token\n\n\n\n### Default configuration\n\n```json\n{\n \"verifier\" : \"none\",\n \"strict\" : true,\n \"strip\" : false,\n \"name_path\" : null,\n \"email_path\" : null,\n \"meta_path\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgLegacyApikeyCall }\n\n## Legacy apikeys\n\n### Defined on steps\n\n - `MatchRoute`\n - `ValidateAccess`\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgLegacyApikeyCall`\n\n### Description\n\nThis plugin expects to find an apikey to allow the request to pass. 
This plugin behaves exactly like the service descriptor does\n\n\n\n### Default configuration\n\n```json\n{\n \"public_patterns\" : [ ],\n \"private_patterns\" : [ ],\n \"extractors\" : {\n \"basic\" : {\n \"enabled\" : true,\n \"header_name\" : null,\n \"query_name\" : null\n },\n \"custom_headers\" : {\n \"enabled\" : true,\n \"client_id_header_name\" : null,\n \"client_secret_header_name\" : null\n },\n \"client_id\" : {\n \"enabled\" : true,\n \"header_name\" : null,\n \"query_name\" : null\n },\n \"jwt\" : {\n \"enabled\" : true,\n \"secret_signed\" : true,\n \"keypair_signed\" : true,\n \"include_request_attrs\" : false,\n \"max_jwt_lifespan_sec\" : null,\n \"header_name\" : null,\n \"query_name\" : null,\n \"cookie_name\" : null\n }\n },\n \"routing\" : {\n \"enabled\" : false\n },\n \"validate\" : true,\n \"mandatory\" : true,\n \"pass_with_user\" : false,\n \"wipe_backend_request\" : true,\n \"update_quotas\" : true\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgLegacyAuthModuleCall }\n\n## Legacy Authentication\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgLegacyAuthModuleCall`\n\n### Description\n\nThis plugin applies an authentication module the same way the service descriptor does\n\n\n\n### Default configuration\n\n```json\n{\n \"public_patterns\" : [ ],\n \"private_patterns\" : [ ],\n \"pass_with_apikey\" : false,\n \"auth_module\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgLog4ShellFilter }\n\n## Log4Shell mitigation plugin\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgLog4ShellFilter`\n\n### Description\n\nThis plugin tries to detect Log4Shell attacks in requests and blocks them\n\n\n\n### Default configuration\n\n```json\n{\n \"status\" : 200,\n \"body\" : \"\",\n \"parse_body\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin 
.plugin-hidden .pl #otoroshi.next.plugins.NgMaxMindGeolocationInfoExtractor }\n\n## Geolocation details extractor (using Maxmind db)\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgMaxMindGeolocationInfoExtractor`\n\n### Description\n\nThis plugin extracts geolocation information from the ip address using the [Maxmind dbs](https://www.maxmind.com/en/geoip2-databases).\nThe information is stored in the plugin attrs for other plugins to use\n\n\n\n### Default configuration\n\n```json\n{\n \"path\" : \"global\",\n \"log\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgResponseCache }\n\n## Response Cache\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgResponseCache`\n\n### Description\n\nThis plugin can cache responses from target services in the otoroshi datastore.\nIt also provides a debug UI at `/.well-known/otoroshi/bodylogger`.\n\n\n\n### Default configuration\n\n```json\n{\n \"ttl\" : 3600000,\n \"maxSize\" : 52428800,\n \"autoClean\" : true,\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgSecurityTxt }\n\n## Security Txt\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgSecurityTxt`\n\n### Description\n\nThis plugin exposes a special route `/.well-known/security.txt` as proposed at [https://securitytxt.org/](https://securitytxt.org/)\n\n\n\n### Default configuration\n\n```json\n{\n \"contact\" : \"contact@foo.bar\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgServiceQuotas }\n\n## Public quotas\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgServiceQuotas`\n\n### Description\n\nThis plugin will enforce public quotas on the current route\n\n\n\n### Default 
configuration\n\n```json\n{\n \"throttling_quota\" : 10000000,\n \"daily_quota\" : 10000000,\n \"monthly_quota\" : 10000000\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgTrafficMirroring }\n\n## Traffic Mirroring\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgTrafficMirroring`\n\n### Description\n\nThis plugin will mirror every request to other targets\n\n\n\n### Default configuration\n\n```json\n{\n \"to\" : \"https://foo.bar.dev\",\n \"enabled\" : true,\n \"capture_response\" : false,\n \"generate_events\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgUserAgentExtractor }\n\n## User-Agent details extractor\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgUserAgentExtractor`\n\n### Description\n\nThis plugin extracts information from the User-Agent header, such as browser version, OS version, etc.\nThe information is stored in the plugin attrs for other plugins to use\n\n\n\n### Default configuration\n\n```json\n{\n \"log\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgUserAgentInfoEndpoint }\n\n## User-Agent endpoint\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgUserAgentInfoEndpoint`\n\n### Description\n\nThis plugin will expose current user-agent information on the following endpoint: /.well-known/otoroshi/plugins/user-agent\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.NgUserAgentInfoHeader }\n\n## User-Agent header\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.NgUserAgentInfoHeader`\n\n### Description\n\nThis plugin will send information extracted by the User-Agent details extractor to the target service in a header\n\n\n\n### 
Default configuration\n\n```json\n{\n \"header_name\" : \"X-User-Agent-Info\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OAuth1Caller }\n\n## OAuth1 caller\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OAuth1Caller`\n\n### Description\n\nThis plugin can be used to call apis that are authenticated using OAuth1.\n The consumer key, consumer secret, OAuth token and OAuth token secret can be passed through the metadata of an api key\n or via the configuration of this plugin.\n\n\n\n### Default configuration\n\n```json\n{\n \"consumerKey\" : null,\n \"consumerSecret\" : null,\n \"token\" : null,\n \"tokenSecret\" : null,\n \"algo\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OAuth2Caller }\n\n## OAuth2 caller\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OAuth2Caller`\n\n### Description\n\nThis plugin can be used to call apis that are authenticated using the OAuth2 client_credentials/password flow.\nDo not forget to enable client retry to handle token regeneration on expiry.\n\n\n\n### Default configuration\n\n```json\n{\n \"kind\" : \"client_credentials\",\n \"url\" : \"https://127.0.0.1:8080/oauth/token\",\n \"method\" : \"POST\",\n \"headerName\" : \"Authorization\",\n \"headerValueFormat\" : \"Bearer %s\",\n \"jsonPayload\" : false,\n \"clientId\" : \"the client_id\",\n \"clientSecret\" : \"the client_secret\",\n \"scope\" : null,\n \"audience\" : null,\n \"user\" : null,\n \"password\" : null,\n \"cacheTokenSeconds\" : 600000,\n \"tlsConfig\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OIDCAccessTokenAsApikey }\n\n## OIDC access_token as apikey\n\n### Defined on steps\n\n - `PreRoute`\n\n### 
Plugin reference\n\n`cp:otoroshi.next.plugins.OIDCAccessTokenAsApikey`\n\n### Description\n\nThis plugin will use the third party apikey configuration to generate an apikey\n\n\n\n### Default configuration\n\n```json\n{\n \"enabled\" : true,\n \"atLeastOne\" : false,\n \"config\" : {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OIDCAccessTokenValidator }\n\n## OIDC access_token validator\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OIDCAccessTokenValidator`\n\n### Description\n\nThis plugin will use the third party apikey configuration and apply it while keeping the apikey mechanism of otoroshi.\nUse it to combine apikey validation and OIDC access_token validation.\n\n\n\n### Default configuration\n\n```json\n{\n \"enabled\" : true,\n \"atLeastOne\" : false,\n \"config\" : {\n \"enabled\" : true,\n \"quotasEnabled\" : true,\n \"uniqueApiKey\" : false,\n \"type\" : \"OIDC\",\n \"oidcConfigRef\" : \"some-oidc-auth-module-id\",\n \"localVerificationOnly\" : false,\n \"mode\" : \"Tmp\",\n \"ttl\" : 0,\n \"headerName\" : \"Authorization\",\n \"throttlingQuota\" : 100,\n \"dailyQuota\" : 10000000,\n \"monthlyQuota\" : 10000000,\n \"excludedPatterns\" : [ ],\n \"scopes\" : [ ],\n \"rolesPath\" : [ ],\n \"roles\" : [ ]\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OIDCHeaders }\n\n## OIDC headers\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin 
reference\n\n`cp:otoroshi.next.plugins.OIDCHeaders`\n\n### Description\n\nThis plugin injects headers containing tokens and profile from current OIDC provider.\n\n\n\n### Default configuration\n\n```json\n{\n \"profile\" : {\n \"send\" : false,\n \"headerName\" : \"X-OIDC-User\"\n },\n \"idToken\" : {\n \"send\" : false,\n \"name\" : \"id_token\",\n \"headerName\" : \"X-OIDC-Id-Token\",\n \"jwt\" : true\n },\n \"accessToken\" : {\n \"send\" : false,\n \"name\" : \"access_token\",\n \"headerName\" : \"X-OIDC-Access-Token\",\n \"jwt\" : true\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OtoroshiChallenge }\n\n## Otoroshi challenge token\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OtoroshiChallenge`\n\n### Description\n\nThis plugin adds a jwt challenge token to the request to a backend and expects a response with a matching token\n\n\n\n### Default configuration\n\n```json\n{\n \"version\" : \"V2\",\n \"ttl\" : 30,\n \"request_header_name\" : null,\n \"response_header_name\" : null,\n \"algo_to_backend\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"algo_from_backend\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n },\n \"state_resp_leeway\" : 10\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OtoroshiHeadersIn }\n\n## Otoroshi headers in\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OtoroshiHeadersIn`\n\n### Description\n\nThis plugin adds Otoroshi specific headers to the request\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OtoroshiInfos }\n\n## Otoroshi info. 
token\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OtoroshiInfos`\n\n### Description\n\nThis plugin adds a jwt token with information about the caller to the backend\n\n\n\n### Default configuration\n\n```json\n{\n \"version\" : \"Latest\",\n \"ttl\" : 30,\n \"header_name\" : null,\n \"add_fields\" : null,\n \"algo\" : {\n \"type\" : \"HSAlgoSettings\",\n \"size\" : 512,\n \"secret\" : \"secret\",\n \"base64\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.OverrideHost }\n\n## Override host header\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.OverrideHost`\n\n### Description\n\nThis plugin overrides the current Host header with the Host of the backend target\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.PublicPrivatePaths }\n\n## Public/Private paths\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.PublicPrivatePaths`\n\n### Description\n\nThis plugin allows or forbids requests based on path patterns\n\n\n\n### Default configuration\n\n```json\n{\n \"strict\" : false,\n \"private_patterns\" : [ ],\n \"public_patterns\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.QueryTransformer }\n\n## Query param transformer\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.QueryTransformer`\n\n### Description\n\nThis plugin can modify the query params of the request\n\n\n\n### Default configuration\n\n```json\n{\n \"remove\" : [ ],\n \"rename\" : { },\n \"add\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.RBAC }\n\n## RBAC\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.RBAC`\n\n### Description\n\nThis plugin checks if 
current user/apikey/jwt token has the right role\n\n\n\n### Default configuration\n\n```json\n{\n \"allow\" : [ ],\n \"deny\" : [ ],\n \"allow_all\" : false,\n \"deny_all\" : false,\n \"jwt_path\" : null,\n \"apikey_path\" : null,\n \"user_path\" : null,\n \"role_prefix\" : null,\n \"roles\" : \"roles\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.ReadOnlyCalls }\n\n## Read only requests\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.ReadOnlyCalls`\n\n### Description\n\nThis plugin verifies the current request only reads data\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.Redirection }\n\n## Redirection\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.Redirection`\n\n### Description\n\nThis plugin redirects the current request elsewhere\n\n\n\n### Default configuration\n\n```json\n{\n \"code\" : 303,\n \"to\" : \"https://www.otoroshi.io\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.RemoveHeadersIn }\n\n## Remove headers in\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.RemoveHeadersIn`\n\n### Description\n\nThis plugin removes headers in the incoming otoroshi request\n\n\n\n### Default configuration\n\n```json\n{\n \"header_names\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.RemoveHeadersOut }\n\n## Remove headers out\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.RemoveHeadersOut`\n\n### Description\n\nThis plugin removes headers in the otoroshi response\n\n\n\n### Default configuration\n\n```json\n{\n \"header_names\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.Robots }\n\n## Robots\n\n### Defined on steps\n\n - 
`TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.Robots`\n\n### Description\n\nThis plugin provides all the necessary tools to handle search engine robots\n\n\n\n### Default configuration\n\n```json\n{\n \"robot_txt_enabled\" : true,\n \"robot_txt_content\" : \"User-agent: *\\nDisallow: /\\n\",\n \"meta_enabled\" : true,\n \"meta_content\" : \"noindex,nofollow,noarchive\",\n \"header_enabled\" : true,\n \"header_content\" : \"noindex, nofollow, noarchive\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.RoutingRestrictions }\n\n## Routing Restrictions\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.RoutingRestrictions`\n\n### Description\n\nThis plugin applies routing restrictions `method domain/path` on the current request/route\n\n\n\n### Default configuration\n\n```json\n{\n \"allow_last\" : true,\n \"allowed\" : [ ],\n \"forbidden\" : [ ],\n \"not_found\" : [ ]\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.S3Backend }\n\n## S3 Static backend\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.S3Backend`\n\n### Description\n\nThis plugin is able to serve an S3 bucket with file content\n\n\n\n### Default configuration\n\n```json\n{\n \"bucket\" : \"\",\n \"endpoint\" : \"\",\n \"region\" : \"eu-west-1\",\n \"access\" : \"client\",\n \"secret\" : \"secret\",\n \"key\" : \"\",\n \"chunkSize\" : 8388608,\n \"v4auth\" : true,\n \"writeEvery\" : 60000,\n \"acl\" : \"private\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.SOAPAction }\n\n## SOAP action\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.SOAPAction`\n\n### Description\n\nThis plugin is able to call a SOAP action and expose it as a rest endpoint\n\n\n\n### Default configuration\n\n```json\n{\n \"url\" 
: null,\n \"envelope\" : \"\",\n \"action\" : null,\n \"preserve_query\" : true,\n \"charset\" : null,\n \"jq_request_filter\" : null,\n \"jq_response_filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.SendOtoroshiHeadersBack }\n\n## Send otoroshi headers back\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.SendOtoroshiHeadersBack`\n\n### Description\n\nThis plugin adds response header containing useful informations about the current call\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.SnowMonkeyChaos }\n\n## Snow Monkey Chaos\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.SnowMonkeyChaos`\n\n### Description\n\nThis plugin introduce some chaos into you life\n\n\n\n### Default configuration\n\n```json\n{\n \"large_request_fault\" : null,\n \"large_response_fault\" : null,\n \"latency_injection_fault\" : null,\n \"bad_responses_fault\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.StaticBackend }\n\n## Static backend\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.StaticBackend`\n\n### Description\n\nThis plugin is able to serve a static folder with file content\n\n\n\n### Default configuration\n\n```json\n{\n \"root_path\" : \"/tmp\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.StaticResponse }\n\n## Static Response\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.StaticResponse`\n\n### Description\n\nThis plugin returns static responses\n\n\n\n### Default configuration\n\n```json\n{\n \"status\" : 200,\n \"headers\" : { },\n \"body\" : \"\"\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl 
#otoroshi.next.plugins.TailscaleSelectTargetByName }\n\n## Tailscale select target by name\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.TailscaleSelectTargetByName`\n\n### Description\n\nThis plugin selects a machine instance on Tailscale network based on its name\n\n\n\n### Default configuration\n\n```json\n{\n \"machine_name\" : \"my-machine\",\n \"use_ip_address\" : false\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.TcpTunnel }\n\n## TCP Tunnel\n\n### Defined on steps\n\n - `HandlesTunnel`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.TcpTunnel`\n\n### Description\n\nThis plugin creates TCP tunnels through otoroshi\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.UdpTunnel }\n\n## UDP Tunnel\n\n### Defined on steps\n\n - `HandlesTunnel`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.UdpTunnel`\n\n### Description\n\nThis plugin creates UDP tunnels through otoroshi\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.W3CTracing }\n\n## W3C Trace Context\n\n### Defined on steps\n\n - `TransformRequest`\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.W3CTracing`\n\n### Description\n\nThis plugin propagates W3C Trace Context spans and can export it to Jaeger or Zipkin\n\n\n\n### Default configuration\n\n```json\n{\n \"kind\" : \"noop\",\n \"endpoint\" : \"http://localhost:3333/spans\",\n \"timeout\" : 30000,\n \"baggage\" : { }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmAccessValidator }\n\n## Wasm Access control\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmAccessValidator`\n\n### Description\n\nDelegate route access to a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n 
\"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmBackend }\n\n## Wasm Backend\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmBackend`\n\n### Description\n\nThis plugin can be used to use a wasm plugin as backend\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmOPA }\n\n## Open Policy Agent (OPA)\n\n### Defined on steps\n\n - `ValidateAccess`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmOPA`\n\n### 
Description\n\nRepo policies as WASM modules\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : true,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmPreRoute }\n\n## Wasm pre-route\n\n### Defined on steps\n\n - `PreRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmPreRoute`\n\n### Description\n\nThis plugin can be used to use a wasm plugin as in pre-route phase\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl 
#otoroshi.next.plugins.WasmRequestTransformer }\n\n## Wasm Request Transformer\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmRequestTransformer`\n\n### Description\n\nTransform the content of the request with a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmResponseTransformer }\n\n## Wasm Response Transformer\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmResponseTransformer`\n\n### Description\n\nTransform the content of a response with a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : 
{\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmRouteMatcher }\n\n## Wasm Route Matcher\n\n### Defined on steps\n\n - `MatchRoute`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmRouteMatcher`\n\n### Description\n\nThis plugin can be used to use a wasm plugin as route matcher\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmRouter }\n\n## Wasm Router\n\n### Defined on steps\n\n - `Router`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmRouter`\n\n### Description\n\nCan decide for routing with a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n 
\"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.WasmSink }\n\n## Wasm Sink\n\n### Defined on steps\n\n - `Sink`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.WasmSink`\n\n### Description\n\nHandle unmatched requests with a wasm plugin\n\n\n\n### Default configuration\n\n```json\n{\n \"source\" : {\n \"kind\" : \"Unknown\",\n \"path\" : \"\",\n \"opts\" : { }\n },\n \"memoryPages\" : 4,\n \"functionName\" : null,\n \"config\" : { },\n \"allowedHosts\" : [ ],\n \"allowedPaths\" : { },\n \"wasi\" : false,\n \"opa\" : false,\n \"lifetime\" : \"Forever\",\n \"authorizations\" : {\n \"httpAccess\" : false,\n \"proxyHttpCallTimeout\" : 5000,\n \"globalDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginDataStoreAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"globalMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"pluginMapAccess\" : {\n \"read\" : false,\n \"write\" : false\n },\n \"proxyStateAccess\" : false,\n \"configurationAccess\" : false\n }\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.XForwardedHeaders }\n\n## X-Forwarded-* headers\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.XForwardedHeaders`\n\n### Description\n\nThis plugin adds all the X-Forwarded-* headers to the request for the backend target\n\n\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.XmlToJsonRequest }\n\n## request body xml-to-json\n\n### Defined on steps\n\n - `TransformRequest`\n\n### Plugin 
reference\n\n`cp:otoroshi.next.plugins.XmlToJsonRequest`\n\n### Description\n\nThis plugin transforms the incoming request body from xml to json and may apply a jq transformation\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.plugins.XmlToJsonResponse }\n\n## response body xml-to-json\n\n### Defined on steps\n\n - `TransformResponse`\n\n### Plugin reference\n\n`cp:otoroshi.next.plugins.XmlToJsonResponse`\n\n### Description\n\nThis plugin transforms the response body from xml to json and may apply a jq transformation\n\n\n\n### Default configuration\n\n```json\n{\n \"filter\" : null\n}\n```\n\n\n\n\n\n@@@\n\n\n@@@ div { .ng-plugin .plugin-hidden .pl #otoroshi.next.tunnel.TunnelPlugin }\n\n## Remote tunnel calls\n\n### Defined on steps\n\n - `CallBackend`\n\n### Plugin reference\n\n`cp:otoroshi.next.tunnel.TunnelPlugin`\n\n### Description\n\nThis plugin can contact remote services using tunnels\n\n\n\n### Default configuration\n\n```json\n{\n \"tunnel_id\" : \"default\"\n}\n```\n\n\n\n\n\n@@@\n\n\n\n\n"},{"name":"create-plugins.md","id":"/plugins/create-plugins.md","url":"/plugins/create-plugins.html","title":"Create plugins","content":"# Create plugins\n\n@@@ warning\nThis section is under rewrite. The following content is deprecated\n@@@\n\nWhen everything has failed and you absolutely need a feature in Otoroshi to make your use case work, there is a solution. Plugins are the feature in Otoroshi that allows you to code how Otoroshi should behave when receiving, validating and routing an http request. 
With request plugins, you can change request/response headers and request/response bodies the way you want, provide your own apikey, etc.\n\n## Plugin types\n\nThere are many plugin types, explained @ref:[here](./plugins.md) \n\n## Code and signatures\n\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/requestsink.scala#L14-L19\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/routing.scala#L75-L78\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/accessvalidator.scala#L65-L85\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/script.scala#L269-L540\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/eventlistener.scala#L27-L48\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/job.scala#L69-L164\n* https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/script/job.scala#L108-L110\n\n\nFor more information about the APIs you can use, see:\n\n* https://www.playframework.com/documentation/2.8.x/api/scala/index.html#package\n* https://www.playframework.com/documentation/2.8.x/api/scala/index.html#play.api.mvc.Results\n* https://github.com/MAIF/otoroshi\n* https://doc.akka.io/docs/akka/2.5/stream/index.html\n* https://doc.akka.io/api/akka/current/akka/stream/index.html\n* https://doc.akka.io/api/akka/current/akka/stream/scaladsl/Source.html\n\n## Plugin examples\n\n@ref:[A lot of plugins](./built-in-plugins.md) come with otoroshi; you can find them on [github](https://github.com/MAIF/otoroshi/tree/master/otoroshi/app/plugins)\n\n## Writing a plugin from Otoroshi UI\n\nLog into Otoroshi and go to `Settings (cog icon) / Plugins`. 
Here you can create multiple request transformer scripts and associate them with service descriptors later.\n\n@@@ div { .centered-img }\n\n@@@\n\nWhen you write, for instance, a transformer in the Otoroshi UI, do the following\n\n```scala\nimport akka.stream.Materializer\nimport env.Env\nimport models.{ApiKey, PrivateAppsUser, ServiceDescriptor}\nimport otoroshi.script._\nimport play.api.Logger\nimport play.api.mvc.{Result, Results}\nimport scala.util._\nimport scala.concurrent.{ExecutionContext, Future}\n\nclass MyTransformer extends RequestTransformer {\n\n val logger = Logger(\"my-transformer\")\n\n // implement the methods you want\n}\n\n// WARN: do not forget this line to provide a working instance of your transformer to Otoroshi\nnew MyTransformer()\n```\n\nYou can use the compile button to check if the script compiles, or code the transformer in your IDE (see next point).\n\nThen go to a service descriptor, scroll to the bottom of the page, and select your transformer in the list\n\n@@@ div { .centered-img }\n\n@@@\n\n## Providing a transformer from Java classpath\n\nYou can write your own transformer using your favorite IDE. Just create an SBT project with the following dependencies. It can be quite handy to manage the source code like any other piece of code, and it avoids the compilation time for the script at Otoroshi startup.\n\n```scala\nlazy val root = (project in file(\".\")).\n settings(\n inThisBuild(List(\n organization := \"com.example\",\n scalaVersion := \"2.12.7\",\n version := \"0.1.0-SNAPSHOT\"\n )),\n name := \"request-transformer-example\",\n libraryDependencies += \"fr.maif\" %% \"otoroshi\" % \"1.x.x\"\n )\n```\n\n@@@ warning\nyou MUST provide plugins that lie in the `otoroshi_plugins` package or in a sub-package of `otoroshi_plugins`. If you do not, your plugin will not be found by otoroshi. 
for example\n\n```scala\npackage otoroshi_plugins.com.my.company.myplugin\n```\n\nAlso, you don't have to instantiate your plugin at the end of the file like in the Otoroshi UI\n@@@\n\nWhen your code is ready, create a jar file \n\n```\nsbt package\n```\n\nand add the jar file to the Otoroshi classpath\n\n```sh\njava -cp \"/path/to/transformer.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nThen, in your service descriptor, you can choose your transformer in the list. If you want to do it from the API, you have to define the `transformerRef` using the `cp:` prefix like \n\n```json\n{\n \"transformerRef\": \"cp:otoroshi_plugins.my.class.package.MyTransformer\"\n}\n```\n\n## Getting custom configuration from the Otoroshi config. file\n\nLet's say you need to provide custom configuration values for a script; you can then customize a configuration file of Otoroshi\n\n```hocon\ninclude \"application.conf\"\n\notoroshi {\n scripts {\n enabled = true\n }\n}\n\nmy-transformer {\n env = \"prod\"\n maxRequestBodySize = 2048\n maxResponseBodySize = 2048\n}\n```\n\nthen start Otoroshi like\n\n```sh\njava -Dconfig.file=/path/to/custom.conf -jar otoroshi.jar\n```\n\nthen, in your transformer, you can write something like \n\n```scala\npackage otoroshi_plugins.com.example.otoroshi\n\nimport akka.stream.Materializer\nimport akka.stream.scaladsl._\nimport akka.util.ByteString\nimport env.Env\nimport models.{ApiKey, PrivateAppsUser, ServiceDescriptor}\nimport otoroshi.script._\nimport play.api.Logger\nimport play.api.mvc.{Result, Results}\nimport scala.util._\nimport scala.concurrent.{ExecutionContext, Future}\n\nclass BodyLengthLimiter extends RequestTransformer {\n\n override def transformResponseWithCtx(ctx: TransformerResponseContext)(implicit env: Env, ec: ExecutionContext, mat: Materializer): Source[ByteString, _] = {\n val max = env.configuration.getOptional[Long](\"my-transformer.maxResponseBodySize\").getOrElse(Long.MaxValue)\n 
ctx.body.limitWeighted(max)(_.size)\n }\n\n override def transformRequestWithCtx(ctx: TransformerRequestContext)(implicit env: Env, ec: ExecutionContext, mat: Materializer): Source[ByteString, _] = {\n val max = env.configuration.getOptional[Long](\"my-transformer.maxRequestBodySize\").getOrElse(Long.MaxValue)\n ctx.body.limitWeighted(max)(_.size)\n }\n}\n```\n\n## Using a library that is not embedded in Otoroshi\n\nJust use the `classpath` option when running Otoroshi\n\n```sh\njava -cp \"/path/to/library.jar:/path/to/otoroshi.jar\" play.core.server.ProdServerStart\n```\n\nBe careful, as your library can conflict with other libraries used by Otoroshi and affect its stability\n\n## Enabling plugins\n\nPlugins can be enabled per service from the service settings page or globally from the danger zone in the plugins section.\n\n## Full example\n\nA full external plugin example can be found @link:[here](https://github.com/mathieuancelin/otoroshi-wasmer-plugin)\n"},{"name":"index.md","id":"/plugins/index.md","url":"/plugins/index.html","title":"Otoroshi plugins","content":"# Otoroshi plugins\n\nIn this section, you will find information about the Otoroshi plugins system\n\n* @ref:[Plugins system](./plugins.md)\n* @ref:[Create plugins](./create-plugins.md)\n* @ref:[Built in plugins](./built-in-plugins.md)\n* @ref:[Built in legacy plugins](./built-in-legacy-plugins.md)\n\n@@@ index\n\n* [Plugins system](./plugins.md)\n* [Create plugins](./create-plugins.md)\n* [Built in plugins](./built-in-plugins.md)\n* [Built in legacy plugins](./built-in-legacy-plugins.md)\n\n@@@"},{"name":"plugins.md","id":"/plugins/plugins.md","url":"/plugins/plugins.html","title":"Otoroshi plugins system","content":"# Otoroshi plugins system\n\nOtoroshi includes several extension points that allow you to create your own plugins and support stuff not supported by default\n\n## Available plugins\n\n@@@ div { .plugin .script }\n## Request Sink\n### Description\nUsed when no services are matched in 
Otoroshi. Can reply with any content.\n@@@\n\n@@@ div { .plugin .script }\n## Pre routing\n### Description\nUsed to extract values (like custom apikeys) and provide them to other plugins or the Otoroshi engine\n@@@\n\n@@@ div { .plugin .script }\n## Access Validator\n### Description\nUsed to validate if a request can pass or not based on whatever you want\n@@@\n\n@@@ div { .plugin .script }\n## Request Transformer\n### Description\nUsed to transform requests, responses and their bodies. Can be used to return arbitrary content\n@@@\n\n@@@ div { .plugin .script }\n## Event listener\n### Description\nAny plugin type can listen to Otoroshi internal events and react to them\n@@@\n\n@@@ div { .plugin .script }\n## Job\n### Description\nTasks that can run automatically once, be scheduled with a cron expression, or run at a defined interval\n@@@\n\n@@@ div { .plugin .script }\n## Exporter\n### Description\nUsed to export events and Otoroshi alerts to an external source\n@@@\n\n@@@ div { .plugin .script }\n## Request handler\n### Description\nUsed to handle traffic without passing through Otoroshi routing and apply its own rules\n@@@\n\n@@@ div { .plugin .script }\n## Nano app\n### Description\nUsed to write an api directly in Otoroshi in the Scala language\n@@@"},{"name":"anonymous-reporting.md","id":"/topics/anonymous-reporting.md","url":"/topics/anonymous-reporting.html","title":"Anonymous reporting","content":"# Anonymous reporting\n\nThe best way of supporting us in Otoroshi development is to enable Anonymous reporting.\n\n## Details\n\nWhen this feature is active, Otoroshi periodically sends anonymous information about its configuration.\n\nThis information helps us to know how Otoroshi is used; it's a precious hint to prioritise our roadmap.\n\nBelow is an example of what is sent by Otoroshi. 
You can find more information about these fields either on @ref:[entities documentation](../entities/index.md) or [by reading the source code](https://github.com/MAIF/otoroshi/blob/master/otoroshi/app/jobs/reporting.scala#L174-L458).\n\n```json\n{\n \"@timestamp\": 1679514537259,\n \"timestamp_str\": \"2023-03-22T20:48:57.259+01:00\",\n \"@id\": \"4edb54171-8156-4947-b821-41d6c2bd1ba7\",\n \"otoroshi_cluster_id\": \"1148aee35-a487-47b0-b494-a2a44862c618\",\n \"otoroshi_version\": \"16.0.0-dev\",\n \"java_version\": {\n \"version\": \"11.0.16.1\",\n \"vendor\": \"Eclipse Adoptium\"\n },\n \"os\": {\n \"name\": \"Mac OS X\",\n \"version\": \"13.1\",\n \"arch\": \"x86_64\"\n },\n \"datastore\": \"file\",\n \"env\": \"dev\",\n \"features\": {\n \"snow_monkey\": false,\n \"clever_cloud\": false,\n \"kubernetes\": false,\n \"elastic_read\": true,\n \"lets_encrypt\": false,\n \"auto_certs\": false,\n \"wasm_manager\": true,\n \"backoffice_login\": false\n },\n \"stats\": {\n \"calls\": 3823,\n \"data_in\": 480406,\n \"data_out\": 4698261,\n \"rate\": 0,\n \"duration\": 35.89899494949495,\n \"overhead\": 24.696984848484846,\n \"data_in_rate\": 0,\n \"data_out_rate\": 0,\n \"concurrent_requests\": 0\n },\n \"engine\": {\n \"uses_new\": true,\n \"uses_new_full\": false\n },\n \"cluster\": {\n \"mode\": \"Leader\",\n \"all_nodes\": 1,\n \"alive_nodes\": 1,\n \"leaders_count\": 1,\n \"workers_count\": 0,\n \"nodes\": [\n {\n \"id\": \"node_15ac62ec3-3e0d-48c1-a8ea-15de97088e3c\",\n \"os\": {\n \"name\": \"Mac OS X\",\n \"version\": \"13.1\",\n \"arch\": \"x86_64\"\n },\n \"java_version\": {\n \"version\": \"11.0.16.1\",\n \"vendor\": \"Eclipse Adoptium\"\n },\n \"version\": \"16.0.0-dev\",\n \"type\": \"Leader\",\n \"cpu_usage\": 10.992902320605205,\n \"load_average\": 44.38720703125,\n \"heap_used\": 527,\n \"heap_size\": 2048,\n \"relay\": true,\n \"tunnels\": 0\n }\n ]\n },\n \"entities\": {\n \"scripts\": {\n \"count\": 0,\n \"by_kind\": {}\n },\n \"routes\": {\n 
\"count\": 24,\n \"plugins\": {\n \"min\": 1,\n \"max\": 26,\n \"avg\": 4\n }\n },\n \"router_routes\": {\n \"count\": 27,\n \"http_clients\": {\n \"ahc\": 25,\n \"akka\": 2,\n \"netty\": 0,\n \"akka_ws\": 0\n },\n \"plugins\": {\n \"min\": 1,\n \"max\": 26,\n \"avg\": 4\n }\n },\n \"route_compositions\": {\n \"count\": 1,\n \"plugins\": {\n \"min\": 1,\n \"max\": 1,\n \"avg\": 1\n },\n \"by_kind\": {\n \"global\": 1\n }\n },\n \"apikeys\": {\n \"count\": 6,\n \"by_kind\": {\n \"disabled\": 0,\n \"with_rotation\": 0,\n \"with_read_only\": 0,\n \"with_client_id_only\": 0,\n \"with_constrained_services\": 0,\n \"with_meta\": 2,\n \"with_tags\": 1\n },\n \"authorized_on\": {\n \"min\": 1,\n \"max\": 4,\n \"avg\": 2\n }\n },\n \"jwt_verifiers\": {\n \"count\": 6,\n \"by_strategy\": {\n \"pass_through\": 6\n },\n \"by_alg\": {\n \"HSAlgoSettings\": 6\n }\n },\n \"certificates\": {\n \"count\": 9,\n \"by_kind\": {\n \"auto_renew\": 6,\n \"exposed\": 6,\n \"client\": 1,\n \"keypair\": 1\n }\n },\n \"auth_modules\": {\n \"count\": 8,\n \"by_kind\": {\n \"basic\": 7,\n \"oauth2\": 1\n }\n },\n \"service_descriptors\": {\n \"count\": 3,\n \"plugins\": {\n \"old\": 0,\n \"new\": 0\n },\n \"by_kind\": {\n \"disabled\": 1,\n \"fault_injection\": 0,\n \"health_check\": 1,\n \"gzip\": 0,\n \"jwt\": 0,\n \"cors\": 1,\n \"auth\": 0,\n \"protocol\": 0,\n \"restrictions\": 0\n }\n },\n \"teams\": {\n \"count\": 2\n },\n \"tenants\": {\n \"count\": 2\n },\n \"service_groups\": {\n \"count\": 2\n },\n \"data_exporters\": {\n \"count\": 10,\n \"by_kind\": {\n \"elastic\": 5,\n \"file\": 2,\n \"metrics\": 1,\n \"console\": 1,\n \"s3\": 1\n }\n },\n \"otoroshi_admins\": {\n \"count\": 5,\n \"by_kind\": {\n \"simple\": 2,\n \"webauthn\": 3\n }\n },\n \"backoffice_sessions\": {\n \"count\": 1,\n \"by_kind\": {\n \"simple\": 1\n }\n },\n \"private_apps_sessions\": {\n \"count\": 0,\n \"by_kind\": {}\n },\n \"tcp_services\": {\n \"count\": 0\n }\n },\n \"plugins_usage\": {\n 
\"cp:otoroshi.next.plugins.AdditionalHeadersOut\": 2,\n \"cp:otoroshi.next.plugins.DisableHttp10\": 2,\n \"cp:otoroshi.next.plugins.OverrideHost\": 27,\n \"cp:otoroshi.next.plugins.TailscaleFetchCertificate\": 1,\n \"cp:otoroshi.next.plugins.OtoroshiInfos\": 6,\n \"cp:otoroshi.next.plugins.MissingHeadersOut\": 2,\n \"cp:otoroshi.next.plugins.Redirection\": 2,\n \"cp:otoroshi.next.plugins.OtoroshiChallenge\": 5,\n \"cp:otoroshi.next.plugins.BuildMode\": 2,\n \"cp:otoroshi.next.plugins.XForwardedHeaders\": 2,\n \"cp:otoroshi.next.plugins.NgLegacyAuthModuleCall\": 2,\n \"cp:otoroshi.next.plugins.Cors\": 4,\n \"cp:otoroshi.next.plugins.OtoroshiHeadersIn\": 2,\n \"cp:otoroshi.next.plugins.NgDefaultRequestBody\": 1,\n \"cp:otoroshi.next.plugins.NgHttpClientCache\": 1,\n \"cp:otoroshi.next.plugins.ReadOnlyCalls\": 2,\n \"cp:otoroshi.next.plugins.RemoveHeadersIn\": 2,\n \"cp:otoroshi.next.plugins.JwtVerificationOnly\": 1,\n \"cp:otoroshi.next.plugins.ApikeyCalls\": 3,\n \"cp:otoroshi.next.plugins.WasmAccessValidator\": 3,\n \"cp:otoroshi.next.plugins.WasmBackend\": 3,\n \"cp:otoroshi.next.plugins.IpAddressAllowedList\": 2,\n \"cp:otoroshi.next.plugins.AuthModule\": 4,\n \"cp:otoroshi.next.plugins.RemoveHeadersOut\": 2,\n \"cp:otoroshi.next.plugins.IpAddressBlockList\": 2,\n \"cp:otoroshi.next.proxy.ProxyEngine\": 1,\n \"cp:otoroshi.next.plugins.JwtVerification\": 3,\n \"cp:otoroshi.next.plugins.GzipResponseCompressor\": 2,\n \"cp:otoroshi.next.plugins.SendOtoroshiHeadersBack\": 3,\n \"cp:otoroshi.next.plugins.AdditionalHeadersIn\": 4,\n \"cp:otoroshi.next.plugins.SOAPAction\": 1,\n \"cp:otoroshi.next.plugins.NgLegacyApikeyCall\": 6,\n \"cp:otoroshi.next.plugins.ForceHttpsTraffic\": 2,\n \"cp:otoroshi.next.plugins.NgErrorRewriter\": 1,\n \"cp:otoroshi.next.plugins.MissingHeadersIn\": 2,\n \"cp:otoroshi.next.plugins.MaintenanceMode\": 3,\n \"cp:otoroshi.next.plugins.RoutingRestrictions\": 2,\n \"cp:otoroshi.next.plugins.HeadersValidation\": 2\n }\n}\n```\n\n## 
Toggling\n\nAnonymous reporting can be toggled at any time using:\n\n- the UI (Features > Danger zone > Send anonymous reports)\n- `otoroshi.anonymous-reporting.enabled` configuration\n- `OTOROSHI_ANONYMOUS_REPORTING_ENABLED` env variable\n"},{"name":"chaos-engineering.md","id":"/topics/chaos-engineering.md","url":"/topics/chaos-engineering.html","title":"Chaos engineering with the Snow Monkey","content":"# Chaos engineering with the Snow Monkey\n\nNihonzaru (the Snow Monkey) is the chaos engineering tool provided by Otoroshi. You can access it at `Settings (cog icon) / Snow Monkey`.\n\n@@@ div { .centered-img }\n\n@@@\n\n## Chaos engineering\n\nOtoroshi offers some tools to introduce [chaos engineering](https://principlesofchaos.org/) in your everyday life. With chaos engineering, you will improve the resilience of your architecture by creating faults in production on running systems. With [Nihonzaru (the snow monkey)](https://en.wikipedia.org/wiki/Japanese_macaque) Otoroshi helps you to create faults on http requests/responses handled by Otoroshi. \n\n@@@ div { .centered-img }\n\n@@@\n\n## Settings\n\n@@@ div { .centered-img }\n\n@@@\n\nThe snow monkey lets you define a few settings to work properly:\n\n* **Include user facing apps.**: you want to create faults in production, but maybe you don't want your users to enjoy some nice snow monkey generated error pages. This switch lets you include user facing apps (ui apps) or not. Each service descriptor has a `User facing app switch` that will be used by the snow monkey.\n* **Dry run**: when dry run is enabled, outages will be registered and will generate events and alerts (in the otoroshi eventing system) but requests won't be actually impacted. It's a good way to prepare applications for the snow monkey's habits\n* **Outage strategy**: Either `AllServicesPerGroup` or `OneServicePerGroup`. 
It means that only one service per group or all services per group will have `n` outages (see next bullet point) during the snow monkey working period\n* **Outages per day**: during the snow monkey working period, each service per group or one service per group will have only `n` outages registered \n* **Working period**: the snow monkey only works during a working period. Here you can define when it starts and when it stops\n* **Outage duration**: here you can define the bounds for the random outage duration when an outage is created on a service\n* **Impacted groups**: here you can define a list of service groups impacted by the snow monkey. If none is specified, then all service groups will be impacted\n\n## Faults\n\nWith the snow monkey, you can generate four types of faults\n\n* **Large request fault**: Add trailing bytes at the end of the request body (if any)\n* **Large response fault**: Add trailing bytes at the end of the response body\n* **Latency injection fault**: Add random response latency between two bounds\n* **Bad response injection fault**: Create predefined responses with custom headers, body and status code\n\nEach fault lets you define a ratio for impacted requests. If you specify a ratio of `0.2`, then 20% of the requests for the impacted service will be impacted by this fault\n\n@@@ div { .centered-img }\n\n@@@\n\nThen you just have to start the snow monkey and enjoy the show ;)\n\n@@@ div { .centered-img }\n\n@@@\n\n## Current outages\n\nIn the last section of the snow monkey page, you can see current outages (per service), when they started, their duration, etc.\n\n@@@ div { .centered-img }\n\n@@@"},{"name":"dev-portal.md","id":"/topics/dev-portal.md","url":"/topics/dev-portal.html","title":"Developer portal with Daikoku","content":"# Developer portal with Daikoku\n\nWhile Otoroshi is the perfect tool to manage your webapps from a technical point of view, it lacked a business perspective. 
This is not the case anymore with Daikoku.\n\nWhile Otoroshi is standalone, Daikoku is a developer portal that stands in front of Otoroshi and provides some business features.\n\nWhether you want to use Daikoku for the public APIs you want to monetize, or for your private APIs to provide some documentation, facilitation and self-service features, it will be the perfect portal for Otoroshi.\n\n@@@div { .plugin .platform }\n## Daikoku\n\nRun your first Daikoku with a simple jar or with one Docker command.\n\n\n
\nTry Daikoku \n
\n@link:[With jar](https://maif.github.io/daikoku/devmanual/getdaikoku/frombinaries.html)\n@link:[With Docker](https://maif.github.io/daikoku/devmanual/getdaikoku/fromdocker.html)\n@@@\n\n@@@div { .plugin .platform }\n## Contribute\n\nDaikoku is open source, so all contributions are welcome.\n\n\n@link:[Show the repository](https://github.com/MAIF/daikoku)\n@@@\n\n@@@div { .plugin .platform }\n## Documentation\n\nDaikoku and its UI are fully documented.\n\n\n@link:[Read the documentation](https://maif.github.io/daikoku/devmanual/)\n@@@\n\n"},{"name":"engine.md","id":"/topics/engine.md","url":"/topics/engine.html","title":"Proxy engine","content":"# Proxy engine\n\nStarting from the `1.5.3` release, otoroshi offers a new plugin that implements the next generation of the proxy engine. \nThis engine has been designed based on our 5 years of experience building, maintaining and running the previous one.\nIt tries to fix all the drawbacks we encountered during those years and greatly improve performance, user experience, reporting and debugging capabilities. \n\nThe new engine is fully plugin oriented in order to spend CPU cycles only on useful stuff. You can enable this plugin only on some domain names so you can easily A/B test the new engine. The new proxy engine is designed to be more reactive and more efficient generally. It is also designed to be very efficient on path routing, which wasn't the old engine's strong suit.\n\nStarting from version `16.0.0`, this engine is enabled by default on any new otoroshi cluster. 
In a future version, the engine will be enabled for any new or existing otoroshi cluster.\n\n## Enabling the new engine\n\nAll freshly started Otoroshi instances have the new proxy engine enabled by default. For the others, to enable the new proxy engine on an otoroshi instance, just add the plugin in the `global plugins` section of the danger zone, inject the default configuration, enable it, and in `domains` add the values of the desired domains (let's say we want to use the new engine on `api.foo.bar`; it is possible to use `*.foo.bar` if that's what you want to do).\n\nThe next time a request hits the `api.foo.bar` domain, the new engine will handle it instead of the previous one.\n\n```json\n{\n \"NextGenProxyEngine\" : {\n \"enabled\" : true,\n \"debug_headers\" : false,\n \"reporting\": true,\n \"domains\" : [ \"api.foo.bar\" ],\n \"deny_domains\" : [ ]\n }\n}\n```\n\nIf you need to enable a global plugin with the new engine, you can add the following configuration in the `global plugins` configuration object \n\n```javascript\n{\n ...\n \"ng\": {\n \"slots\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.W3CTracing\",\n \"enabled\": true,\n \"include\": [],\n \"exclude\": [],\n \"config\": {\n \"baggage\": {\n \"foo\": \"bar\"\n }\n }\n },\n {\n \"plugin\": \"cp:otoroshi.next.plugins.wrappers.RequestSinkWrapper\",\n \"enabled\": true,\n \"include\": [],\n \"exclude\": [],\n \"config\": {\n \"plugin\": \"cp:otoroshi.plugins.apikeys.ClientCredentialService\",\n \"ClientCredentialService\": {\n \"domain\": \"ccs-next-gen.oto.tools\",\n \"expiration\": 3600000,\n \"defaultKeyPair\": \"otoroshi-jwt-signing\",\n \"secure\": false\n }\n }\n }\n ]\n }\n ...\n}\n```\n\n## Entities\n\nThis plugin introduces new entities that will replace (one day maybe) service descriptors:\n\n - `routes`: a unique routing rule based on hostname, path, method and headers that will execute a bunch of plugins\n - `backends`: a list of targets to contact a backend\n\n## 
Entities sync\n\nA new behavior introduced with the new proxy engine is the entities sync job. To avoid unnecessary operations on the underlying datastore when routing requests, a new job has been set up in otoroshi that synchronizes the content of the datastore (at least a part of it) with an in-memory cache. Because of this, the propagation of a change between an admin api call and its actual effect on routing can take longer than before. When a node creates, updates, or deletes an entity via the admin api, other nodes need to wait for the next poll to purge the old cached entity and start using the new one. You can change the interval between syncs with the configuration key `otoroshi.next.state-sync-interval` or the env. variable `OTOROSHI_NEXT_STATE_SYNC_INTERVAL`. The default value is `10000` and the unit is `milliseconds`.\n\n@@@ warning\nBecause of entities sync, memory consumption of otoroshi will be significantly higher than in previous versions. You can use the `otoroshi.next.monitor-proxy-state-size=true` config (or `OTOROSHI_NEXT_MONITOR_PROXY_STATE_SIZE` env. variable) to monitor the actual memory size of the entities cache. This will produce the `ng-proxy-state-size-monitoring` metric in standard otoroshi metrics\n@@@\n\n## Automatic conversion\n\nThe new engine uses new entities for its configuration, but in order to facilitate the transition between the old world and the new world, all the `service descriptors` of an otoroshi instance are automatically converted live into `routes` periodically. Any `service descriptor` should still work as expected through the new engine while enjoying all the perks.\n\n@@@ warning\nthe experimental nature of the engine can imply unexpected behaviors for converted service descriptors\n@@@\n\n## Routing\n\nthe new proxy engine introduces a new router that has enhanced capabilities and performance. 
The router can handle thousands of route declarations without compromising performance.\n\nThe new router allows routes to be matched on a combination of\n\n* hostname\n* path\n* header values\n * where values can be `exact_value`, or `Regex(value_regex)`, or `Wildcard(value_with_*)`\n* query param values\n * where values can be `exact_value`, or `Regex(value_regex)`, or `Wildcard(value_with_*)`\n\npath matching works \n\n* exactly\n * matches `/api/foo` with `/api/foo` and not with `/api/foo/bar`\n* starting with value (default behavior, like the previous engine)\n * matches `/api/foo` with `/api/foo` but also with `/api/foo/bar`\n\npath matching can also include wildcard paths and even path params\n\n* plain old path: `subdomain.domain.tld/api/users`\n* wildcard path: `subdomain.domain.tld/api/users/*/bills`\n* named path params: `subdomain.domain.tld/api/users/:id/bills`\n* named regex path params: `subdomain.domain.tld/api/users/$id<[0-9]+>/bills`\n\nhostname matching works on \n\n* exact values\n * `subdomain.domain.tld`\n* wildcard values like\n * `*.domain.tld`\n * `subdomain.*.tld`\n\nas path matching can now include named path params, it is possible to perform a full url rewrite on the target path like \n\n* input: `subdomain.domain.tld/api/users/$id<[0-9]+>/bills`\n* output: `target.domain.tld/apis/v1/basic_users/${req.pathparams.id}/all_bills`\n\n## Plugins\n\nthe new route entity defines a plugin pipeline where any plugin can be enabled or not and can be active only on some paths. \nEach plugin slot in the pipeline holds the plugin id and the plugin configuration. 
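
As a quick illustration of per-path plugin activation, here is a minimal sketch of how a plugin slot's `include` and `exclude` path lists can gate plugin execution. This is an assumed model for illustration only, not Otoroshi's actual matching code; in particular the trailing `/*` wildcard semantics shown here are an assumption.

```javascript
// Illustrative sketch only: models how a plugin slot's include/exclude
// path lists could decide whether a plugin runs for a given request path.
// The '/*' wildcard handling is an assumption, not Otoroshi internals.
function pluginApplies(slot, path) {
  if (!slot.enabled) return false;
  const matches = pattern =>
    pattern === path ||
    (pattern.endsWith('/*') && path.startsWith(pattern.slice(0, -1)));
  if (slot.exclude.some(matches)) return false; // explicit exclusion wins
  if (slot.include.length === 0) return true;   // empty include list = applies everywhere
  return slot.include.some(matches);
}

// A slot shaped like the plugin slot examples in this section
const apikeySlot = {
  enabled: true,
  plugin: 'cp:otoroshi.next.plugins.ApikeyCalls',
  include: [],
  exclude: ['/openapi.json'],
  config: {}
};

console.log(pluginApplies(apikeySlot, '/api/users'));    // true
console.log(pluginApplies(apikeySlot, '/openapi.json')); // false
```

With such a model, the request path is checked against every slot of the pipeline and only the applicable plugins are executed, in order.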
\n\nYou can also enable debugging only on a plugin instance instead of the whole route (see [the debugging section](#debugging))\n\n```javascript\n{ \n ...\n \"plugins\" : [ {\n \"enabled\" : true,\n \"debug\" : false,\n \"plugin\" : \"cp:otoroshi.next.plugins.OverrideHost\",\n \"include\" : [ ],\n \"exclude\" : [ ],\n \"config\" : { }\n }, {\n \"enabled\" : true,\n \"debug\" : false,\n \"plugin\" : \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"include\" : [ ],\n \"exclude\" : [ \"/openapi.json\" ],\n \"config\" : { }\n } ]\n}\n```\n\nyou can find the list of built-in plugins @ref:[here](../plugins/built-in-plugins.md)\n\n## Using legacy plugins\n\nif you need to use legacy otoroshi plugins with the new engine, you can use several wrappers in order to do so\n\n* `otoroshi.next.plugins.wrappers.PreRoutingWrapper`\n* `otoroshi.next.plugins.wrappers.AccessValidatorWrapper`\n* `otoroshi.next.plugins.wrappers.RequestSinkWrapper`\n* `otoroshi.next.plugins.wrappers.RequestTransformerWrapper`\n* `otoroshi.next.plugins.wrappers.CompositeWrapper`\n\nto use one, just declare a plugin slot with the right wrapper and, in the config, declare the `plugin` you want to use and its configuration like:\n\n```javascript\n{\n \"plugin\": \"cp:otoroshi.next.plugins.wrappers.PreRoutingWrapper\",\n \"enabled\": true,\n \"include\": [],\n \"exclude\": [],\n \"config\": {\n \"plugin\": \"cp:otoroshi.plugins.jwt.JwtUserExtractor\",\n \"JwtUserExtractor\": {\n \"verifier\" : \"$ref\",\n \"strict\" : true,\n \"namePath\" : \"name\",\n \"emailPath\": \"email\",\n \"metaPath\" : null\n }\n }\n}\n```\n\n## Reporting\n\nby default, any request hitting the new engine will generate an execution report with information about how the request pipeline steps were performed. It is possible to export those reports as `RequestFlowReport` events using classic data exporters. 
By default, exporting for reports is not enabled, you must enable the `export_reporting` flag on a `route` or `service`.\n\n```javascript\n{\n \"@id\": \"8efac472-07bc-4a80-8d27-4236309d7d01\",\n \"@timestamp\": \"2022-02-15T09:51:25.402+01:00\",\n \"@type\": \"RequestFlowReport\",\n \"@product\": \"otoroshi\",\n \"@serviceId\": \"service_548f13bb-a809-4b1d-9008-fae3b1851092\",\n \"@service\": \"demo-service\",\n \"@env\": \"prod\",\n \"route\": {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"service_dev_d54f11d0-18e2-4da4-9316-cf47733fd29a\",\n \"name\" : \"hey\",\n \"description\" : \"hey\",\n \"tags\" : [ \"env:prod\" ],\n \"metadata\" : { },\n \"enabled\" : true,\n \"debug_flow\" : true,\n \"export_reporting\" : false,\n \"groups\" : [ \"default\" ],\n \"frontend\" : {\n \"domains\" : [ \"hey-next-gen.oto.tools/\", \"hey.oto.tools/\" ],\n \"strip_path\" : true,\n \"exact\" : false,\n \"headers\" : { },\n \"methods\" : [ ]\n },\n \"backend\" : {\n \"targets\" : [ {\n \"id\" : \"127.0.0.1:8081\",\n \"hostname\" : \"127.0.0.1\",\n \"port\" : 8081,\n \"tls\" : false,\n \"weight\" : 1,\n \"protocol\" : \"HTTP/1.1\",\n \"ip_address\" : null,\n \"tls_config\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n } ],\n \"target_refs\" : [ ],\n \"root\" : \"/\",\n \"rewrite\" : false,\n \"load_balancing\" : {\n \"type\" : \"RoundRobin\"\n },\n \"client\" : {\n \"useCircuitBreaker\" : true,\n \"retries\" : 1,\n \"maxErrors\" : 20,\n \"retryInitialDelay\" : 50,\n \"backoffFactor\" : 2,\n \"callTimeout\" : 30000,\n \"callAndStreamTimeout\" : 120000,\n \"connectionTimeout\" : 10000,\n \"idleTimeout\" : 60000,\n \"globalTimeout\" : 30000,\n \"sampleInterval\" : 2000,\n \"proxy\" : { },\n \"customTimeouts\" : [ ],\n \"cacheConnectionSettings\" : {\n \"enabled\" : false,\n \"queueSize\" : 2048\n }\n }\n },\n \"backend_ref\" : null,\n \"plugins\" : [ ]\n },\n 
\"report\": {\n \"id\" : \"ab73707b3-946b-4853-92d4-4c38bbaac6d6\",\n \"creation\" : \"2022-02-15T09:51:25.402+01:00\",\n \"termination\" : \"2022-02-15T09:51:25.408+01:00\",\n \"duration\" : 5,\n \"duration_ns\" : 5905522,\n \"overhead\" : 4,\n \"overhead_ns\" : 4223215,\n \"overhead_in\" : 2,\n \"overhead_in_ns\" : 2687750,\n \"overhead_out\" : 1,\n \"overhead_out_ns\" : 1535465,\n \"state\" : \"Successful\",\n \"steps\" : [ {\n \"task\" : \"start-handling\",\n \"start\" : 1644915085402,\n \"start_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"stop\" : 1644915085402,\n \"stop_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 177430,\n \"ctx\" : null\n }, {\n \"task\" : \"check-concurrent-requests\",\n \"start\" : 1644915085402,\n \"start_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"stop\" : 1644915085402,\n \"stop_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 145242,\n \"ctx\" : null\n }, {\n \"task\" : \"find-route\",\n \"start\" : 1644915085402,\n \"start_fmt\" : \"2022-02-15T09:51:25.402+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 497119,\n \"ctx\" : {\n \"found_route\" : {\n \"_loc\" : {\n \"tenant\" : \"default\",\n \"teams\" : [ \"default\" ]\n },\n \"id\" : \"service_dev_d54f11d0-18e2-4da4-9316-cf47733fd29a\",\n \"name\" : \"hey\",\n \"description\" : \"hey\",\n \"tags\" : [ \"env:prod\" ],\n \"metadata\" : { },\n \"enabled\" : true,\n \"debug_flow\" : true,\n \"export_reporting\" : false,\n \"groups\" : [ \"default\" ],\n \"frontend\" : {\n \"domains\" : [ \"hey-next-gen.oto.tools/\", \"hey.oto.tools/\" ],\n \"strip_path\" : true,\n \"exact\" : false,\n \"headers\" : { },\n \"methods\" : [ ]\n },\n \"backend\" : {\n \"targets\" : [ {\n \"id\" : \"127.0.0.1:8081\",\n \"hostname\" : \"127.0.0.1\",\n \"port\" : 8081,\n \"tls\" : false,\n \"weight\" : 1,\n \"protocol\" : \"HTTP/1.1\",\n \"ip_address\" : 
null,\n \"tls_config\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n } ],\n \"target_refs\" : [ ],\n \"root\" : \"/\",\n \"rewrite\" : false,\n \"load_balancing\" : {\n \"type\" : \"RoundRobin\"\n },\n \"client\" : {\n \"useCircuitBreaker\" : true,\n \"retries\" : 1,\n \"maxErrors\" : 20,\n \"retryInitialDelay\" : 50,\n \"backoffFactor\" : 2,\n \"callTimeout\" : 30000,\n \"callAndStreamTimeout\" : 120000,\n \"connectionTimeout\" : 10000,\n \"idleTimeout\" : 60000,\n \"globalTimeout\" : 30000,\n \"sampleInterval\" : 2000,\n \"proxy\" : { },\n \"customTimeouts\" : [ ],\n \"cacheConnectionSettings\" : {\n \"enabled\" : false,\n \"queueSize\" : 2048\n }\n }\n },\n \"backend_ref\" : null,\n \"plugins\" : [ ]\n },\n \"matched_path\" : \"\",\n \"exact\" : true,\n \"params\" : { },\n \"matched_routes\" : [ \"service_dev_d54f11d0-18e2-4da4-9316-cf47733fd29a\" ]\n }\n }, {\n \"task\" : \"compute-plugins\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 105151,\n \"ctx\" : {\n \"disabled_plugins\" : [ ],\n \"filtered_plugins\" : [ ]\n }\n }, {\n \"task\" : \"tenant-check\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 26097,\n \"ctx\" : null\n }, {\n \"task\" : \"check-global-maintenance\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 14132,\n \"ctx\" : null\n }, {\n \"task\" : \"call-before-request-callbacks\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : 
\"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 56671,\n \"ctx\" : null\n }, {\n \"task\" : \"extract-tracking-id\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 5207,\n \"ctx\" : null\n }, {\n \"task\" : \"call-pre-route-plugins\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 39786,\n \"ctx\" : null\n }, {\n \"task\" : \"call-access-validator-plugins\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085403,\n \"stop_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 25311,\n \"ctx\" : null\n }, {\n \"task\" : \"enforce-global-limits\",\n \"start\" : 1644915085403,\n \"start_fmt\" : \"2022-02-15T09:51:25.403+01:00\",\n \"stop\" : 1644915085404,\n \"stop_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 296617,\n \"ctx\" : {\n \"remaining_quotas\" : {\n \"authorizedCallsPerSec\" : 10000000,\n \"currentCallsPerSec\" : 10000000,\n \"remainingCallsPerSec\" : 10000000,\n \"authorizedCallsPerDay\" : 10000000,\n \"currentCallsPerDay\" : 10000000,\n \"remainingCallsPerDay\" : 10000000,\n \"authorizedCallsPerMonth\" : 10000000,\n \"currentCallsPerMonth\" : 10000000,\n \"remainingCallsPerMonth\" : 10000000\n }\n }\n }, {\n \"task\" : \"choose-backend\",\n \"start\" : 1644915085404,\n \"start_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"stop\" : 1644915085404,\n \"stop_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 368899,\n \"ctx\" : {\n \"backend\" : {\n \"id\" : \"127.0.0.1:8081\",\n \"hostname\" : \"127.0.0.1\",\n \"port\" : 8081,\n \"tls\" : false,\n \"weight\" : 1,\n \"protocol\" 
: \"HTTP/1.1\",\n \"ip_address\" : null,\n \"tls_config\" : {\n \"certs\" : [ ],\n \"trustedCerts\" : [ ],\n \"mtls\" : false,\n \"loose\" : false,\n \"trustAll\" : false\n }\n }\n }\n }, {\n \"task\" : \"transform-request\",\n \"start\" : 1644915085404,\n \"start_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"stop\" : 1644915085404,\n \"stop_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 506363,\n \"ctx\" : null\n }, {\n \"task\" : \"call-backend\",\n \"start\" : 1644915085404,\n \"start_fmt\" : \"2022-02-15T09:51:25.404+01:00\",\n \"stop\" : 1644915085407,\n \"stop_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"duration\" : 2,\n \"duration_ns\" : 2163470,\n \"ctx\" : null\n }, {\n \"task\" : \"transform-response\",\n \"start\" : 1644915085407,\n \"start_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"stop\" : 1644915085407,\n \"stop_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 279887,\n \"ctx\" : null\n }, {\n \"task\" : \"stream-response\",\n \"start\" : 1644915085407,\n \"start_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"stop\" : 1644915085407,\n \"stop_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 382952,\n \"ctx\" : null\n }, {\n \"task\" : \"trigger-analytics\",\n \"start\" : 1644915085407,\n \"start_fmt\" : \"2022-02-15T09:51:25.407+01:00\",\n \"stop\" : 1644915085408,\n \"stop_fmt\" : \"2022-02-15T09:51:25.408+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 812036,\n \"ctx\" : null\n }, {\n \"task\" : \"request-success\",\n \"start\" : 1644915085408,\n \"start_fmt\" : \"2022-02-15T09:51:25.408+01:00\",\n \"stop\" : 1644915085408,\n \"stop_fmt\" : \"2022-02-15T09:51:25.408+01:00\",\n \"duration\" : 0,\n \"duration_ns\" : 0,\n \"ctx\" : null\n } ]\n }\n}\n```\n\n## Debugging\n\nwith the new reporting capabilities, the new engine also have debugging capabilities built in. 
If you enable the `debug_flow` flag on a route (or service), the resulting `RequestFlowReport` will be enriched with contextual information between each plugin of the route plugin pipeline\n\n@@@ note\nyou can also use the `Try it` feature of the new route designer UI to get debug reports automatically for a specific call\n@@@\n\n## HTTP traffic capture\n\nusing the `capture` flag, a `TrafficCaptureEvent` is generated for each http request/response. This event will contain the request and response bodies. Those events can be exported using @ref:[data exporters](../entities/data-exporters.md) as usual. You can also use the @ref:[GoReplay file exporter](../entities/data-exporters.md#goreplay-file) that is specifically designed to ingest those events and create [GoReplay](https://goreplay.org/) files (`.gor`)\n\n@@@ warning\nthis feature can have a real impact on CPU and RAM consumption\n@@@\n\n```json\n{\n \"@id\": \"d5998b0c4-cb08-43e6-9921-27472c7a56e0\",\n \"@timestamp\": 1651828801115,\n \"@type\": \"TrafficCaptureEvent\",\n \"@product\": \"otoroshi\",\n \"@serviceId\": \"route_2b2670879-131c-423d-b755-470c7b1c74b1\",\n \"@service\": \"test-server\",\n \"@env\": \"prod\",\n \"route\": {\n \"id\": \"route_2b2670879-131c-423d-b755-470c7b1c74b1\",\n \"name\": \"test-server\"\n },\n \"request\": {\n \"id\": \"152250645825034725600000\",\n \"int_id\": 115,\n \"method\": \"POST\",\n \"headers\": {\n \"Host\": \"test-server-next-gen.oto.tools:9999\",\n \"Accept\": \"*/*\",\n \"Cookie\": \"fifoo=fibar\",\n \"User-Agent\": \"curl/7.64.1\",\n \"Content-Type\": \"application/json\",\n \"Content-Length\": \"13\",\n \"Remote-Address\": \"127.0.0.1:57660\",\n \"Timeout-Access\": \"\",\n \"Raw-Request-URI\": \"/\",\n \"Tls-Session-Info\": \"Session(1651828041285|SSL_NULL_WITH_NULL_NULL)\"\n },\n \"cookies\": [\n {\n \"name\": \"fifoo\",\n \"value\": \"fibar\",\n \"path\": \"/\",\n \"domain\": null,\n \"http_only\": true,\n \"max_age\": null,\n \"secure\": false,\n \"same_site\": 
null\n }\n ],\n \"tls\": false,\n \"uri\": \"/\",\n \"path\": \"/\",\n \"version\": \"HTTP/1.1\",\n \"has_body\": true,\n \"remote\": \"127.0.0.1\",\n \"client_cert_chain\": null,\n \"body\": \"{\\\"foo\\\":\\\"bar\\\"}\"\n },\n \"backend_request\": {\n \"url\": \"http://localhost:3000/\",\n \"method\": \"POST\",\n \"headers\": {\n \"Host\": \"localhost\",\n \"Accept\": \"*/*\",\n \"Cookie\": \"fifoo=fibar\",\n \"User-Agent\": \"curl/7.64.1\",\n \"Content-Type\": \"application/json\",\n \"Content-Length\": \"13\"\n },\n \"version\": \"HTTP/1.1\",\n \"client_cert_chain\": null,\n \"cookies\": [\n {\n \"name\": \"fifoo\",\n \"value\": \"fibar\",\n \"domain\": null,\n \"path\": \"/\",\n \"maxAge\": null,\n \"secure\": false,\n \"httpOnly\": true\n }\n ],\n \"id\": \"152260631569472064900000\",\n \"int_id\": 33,\n \"body\": \"{\\\"foo\\\":\\\"bar\\\"}\"\n },\n \"backend_response\": {\n \"status\": 200,\n \"headers\": {\n \"Date\": \"Fri, 06 May 2022 09:20:01 GMT\",\n \"Connection\": \"keep-alive\",\n \"Set-Cookie\": \"foo=bar\",\n \"Content-Type\": \"application/json\",\n \"Transfer-Encoding\": \"chunked\"\n },\n \"cookies\": [\n {\n \"name\": \"foo\",\n \"value\": \"bar\",\n \"domain\": null,\n \"path\": null,\n \"maxAge\": null,\n \"secure\": false,\n \"httpOnly\": false\n }\n ],\n \"id\": \"152260631569472064900000\",\n \"status_txt\": \"OK\",\n \"http_version\": \"HTTP/1.1\",\n \"body\": \"{\\\"headers\\\":{\\\"host\\\":\\\"localhost\\\",\\\"accept\\\":\\\"*/*\\\",\\\"user-agent\\\":\\\"curl/7.64.1\\\",\\\"content-type\\\":\\\"application/json\\\",\\\"cookie\\\":\\\"fifoo=fibar\\\",\\\"content-length\\\":\\\"13\\\"},\\\"method\\\":\\\"POST\\\",\\\"path\\\":\\\"/\\\",\\\"body\\\":\\\"{\\\\\\\"foo\\\\\\\":\\\\\\\"bar\\\\\\\"}\\\"}\"\n },\n \"response\": {\n \"id\": \"152250645825034725600000\",\n \"status\": 200,\n \"headers\": {\n \"Date\": \"Fri, 06 May 2022 09:20:01 GMT\",\n \"Connection\": \"keep-alive\",\n \"Set-Cookie\": \"foo=bar\",\n \"Content-Type\": 
\"application/json\",\n \"Transfer-Encoding\": \"chunked\"\n },\n \"cookies\": [\n {\n \"name\": \"foo\",\n \"value\": \"bar\",\n \"domain\": null,\n \"path\": null,\n \"maxAge\": null,\n \"secure\": false,\n \"httpOnly\": false\n }\n ],\n \"status_txt\": \"OK\",\n \"http_version\": \"HTTP/1.1\",\n \"body\": \"{\\\"headers\\\":{\\\"host\\\":\\\"localhost\\\",\\\"accept\\\":\\\"*/*\\\",\\\"user-agent\\\":\\\"curl/7.64.1\\\",\\\"content-type\\\":\\\"application/json\\\",\\\"cookie\\\":\\\"fifoo=fibar\\\",\\\"content-length\\\":\\\"13\\\"},\\\"method\\\":\\\"POST\\\",\\\"path\\\":\\\"/\\\",\\\"body\\\":\\\"{\\\\\\\"foo\\\\\\\":\\\\\\\"bar\\\\\\\"}\\\"}\"\n },\n \"user-agent-details\": null,\n \"origin-details\": null,\n \"instance-number\": 0,\n \"instance-name\": \"dev\",\n \"instance-zone\": \"local\",\n \"instance-region\": \"local\",\n \"instance-dc\": \"local\",\n \"instance-provider\": \"local\",\n \"instance-rack\": \"local\",\n \"cluster-mode\": \"Leader\",\n \"cluster-name\": \"otoroshi-leader-9hnv5HUXpbCZD7Ee\"\n}\n```\n\n## openapi import\n\nas the new router offers possibility to match exactly on a single path and a single method, and with the help of the `service` entity, it is now pretty easy to import openapi document as `route-compositions` entities. To do that, a new api has been made available to perform the translation. Be aware that this api **DOES NOT** save the entity and just return the result of the translation. 
\n\n```sh\ncurl -X POST \\\n -H 'Content-Type: application/json' \\\n -u admin-api-apikey-id:admin-api-apikey-secret \\\n 'http://otoroshi-api.oto.tools:8080/api/route-compositions/_openapi' \\\n -d '{\"domain\":\"oto-api-proxy.oto.tools\",\"openapi\":\"https://raw.githubusercontent.com/MAIF/otoroshi/master/otoroshi/public/openapi.json\"}'\n```\n\n@@@ div { .centered-img }\n\n@@@\n\n"},{"name":"events-and-analytics.md","id":"/topics/events-and-analytics.md","url":"/topics/events-and-analytics.html","title":"Events and analytics","content":"# Events and analytics\n\nOtoroshi is a fully traced solution: calls to services, access to the UI, creation of resources, etc.\n\n@@@ warning\nYou have to use [Elastic](https://www.elastic.co) to enable analytics features in Otoroshi\n@@@\n\n## Events\n\n* Analytics event\n* Gateway event\n* TCP event\n* Healthcheck event\n\n## Event log\n\nOtoroshi can read its own exported events from an Elasticsearch instance, set up in the danger zone. These events are available from the UI, at the following route: `https://xxxxx/bo/dashboard/events`.\n\nThe `Global events` page displays all events of **GatewayEvent** type. This page is a way to quickly read an interval of events and can be used in addition to a Kibana instance.\n\nFor each event, a list of information is displayed, along with an additional `content` button to view the full content of the event in JSON format. 
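
The interval-reading behavior of the `Global events` page can be mimicked offline on exported events. Below is a hedged sketch: the sample event array is invented for illustration, and while the `@type` and `@timestamp` field names follow the event samples in this documentation, the exact export layout depends on your data exporter.

```javascript
// Hypothetical sample of exported events; only @type and @timestamp matter here.
const events = [
  { '@type': 'GatewayEvent',      '@timestamp': 1644915085402, '@service': 'demo-service' },
  { '@type': 'RequestFlowReport', '@timestamp': 1644915085408, '@service': 'demo-service' },
  { '@type': 'GatewayEvent',      '@timestamp': 1644999999999, '@service': 'other-service' }
];

// Keep only GatewayEvent entries whose timestamp falls inside [from, to].
function gatewayEventsBetween(evts, from, to) {
  return evts.filter(e =>
    e['@type'] === 'GatewayEvent' &&
    e['@timestamp'] >= from &&
    e['@timestamp'] <= to
  );
}

console.log(gatewayEventsBetween(events, 1644915085000, 1644915086000).length); // 1
```

Note that `@timestamp` is shown here as epoch milliseconds, as in the `TrafficCaptureEvent` sample; other events may serialize it as an ISO date string, in which case a parse step is needed first.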
\n\n## Alerts\n\n* `MaxConcurrentRequestReachedAlert`: raised when the number of handled requests is greater than the concurrent requests limit set in the global configuration of Otoroshi\n* `CircuitBreakerOpenedAlert`: raised when the circuit breaker passes from closed to open\n* `CircuitBreakerClosedAlert`: raised when the circuit breaker passes from open to closed\n* `SessionDiscardedAlert`: sent when an admin discarded an admin session\n* `SessionsDiscardedAlert`: sent when an admin discarded all admin sessions\n* `PanicModeAlert`: sent when panic mode is enabled\n* `OtoroshiExportAlert`: sent when the otoroshi global configuration is exported\n* `U2FAdminDeletedAlert`: sent when an admin has deleted another admin user\n* `BlackListedBackOfficeUserAlert`: sent when a blacklisted user has tried to access the UI\n* `AdminLoggedInAlert`: sent when an admin user has logged in to the UI\n* `AdminFirstLogin`: sent when an admin user has successfully logged in to the UI for the first time\n* `AdminLoggedOutAlert`: sent when an admin user has logged out from Otoroshi\n* `GlobalConfigModification`: sent when an admin user has changed the global configuration of Otoroshi\n* `RevokedApiKeyUsageAlert`: sent when an admin user has revoked an apikey\n* `ServiceGroupCreatedAlert`: sent when an admin user has created a service group\n* `ServiceGroupUpdatedAlert`: sent when an admin user has updated a service group\n* `ServiceGroupDeletedAlert`: sent when an admin user has deleted a service group\n* `ServiceCreatedAlert`: sent when an admin user has created a tcp service\n* `ServiceUpdatedAlert`: sent when an admin user has updated a tcp service\n* `ServiceDeletedAlert`: sent when an admin user has deleted a tcp service\n* `ApiKeyCreatedAlert`: sent when an admin user has created a new apikey\n* `ApiKeyUpdatedAlert`: sent when an admin user has updated an apikey\n* `ApiKeyDeletedAlert`: sent when an admin user has deleted an apikey\n\n## Audit\n\nWith 
Otoroshi, any admin action and any suspicious/alert action is recorded. These records are stored in Otoroshi’s datastore (only the last n records, defined by the `otoroshi.events.maxSize` config key). All the records can be sent through the analytics mechanism (WebHook, Kafka, Elastic) for external and/or further usage. We recommend shipping those records elsewhere for security reasons.\n\nOtoroshi keeps the following list of information for each executed action:\n\n* `Date`: moment of the action\n* `User`: name of the owner\n* `From`: IP of the user concerned\n* `Action`: action performed by the person. The possible actions are:\n\n * `ACCESS_APIKEY`: User accessed an apikey\n * `ACCESS_ALL_APIKEYS`: User accessed all apikeys\n * `CREATE_APIKEY`: User created an apikey\n * `UPDATE_APIKEY`: User updated an apikey\n * `DELETE_APIKEY`: User deleted an apikey\n * `ACCESS_AUTH_MODULE`: User accessed an Auth. module\n * `ACCESS_ALL_AUTH_MODULES`: User accessed all Auth. modules\n * `CREATE_AUTH_MODULE`: User created an Auth. module\n * `UPDATE_AUTH_MODULE`: User updated an Auth. module\n * `DELETE_AUTH_MODULE`: User deleted an Auth. module\n * `ACCESS_CERTIFICATE`: User accessed a certificate\n * `ACCESS_ALL_CERTIFICATES`: User accessed all certificates\n * `CREATE_CERTIFICATE`: User created a certificate\n * `UPDATE_CERTIFICATE`: User updated a certificate\n * `DELETE_CERTIFICATE`: User deleted a certificate\n * `ACCESS_CLIENT_CERT_VALIDATOR`: User accessed a client cert. validator\n * `ACCESS_ALL_CLIENT_CERT_VALIDATORS`: User accessed all client cert. validators\n * `CREATE_CLIENT_CERT_VALIDATOR`: User created a client cert. validator\n * `UPDATE_CLIENT_CERT_VALIDATOR`: User updated a client cert. validator\n * `DELETE_CLIENT_CERT_VALIDATOR`: User deleted a client cert. 
validator\n * `ACCESS_DATA_EXPORTER_CONFIG`: User accessed a data exporter config\n * `ACCESS_ALL_DATA_EXPORTER_CONFIG`: User accessed all data exporter config\n * `CREATE_DATA_EXPORTER_CONFIG`: User created a data exporter config\n * `UPDATE_DATA_EXPORTER_CONFIG`: User updated a data exporter config\n * `DELETE_DATA_EXPORTER_CONFIG`: User deleted a data exporter config\n * `ACCESS_GLOBAL_JWT_VERIFIER`: User accessed a global jwt verifier\n * `ACCESS_ALL_GLOBAL_JWT_VERIFIERS`: User accessed all global jwt verifiers\n * `CREATE_GLOBAL_JWT_VERIFIER`: User created a global jwt verifier\n * `UPDATE_GLOBAL_JWT_VERIFIER`: User updated a global jwt verifier\n * `DELETE_GLOBAL_JWT_VERIFIER`: User deleted a global jwt verifier\n * `ACCESS_SCRIPT`: User accessed a script\n * `ACCESS_ALL_SCRIPTS`: User accessed all scripts\n * `CREATE_SCRIPT`: User created a script\n * `UPDATE_SCRIPT`: User updated a script\n * `DELETE_SCRIPT`: User deleted a Script\n * `ACCESS_SERVICES_GROUP`: User accessed a service group\n * `ACCESS_ALL_SERVICES_GROUPS`: User accessed all services groups\n * `CREATE_SERVICE_GROUP`: User created a service group\n * `UPDATE_SERVICE_GROUP`: User updated a service group\n * `DELETE_SERVICE_GROUP`: User deleted a service group\n * `ACCESS_SERVICES_FROM_SERVICES_GROUP`: User accessed all services from a services group\n * `ACCESS_TCP_SERVICE`: User accessed a tcp service\n * `ACCESS_ALL_TCP_SERVICES`: User accessed all tcp services\n * `CREATE_TCP_SERVICE`: User created a tcp service\n * `UPDATE_TCP_SERVICE`: User updated a tcp service\n * `DELETE_TCP_SERVICE`: User deleted a tcp service\n * `ACCESS_TEAM`: User accessed a Team\n * `ACCESS_ALL_TEAMS`: User accessed all teams\n * `CREATE_TEAM`: User created a team\n * `UPDATE_TEAM`: User updated a team\n * `DELETE_TEAM`: User deleted a team\n * `ACCESS_TENANT`: User accessed a Tenant\n * `ACCESS_ALL_TENANTS`: User accessed all tenants\n * `CREATE_TENANT`: User created a tenant\n * `UPDATE_TENANT`: User updated a 
tenant\n * `DELETE_TENANT`: User deleted a tenant\n * `SERVICESEARCH`: User searched for a service\n * `ACTIVATE_PANIC_MODE`: Admin activated panic mode\n\n\n* `Message`: explicit message about the action (example: the `SERVICESEARCH` action happened when a `user searched for a service`)\n* `Content`: all information in JSON format\n\n## Global metrics\n\nThe global metrics are displayed on the index page of the Otoroshi UI. Otoroshi provides information about:\n\n* the number of requests served\n* the amount of data received and sent\n* the number of concurrent requests\n* the number of requests per second\n* the current overhead\n\nMore metrics can be found on the **Global analytics** page (available at https://xxxxxx/bo/dashboard/stats).\n\n## Monitoring services\n\nOnce you have declared services, you can monitor them with Otoroshi. \n\nLet's start by setting up Otoroshi to push events to an elastic cluster via a data exporter. Then you can set up Otoroshi to read events from an elastic cluster. Go to `settings (cog icon) / Danger Zone` and expand the `Analytics: Elastic cluster (read)` section.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service healthcheck\n\nIf you have defined a health check URL in the service descriptor, you can access the health check page from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service live stats\n\nYou can also monitor live stats like the total of served requests, average response time, average overhead, etc. The live stats page can be accessed from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n### Service analytics\n\nYou can also get some aggregated metrics. The analytics page can be accessed from the sidebar of the service page.\n\n@@@ div { .centered-img }\n\n@@@\n\n## New proxy engine\n\n### Debug reporting\n\nwhen using the @ref:[new proxy engine](./engine.md), when a route or the global config. 
enables debug reporting using the `debug_flow` flag, events of type `RequestFlowReport` are generated\n\n### Traffic capture\n\nwhen using the @ref:[new proxy engine](./engine.md), when a route or the global config. enables traffic capture using the `capture` flag, events of type `TrafficCaptureEvent` are generated. They contain everything that composes otoroshi's input http request and output http response\n"},{"name":"expression-language.md","id":"/topics/expression-language.md","url":"/topics/expression-language.html","title":"Expression language","content":"# Expression language\n\n- [Documentation and examples](#documentation-and-examples)\n- [Test the expression language](#test-the-expression-language)\n\nThe expression language provides an important mechanism for accessing and manipulating Otoroshi data on different inputs. For example, with this mechanism, you can map a claim of an incoming token directly to a claim of a generated token (using @ref:[JWT verifiers](../entities/jwt-verifiers.md)). You can add information from the traversed service descriptor, such as the domain or the name of the service. This information can be useful to the backend service.\n\n## Documentation and examples\n\n@@@div { #expressions }\n \n@@@\n\nIf an input contains a string starting with `${`, Otoroshi will try to evaluate the content. If the content doesn't match a known expression,\nthe 'bad-expr' value will be set.\n\n## Test the expression language\n\nYou can check that you get the same values as on the right-hand side by creating the following services. 
\n\n```sh\n# Let's start by downloading the latest Otoroshi.\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v16.5.0-dev/otoroshi.jar'\n\n# Once downloading, run Otoroshi.\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar \n\n# Create an authentication module to protect the following route.\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/auths \\\n-H \"Otoroshi-Client-Id: admin-api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: admin-api-apikey-secret\" \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\"type\":\"basic\",\"id\":\"auth_mod_in_memory_auth\",\"name\":\"in-memory-auth\",\"desc\":\"in-memory-auth\",\"users\":[{\"name\":\"User Otoroshi\",\"password\":\"$2a$10$oIf4JkaOsfiypk5ZK8DKOumiNbb2xHMZUkYkuJyuIqMDYnR/zXj9i\",\"email\":\"user@foo.bar\",\"metadata\":{\"username\":\"roger\"},\"tags\":[\"foo\"],\"webauthn\":null,\"rights\":[{\"tenant\":\"*:r\",\"teams\":[\"*:r\"]}]}],\"sessionCookieValues\":{\"httpOnly\":true,\"secure\":false}}\nEOF\n\n\n# Create a proxy of the mirror.otoroshi.io on http://api.oto.tools:8080\ncurl -X POST http://otoroshi-api.oto.tools:8080/api/routes \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-H 'Content-Type: application/json; charset=utf-8' \\\n-d @- <<'EOF'\n{\n \"id\": \"expression-language-api-service\",\n \"name\": \"expression-language\",\n \"enabled\": true,\n \"frontend\": {\n \"domains\": [\n \"api.oto.tools/\"\n ]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ]\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.OverrideHost\"\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.ApikeyCalls\",\n \"config\": {\n \"validate\": true,\n \"mandatory\": true,\n \"pass_with_user\": true,\n \"wipe_backend_request\": true,\n \"update_quotas\": true\n },\n \"plugin_index\": {\n \"validate_access\": 1,\n \"transform_request\": 2,\n 
\"match_route\": 0\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AuthModule\",\n \"config\": {\n \"pass_with_apikey\": true,\n \"auth_module\": null,\n \"module\": \"auth_mod_in_memory_auth\"\n },\n \"plugin_index\": {\n \"validate_access\": 1\n }\n },\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.AdditionalHeadersIn\",\n \"config\": {\n \"headers\": {\n \"my-expr-header.apikey.unknown-tag\": \"${apikey.tags['0':'no-found-tag']}\",\n \"my-expr-header.request.uri\": \"${req.uri}\",\n \"my-expr-header.ctx.replace-field-all-value\": \"${ctx.foo.replaceAll('o','a')}\",\n \"my-expr-header.env.unknown-field\": \"${env.java_h:not-found-java_h}\",\n \"my-expr-header.service-id\": \"${service.id}\",\n \"my-expr-header.ctx.unknown-fields\": \"${ctx.foob|ctx.foot:not-found}\",\n \"my-expr-header.apikey.metadata\": \"${apikey.metadata.foo}\",\n \"my-expr-header.request.protocol\": \"${req.protocol}\",\n \"my-expr-header.service-domain\": \"${service.domain}\",\n \"my-expr-header.token.unknown-foo-field\": \"${token.foob:not-found-foob}\",\n \"my-expr-header.service-unknown-group\": \"${service.groups['0':'unkown group']}\",\n \"my-expr-header.env.path\": \"${env.PATH}\",\n \"my-expr-header.request.unknown-header\": \"${req.headers.foob:default value}\",\n \"my-expr-header.service-name\": \"${service.name}\",\n \"my-expr-header.token.foo-field\": \"${token.foob|token.foo}\",\n \"my-expr-header.request.path\": \"${req.path}\",\n \"my-expr-header.ctx.geolocation\": \"${ctx.geolocation.foo}\",\n \"my-expr-header.token.unknown-fields\": \"${token.foob|token.foob2:not-found}\",\n \"my-expr-header.request.unknown-query\": \"${req.query.foob:default value}\",\n \"my-expr-header.service-subdomain\": \"${service.subdomain}\",\n \"my-expr-header.date\": \"${date}\",\n \"my-expr-header.ctx.replace-field-value\": \"${ctx.foo.replace('o','a')}\",\n \"my-expr-header.apikey.name\": \"${apikey.name}\",\n \"my-expr-header.request.full-url\": 
\"${req.fullUrl}\",\n \"my-expr-header.ctx.default-value\": \"${ctx.foob:other}\",\n \"my-expr-header.service-tld\": \"${service.tld}\",\n \"my-expr-header.service-metadata\": \"${service.metadata.foo}\",\n \"my-expr-header.ctx.useragent\": \"${ctx.useragent.foo}\",\n \"my-expr-header.service-env\": \"${service.env}\",\n \"my-expr-header.request.host\": \"${req.host}\",\n \"my-expr-header.config.unknown-port-field\": \"${config.http.ports:not-found}\",\n \"my-expr-header.request.domain\": \"${req.domain}\",\n \"my-expr-header.token.replace-header-value\": \"${token.foo.replace('o','a')}\",\n \"my-expr-header.service-group\": \"${service.groups['0']}\",\n \"my-expr-header.ctx.foo\": \"${ctx.foo}\",\n \"my-expr-header.apikey.tag\": \"${apikey.tags['0']}\",\n \"my-expr-header.service-unknown-metadata\": \"${service.metadata.test:default-value}\",\n \"my-expr-header.apikey.id\": \"${apikey.id}\",\n \"my-expr-header.request.header\": \"${req.headers.foo}\",\n \"my-expr-header.request.method\": \"${req.method}\",\n \"my-expr-header.ctx.foo-field\": \"${ctx.foob|ctx.foo}\",\n \"my-expr-header.config.port\": \"${config.http.port}\",\n \"my-expr-header.token.unknown-foo\": \"${token.foo}\",\n \"my-expr-header.date-with-format\": \"${date.format('yyy-MM-dd')}\",\n \"my-expr-header.apikey.unknown-metadata\": \"${apikey.metadata.myfield:default value}\",\n \"my-expr-header.request.query\": \"${req.query.foo}\",\n \"my-expr-header.token.replace-header-all-value\": \"${token.foo.replaceAll('o','a')}\"\n }\n }\n }\n ]\n}\nEOF\n```\n\nCreate an apikey or use the default generate apikey.\n\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/apikeys' \\\n-H \"Content-type: application/json\" \\\n-u admin-api-apikey-id:admin-api-apikey-secret \\\n-d @- <<'EOF'\n{\n \"clientId\": \"api-apikey-id\",\n \"clientSecret\": \"api-apikey-secret\",\n \"clientName\": \"api-apikey-name\",\n \"description\": \"api-apikey-id-description\",\n \"authorizedGroup\": \"default\",\n 
\"enabled\": true,\n \"throttlingQuota\": 10,\n \"dailyQuota\": 10,\n \"monthlyQuota\": 10,\n \"tags\": [\"foo\"],\n \"metadata\": {\n \"fii\": \"bar\"\n }\n}\nEOF\n```\n\nThen try to call the first service.\n\n```sh\ncurl http://api.oto.tools:8080/api/\\?foo\\=bar \\\n-H \"Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyLCJmb28iOiJiYXIifQ.lV130dFXR3bNtWBkwwf9dLmfsRVmnZhfYF9gvAaRzF8\" \\\n-H \"Otoroshi-Client-Id: api-apikey-id\" \\\n-H \"Otoroshi-Client-Secret: api-apikey-secret\" \\\n-H \"foo: bar\" | jq\n```\n\nThis will returns the list of the received headers by the mirror.\n\n```json\n{\n ...\n \"headers\": {\n ...\n \"my-expr-header.date\": \"2021-11-26T10:54:51.112+01:00\",\n \"my-expr-header.ctx.foo\": \"no-ctx-foo\",\n \"my-expr-header.env.path\": \"/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin\",\n \"my-expr-header.apikey.id\": \"admin-api-apikey-id\",\n \"my-expr-header.apikey.tag\": \"one-tag\",\n \"my-expr-header.service-id\": \"expression-language-api-service\",\n \"my-expr-header.apikey.name\": \"Otoroshi Backoffice ApiKey\",\n \"my-expr-header.config.port\": \"8080\",\n \"my-expr-header.request.uri\": \"/api/?foo=bar\",\n \"my-expr-header.service-env\": \"prod\",\n \"my-expr-header.service-tld\": \"oto.tools\",\n \"my-expr-header.request.host\": \"api.oto.tools:8080\",\n \"my-expr-header.request.path\": \"/api/\",\n \"my-expr-header.service-name\": \"expression-language\",\n \"my-expr-header.ctx.foo-field\": \"no-ctx-foob-foo\",\n \"my-expr-header.ctx.useragent\": \"no-ctx-useragent.foo\",\n \"my-expr-header.request.query\": \"bar\",\n \"my-expr-header.service-group\": \"default\",\n \"my-expr-header.request.domain\": \"api.oto.tools\",\n \"my-expr-header.request.header\": \"bar\",\n \"my-expr-header.request.method\": \"GET\",\n \"my-expr-header.service-domain\": \"api.oto.tools\",\n \"my-expr-header.apikey.metadata\": \"bar\",\n \"my-expr-header.ctx.geolocation\": 
\"no-ctx-geolocation.foo\",\n \"my-expr-header.token.foo-field\": \"no-token-foob-foo\",\n \"my-expr-header.date-with-format\": \"2021-11-26\",\n \"my-expr-header.request.full-url\": \"http://api.oto.tools:8080/api/?foo=bar\",\n \"my-expr-header.request.protocol\": \"http\",\n \"my-expr-header.service-metadata\": \"no-meta-foo\",\n \"my-expr-header.ctx.default-value\": \"other\",\n \"my-expr-header.env.unknown-field\": \"not-found-java_h\",\n \"my-expr-header.service-subdomain\": \"api\",\n \"my-expr-header.token.unknown-foo\": \"no-token-foo\",\n \"my-expr-header.apikey.unknown-tag\": \"one-tag\",\n \"my-expr-header.ctx.unknown-fields\": \"not-found\",\n \"my-expr-header.token.unknown-fields\": \"not-found\",\n \"my-expr-header.request.unknown-query\": \"default value\",\n \"my-expr-header.service-unknown-group\": \"default\",\n \"my-expr-header.request.unknown-header\": \"default value\",\n \"my-expr-header.apikey.unknown-metadata\": \"default value\",\n \"my-expr-header.ctx.replace-field-value\": \"no-ctx-foo\",\n \"my-expr-header.token.unknown-foo-field\": \"not-found-foob\",\n \"my-expr-header.service-unknown-metadata\": \"default-value\",\n \"my-expr-header.config.unknown-port-field\": \"not-found\",\n \"my-expr-header.token.replace-header-value\": \"no-token-foo\",\n \"my-expr-header.ctx.replace-field-all-value\": \"no-ctx-foo\",\n \"my-expr-header.token.replace-header-all-value\": \"no-token-foo\",\n }\n}\n```\n\nThen try the second call to the webapp. Navigate on your browser to `http://webapp.oto.tools:8080`. 
Log in with `user@foo.bar` as the username and `password` as the password.\n\nThis should output:\n\n```json\n{\n ...\n \"headers\": {\n ...\n \"my-expr-header.user\": \"User Otoroshi\",\n \"my-expr-header.user.email\": \"user@foo.bar\",\n \"my-expr-header.user.metadata\": \"roger\",\n \"my-expr-header.user.profile-field\": \"User Otoroshi\",\n \"my-expr-header.user.unknown-metadata\": \"not-found\",\n \"my-expr-header.user.unknown-profile-field\": \"not-found\"\n }\n}\n```"},{"name":"graphql-composer.md","id":"/topics/graphql-composer.md","url":"/topics/graphql-composer.html","title":"GraphQL Composer Plugin","content":"# GraphQL Composer Plugin\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\n> GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.\n[Official GraphQL website](https://graphql.org/)\n\nRESTful and GraphQL API development has become one of the most popular activities for companies in recent years. In fast-scaling companies, the multiplication of clients can make the number of API needs grow quickly.\n\nOtoroshi comes with a solution to meet your customers' needs without constantly creating and recreating APIs: the `GraphQL composer plugin`. The GraphQL Composer is a useful plugin to build a GraphQL API from multiple different sources. These sources can be REST APIs, GraphQL APIs or anything that supports the HTTP protocol. The plugin can define and expose, for each of your clients, a specific GraphQL schema that corresponds exactly to their needs.\n\n@@@ div { .centered-img }\n\n@@@\n\n\n## Tutorial\n\nLet's take an example to get a better view of this plugin. 
We want to build a schema with two types: \n\n* a user with a name and a password \n* a country with a name and its users.\n\nTo build this schema, we need to use three custom directives. A `directive` decorates part of a GraphQL schema or operation with additional configuration. Directives are preceded by the @ character, like so:\n\n* @ref:[rest](#directives) : to call an HTTP REST service with dynamic path params\n* @ref:[permission](#directives) : to restrict access to sensitive fields\n* @ref:[graphql](#directives) : to call a GraphQL service by passing a url and the associated query\n\nThe final schema of our tutorial should look like this:\n```graphql\ntype Country {\n name: String\n users: [User] @rest(url: \"http://localhost:5000/countries/${item.name}/users\")\n}\n\ntype User {\n name: String\n password: String @permission(value: \"ADMIN\")\n}\n\ntype Query {\n users: [User] @rest(url: \"http://localhost:5000/users\", paginate: true)\n user(id: String): User @rest(url: \"http://localhost:5000/users/${params.id}\")\n countries: [Country] @graphql(url: \"https://countries.trevorblades.com\", query: \"{ countries { name }}\", paginate: true)\n}\n```\n\nNow that you know the GraphQL Composer basics and how it works, let's configure it in our project:\n\n* create a route using the new Otoroshi router describing the previous countries API\n* add the GraphQL composer plugin\n* configure the plugin with the schema\n* try to call it\n\n@@@ div { .centered-img }\n\n@@@\n\n### Setup environment\n\nFirst of all, we need to download the latest Otoroshi.\n\n```sh\ncurl -L -o otoroshi.jar 'https://github.com/MAIF/otoroshi/releases/download/v1.5.15/otoroshi.jar'\n```\n\nNow, just run the command below to start Otoroshi, and watch the console output.\n\n```sh\njava -Dotoroshi.adminPassword=password -jar otoroshi.jar\n```\n\nNow, log in to [the UI](http://otoroshi.oto.tools:8080) with \n```sh\nuser = admin@otoroshi.io\npassword = password\n```\n\n### Create 
our countries API\n\nThe first thing to do in any new API is of course to create a `route`. We need a few pieces of information:\n\n* name: `My countries API`\n* frontend: exposed on `countries.oto.tools`\n* plugins: the list of plugins with only the `GraphQL composer` plugin\n\nLet's make a request call through the Otoroshi Admin API (with the default apikey), like the example below:\n```sh\ncurl -X POST 'http://otoroshi-api.oto.tools:8080/api/routes' \\\n -d '{\n \"id\": \"countries-api\",\n \"name\": \"My countries API\",\n \"frontend\": {\n \"domains\": [\"countries.oto.tools\"]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"plugin\": \"cp:otoroshi.next.plugins.GraphQLBackend\"\n }\n ]\n}' \\\n -H \"Content-type: application/json\" \\\n -u admin-api-apikey-id:admin-api-apikey-secret\n```\n\n### Build the countries API \n\nLet's continue our API by patching the configuration of the GraphQL plugin with the complete schema.\n\n```sh\ncurl -X PUT 'http://otoroshi-api.oto.tools:8080/api/routes/countries-api' \\\n -d '{\n \"id\": \"countries-api\",\n \"name\": \"My countries API\",\n \"frontend\": {\n \"domains\": [\n \"countries.oto.tools\"\n ]\n },\n \"backend\": {\n \"targets\": [\n {\n \"hostname\": \"mirror.otoroshi.io\",\n \"port\": 443,\n \"tls\": true\n }\n ],\n \"load_balancing\": {\n \"type\": \"RoundRobin\"\n }\n },\n \"plugins\": [\n {\n \"enabled\": true,\n \"plugin\": \"cp:otoroshi.next.plugins.GraphQLBackend\",\n \"config\": {\n \"schema\": \"type Country {\\n name: String\\n users: [User] @rest(url: \\\"http://localhost:8181/countries/${item.name}/users\\\", headers: \\\"{}\\\")\\n}\\n\\ntype Query {\\n users: [User] @rest(url: \\\"http://localhost:8181/users\\\", paginate: true, headers: \\\"{}\\\")\\n user(id: String): User @rest(url: \\\"http://localhost:8181/users/${params.id}\\\")\\n 
countries: [Country] @graphql(url: \\\"https://countries.trevorblades.com\\\", query: \\\"{ countries { name }}\\\", paginate: true)\\n}\\n\\ntype User {\\n name: String\\n password: String\\n}\\n\"\n }\n }\n ]\n}' \\\n -H \"Content-type: application/json\" \\\n -u admin-api-apikey-id:admin-api-apikey-secret\n```\n\nThe route is created, but it expects an API exposed on localhost:8181 to work. \n\nLet's create this simple API, which returns a list of users and countries, using Express as the HTTP server. It should look like the following snippet.\n\n```js\nconst express = require('express')\n\nconst app = express()\n\nconst users = [\n {\n name: 'Joe',\n password: 'password'\n },\n {\n name: 'John',\n password: 'password2'\n }\n]\n\nconst countries = [\n {\n name: 'Andorra',\n users: [users[0]]\n },\n {\n name: 'United Arab Emirates',\n users: [users[1]]\n }\n]\n\napp.get('/users', (_, res) => {\n return res.json(users)\n})\n\napp.get('/users/:name', (req, res) => {\n res.json(users.find(u => u.name === req.params.name))\n})\n\napp.get('/countries/:id/users', (req, res) => {\n const country = countries.find(c => c.name === req.params.id)\n\n if (country) \n return res.json(country.users)\n else \n return res.json([])\n})\n\napp.listen(8181, () => {\n console.log('Listening on 8181')\n})\n```\n\nLet's try to make a first call to our countries API.\n\n```sh\ncurl 'countries.oto.tools:8080/' \\\n--header 'Content-Type: application/json' \\\n--data-binary @- << EOF\n{\n \"query\": \"{\\n countries {\\n name\\n users {\\n name\\n }\\n }\\n}\"\n}\nEOF\n```\n\nYou should see the following content in your terminal.\n\n```json\n{\n \"data\": { \n \"countries\": [\n { \n \"name\":\"Andorra\",\n \"users\": [\n { \"name\":\"Joe\" }\n ]\n }\n ]\n }\n}\n```\n\nThe call graph should look like\n\n```\n1. Calls https://countries.trevorblades.com\n2. 
For each country:\n - extracts the name field\n - calls http://localhost:8181/countries/${country}/users to get the list of users for that country\n```\n\nYou may have noticed the `paginate` argument at the end of the graphql directive. It enables paging for clients, which can send `limit` and `offset` parameters. These parameters are used by the plugin to filter and reduce the content.\n\nLet's make a new call that returns no countries.\n\n```sh\ncurl 'countries.oto.tools:8080/' \\\n--header 'Content-Type: application/json' \\\n--data-binary @- << EOF\n{\n \"query\": \"{\\n countries(limit: 0) {\\n name\\n users {\\n name\\n }\\n }\\n}\"\n}\nEOF\n```\n\nYou should see the following content in your terminal.\n\n```json\n{\n \"data\": { \n \"countries\": []\n }\n}\n```\n\nLet's move on to the next section to secure the sensitive fields of our API.\n\n### Basics of permissions \n\nThe permission directives have been created to protect the fields of the GraphQL schema. The validation process starts by creating a `context` for each incoming request, based on the list of paths defined in the permissions field of the plugin. The permission paths can refer to the request data (url, headers, etc.), user credentials (apikey, etc.) and information about the matched route. The process then validates that the expected value or values are present in the `context`.\n\n@@@div { .simple-block }\n\n
\nPermission\n\n
\n\n*Arguments : value and unauthorized_value*\n\nThe permission directive can be used to secure a field on **one** value. The directive checks that a specific value is present in the `context`.\n\nTwo arguments are available: the first, named `value`, is required and designates the value to look for. The second, `unauthorized_value`, is optional and can be used to set the rejection message in the outgoing response.\n\n**Example**\n```js\ntype User {\n id: String @permission(\n value: \"FOO\", \n unauthorized_value: \"You're not authorized to get this field\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nAll permissions\n\n
\n\n*Arguments : values and unauthorized_value*\n\nThis directive works like the previous one, except that it takes a list of values that must all be found in the context.\n\n**Example**\n```js\ntype User {\n id: String @allpermissions(\n values: [\"FOO\", \"BAR\"], \n unauthorized_value: \"FOO and BAR could not be found\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nOne permissions of\n\n
\n*Arguments : values and unauthorized_value*\n\nThis directive takes a list of values and validates that at least one of them is in the context.\n\n**Example**\n```js\ntype User {\n id: String @onePermissionsOf(\n values: [\"FOO\", \"BAR\"], \n unauthorized_value: \"FOO or BAR could not be found\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nAuthorize\n\n
\n\n*Arguments : path, value and unauthorized_value*\n\nThe authorize directive has one more required argument, named `path`, which indicates the path to the value in the context. Unlike the last three directives, the authorize directive doesn't search the entire context but only the specified path.\n\n**Example**\n```js\ntype User {\n id: String @authorize(\n path: \"$.raw_request.headers.foo\", \n value: \"BAR\", \n unauthorized_value: \"Bar could not be found in the foo header\")\n}\n```\n@@@\n\nLet's restrict the password field to users whose requests carry a `role` header with the value `ADMIN`.\n\n1. Patch the configuration of the API by adding the permissions in the configuration of the plugin.\n```json\n...\n \"permissions\": [\"$.raw_request.headers.role\"]\n...\n```\n\n1. Add a directive on the password field in the schema\n```graphql\ntype User {\n name: String\n password: String @permission(value: \"ADMIN\")\n}\n```\n\nLet's make a call with the role header\n\n```sh\ncurl 'countries.oto.tools:8080/' \\\n--header 'Content-Type: application/json' \\\n--header 'role: ADMIN' \\\n--data-binary @- << EOF\n{\n \"query\": \"{\\n countries(limit: 0) {\\n name\\n users {\\n name\\n password\\n }\\n }\\n}\"\n}\nEOF\n```\n\nNow try to change the value of the role header\n\n```sh\ncurl 'countries.oto.tools:8080/' \\\n--header 'Content-Type: application/json' \\\n--header 'role: USER' \\\n--data-binary @- << EOF\n{\n \"query\": \"{\\n countries(limit: 0) {\\n name\\n users {\\n name\\n password\\n }\\n }\\n}\"\n}\nEOF\n```\n\nThe error message should look like \n\n```json\n{\n \"errors\": [\n {\n \"message\": \"You're not authorized\",\n \"path\": [\n \"countries\",\n 0,\n \"users\",\n 0,\n \"password\"\n ],\n ...\n }\n ]\n}\n```\n\n\n# Glossary\n\n## Directives\n\n@@@div { .simple-block }\n\n
\nRest\n\n
\n\n*Arguments : url, method, headers, timeout, data, response_path, response_filter, limit, offset, paginate*\n\nThe rest directive is used to expose servers that communicate using the HTTP protocol. The only required argument is the `url`.\n\n**Example**\n```js\ntype Query {\n users(limit: Int, offset: Int): [User] @rest(url: \"http://foo.oto.tools/users\", method: \"GET\")\n}\n```\n\nIt can be placed on a field of a query or of a type. To customize your urls, you can reference path parameters and fields of the current item with the `params` and `item` variables respectively.\n\n**Example**\n```js\ntype Country {\n name: String\n phone: String\n users: [User] @rest(url: \"http://foo.oto.tools/users/${item.name}\")\n}\n\ntype Query {\n user(id: String): User @rest(url: \"http://foo.oto.tools/users/${params.id}\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nGraphQL\n\n
\n\n*Arguments : url, method, headers, timeout, query, data, response_path, response_filter, limit, offset, paginate*\n\nThe graphql directive is used to call another GraphQL server.\n\nThe required arguments are the `url` and the `query`.\n\n**Example**\n```js\ntype Query {\n countries: [Country] @graphql(url: \"https://countries.trevorblades.com/\", query: \"{ countries { name phone }}\")\n}\n\ntype Country {\n name: String\n phone: String\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nSoap\n\n
\n*Arguments: all of the following arguments*\n\nThe soap directive is used to call a SOAP service. \n\n```js\ntype Query {\n randomNumber: String @soap(\n jq_response_filter: \".[\\\"soap:Envelope\\\"] | .[\\\"soap:Body\\\"] | .[\\\"m:NumberToWordsResponse\\\"] | .[\\\"m:NumberToWordsResult\\\"]\", \n url: \"https://www.dataaccess.com/webservicesserver/numberconversion.wso\", \n envelope: \"<soap:Envelope xmlns:soap=\\\"http://schemas.xmlsoap.org/soap/envelope/\\\"> \\n <soap:Body> \\n <NumberToWords xmlns=\\\"http://www.dataaccess.com/webservicesserver/\\\"> \\n <ubiNum>12</ubiNum> \\n </NumberToWords> \\n </soap:Body> \\n</soap:Envelope>\")\n}\n```\n\n\n##### Specific arguments\n\n| Argument | Type | Optional | Default value |\n| --------------------------- | --------- | -------- | ------------- |\n| envelope | *STRING* | | |\n| url | *STRING* | x | |\n| action | *STRING* | x | |\n| preserve_query | *BOOLEAN* | x | true |\n| charset | *STRING* | x | |\n| convert_request_body_to_xml | *BOOLEAN* | x | true |\n| jq_request_filter | *STRING* | x | |\n| jq_response_filter | *STRING* | x | |\n\n@@@\n\n@@@div { .simple-block }\n\n
\nJSON\n\n
\n*Arguments: path, data, paginate*\n\nThe json directive can be used to expose static data or mocked data. The first usage is to define raw stringified JSON in the `data` argument. The second usage is to set data in the predefined data field of the GraphQL composer plugin and to specify a path in the `path` argument.\n\n**Example**\n```js\ntype Query {\n users_from_raw_data: [User] @json(data: \"[{\\\"firstname\\\":\\\"Foo\\\",\\\"name\\\":\\\"Bar\\\"}]\")\n users_from_predefined_data: [User] @json(path: \"users\")\n}\n```\n@@@\n\n@@@div { .simple-block }\n\n
\nMock\n\n
\n*Arguments: url*\n\nThe mock directive is to be used with the Mock Responses plugin, also named `Charlatan`. This directive is useful to mock your schema and start using your Otoroshi route before the underlying service is developed.\n\n**Example**\n```js\ntype Query {\n users: [User] @mock(url: \"/users\")\n}\n```\n\nThis example supposes that the Mock Responses plugin is set up on the route, and that an endpoint `/users` is available.\n\n@@@\n\n### List of directive arguments\n\n| Argument | Type | Optional | Default value |\n| ------------------ | ---------------- | --------------------------- | ------------- |\n| url | *STRING* | | |\n| method | *STRING* | x | GET |\n| headers | *STRING* | x | |\n| timeout | *INT* | x | 5000 |\n| data | *STRING* | x | |\n| path | *STRING* | x (only for json directive) | |\n| query | *STRING* | x | |\n| response_path | *STRING* | x | |\n| response_filter | *STRING* | x | |\n| limit | *INT* | x | |\n| offset | *INT* | x | |\n| value | *STRING* | | |\n| values | LIST of *STRING* | | |\n| path | *STRING* | (only for authorize directive) | |\n| paginate | *BOOLEAN* | x | |\n| unauthorized_value | *STRING* | x (only for permissions directive) | |\n"},{"name":"http3.md","id":"/topics/http3.md","url":"/topics/http3.html","title":"HTTP3 support","content":"# HTTP3 support\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nHTTP3 server and client previews are available in Otoroshi since version 1.5.14.\n\n\n## Server\n\nTo enable the HTTP3 server preview, you need to enable the following flags:\n\n```conf\notoroshi.next.experimental.netty-server.enabled = true\notoroshi.next.experimental.netty-server.http3.enabled = true\notoroshi.next.experimental.netty-server.http3.port = 10048\n```\n\nThen you will be able to send HTTP3 requests on port 10048. 
For instance, using [quiche-client](https://github.com/cloudflare/quiche)\n\n```sh\ncargo run --bin quiche-client -- --no-verify 'https://my-service.oto.tools:10048'\n```\n\n## Client\n\nTo consume services exposed with HTTP3, just select the `HTTP/3.0` protocol in the backend target."},{"name":"index.md","id":"/topics/index.md","url":"/topics/index.html","title":"Detailed topics","content":"# Detailed topics\n\nIn this section, you will find information about various Otoroshi topics \n\n* @ref:[Proxy engine](./engine.md)\n* @ref:[WASM support](./wasm-usage.md)\n* @ref:[Chaos engineering](./chaos-engineering.md)\n* @ref:[TLS](./tls.md)\n* @ref:[Otoroshi's PKI](./pki.md)\n* @ref:[Monitoring](./monitoring.md)\n* @ref:[Events and analytics](./events-and-analytics.md)\n* @ref:[Developer portal with Daikoku](./dev-portal.md)\n* @ref:[Sessions management](./sessions-mgmt.md)\n* @ref:[The Otoroshi communication protocol](./otoroshi-protocol.md)\n* @ref:[Expression language](./expression-language.md)\n* @ref:[Otoroshi user rights](./user-rights.md)\n* @ref:[GraphQL composer](./graphql-composer.md)\n* @ref:[Secret vaults](./secrets.md)\n* @ref:[Otoroshi tunnels](./tunnels.md)\n* @ref:[Relay routing](./relay-routing.md)\n* @ref:[Alternative http backend](./netty-server.md)\n* @ref:[HTTP3 support](./http3.md)\n* @ref:[Anonymous reporting](./anonymous-reporting.md)\n\n@@@ index\n\n* [Proxy engine](./engine.md)\n* [WASM support](./wasm-usage.md)\n* [Chaos engineering](./chaos-engineering.md)\n* [TLS](./tls.md)\n* [Otoroshi's PKI](./pki.md)\n* [Monitoring](./monitoring.md)\n* [Events and analytics](./events-and-analytics.md)\n* [Developer portal with Daikoku](./dev-portal.md)\n* [Sessions management](./sessions-mgmt.md)\n* [The Otoroshi communication protocol](./otoroshi-protocol.md)\n* [Expression language](./expression-language.md)\n* [Otoroshi user rights](./user-rights.md)\n* [GraphQL composer](./graphql-composer.md)\n* [Secret vaults](./secrets.md)\n* [Otoroshi 
tunnels](./tunnels.md)\n* [Relay routing](./relay-routing.md)\n* [Alternative http backend](./netty-server.md)\n* [HTTP3 support](./http3.md)\n* [Anonymous reporting](./anonymous-reporting.md)\n \n@@@\n"},{"name":"monitoring.md","id":"/topics/monitoring.md","url":"/topics/monitoring.html","title":"Monitoring","content":"# Monitoring\n\nThe Otoroshi API exposes several endpoints to know more about instance health. All the following endpoints are exposed on the instance host through its IP address. They are also exposed on the Otoroshi API hostname and the Otoroshi backoffice hostname\n\n* `/health`: the health of the Otoroshi instance\n* `/metrics`: the metrics of the Otoroshi instance, either in JSON or Prometheus format using the `Accept` header (with `application/json` / `application/prometheus` values) or the `format` query param (with `json` or `prometheus` values)\n* `/live`: returns an HTTP 200 response `{\"live\": true}` when the service is alive\n* `/ready`: returns an HTTP 200 response `{\"ready\": true}` when the instance is ready to accept traffic (certs synced, plugins compiled, etc). If not, returns HTTP 503 `{\"ready\": false}`\n* `/startup`: returns an HTTP 200 response `{\"started\": true}` when the instance is ready to accept traffic (certs synced, plugins compiled, etc). If not, returns HTTP 503 `{\"started\": false}`\n\nThose routes are also available on any hostname leading to Otoroshi, with a twist in the URL\n\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/health\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/metrics\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/live\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/ready\n* http://xxxxxxxx.xxxxx.xx/.well-known/otoroshi/monitoring/startup\n\n## Endpoints security\n\nThese endpoints are exposed publicly on the Otoroshi admin API, but you can remove the corresponding public pattern and query the endpoints using standard apikeys. 
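For instance, assuming the public pattern has been removed and an apikey exists (the `my-apikey-id` / `my-apikey-secret` credentials below are placeholders, not real values), a sketch of such a call could look like:

```sh
# Hypothetical example: querying the health endpoint with a standard apikey
curl http://otoroshi-api.oto.tools:8080/health \
  -H "Otoroshi-Client-Id: my-apikey-id" \
  -H "Otoroshi-Client-Secret: my-apikey-secret"
```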
If you don't want to use apikeys but don't want to expose the endpoints publicly, you can define two config variables (`otoroshi.health.accessKey` or `HEALTH_ACCESS_KEY` and `otoroshi.metrics.accessKey` or `OTOROSHI_METRICS_ACCESS_KEY`) that will hold an access key for the endpoints. Then you can call the endpoints with an `access_key` query param holding the value defined in the config. If you define `otoroshi.health.accessKey` but not `otoroshi.metrics.accessKey`, `otoroshi.metrics.accessKey` will take the value of `otoroshi.health.accessKey`.\n \n## Examples\n\nLet's say `otoroshi.health.accessKey` has the value `MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY`\n\n```sh\n$ curl http://otoroshi-api.oto.tools:8080/health\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n{\"otoroshi\":\"healthy\",\"datastore\":\"healthy\"}\n\n$ curl -H 'Accept: application/json' http://otoroshi-api.oto.tools:8080/metrics\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n{\"version\":\"4.0.0\",\"gauges\":{\"attr.app.commit\":{\"value\":\"xxxx\"},\"attr.app.id\":{\"value\":\"xxxx\"},\"attr.cluster.mode\":{\"value\":\"Leader\"},\"attr.cluster.name\":{\"value\":\"otoroshi-leader-0\"},\"attr.instance.env\":{\"value\":\"prod\"},\"attr.instance.id\":{\"value\":\"xxxx\"},\"attr.instance.number\":{\"value\":\"0\"},\"attr.jvm.cpu.usage\":{\"value\":136},\"attr.jvm.heap.size\":{\"value\":1409},\"attr.jvm.heap.used\":{\"value\":112},\"internals.0.concurrent-requests\":{\"value\":1},\"internals.global.throttling-quotas\":{\"value\":2},\"jvm.attr.name\":{\"value\":\"2085@xxxx\"},\"jvm.attr.uptime\":{\"value\":2296900},\"jvm.attr.vendor\":{\"value\":\"JDK11\"},\"jvm.gc.PS-MarkSweep.count\":{\"value\":3},\"jvm.gc.PS-MarkSweep.time\":{\"value\":261},\"jvm.gc.PS-Scavenge.count\":{\"value\":12},\"jvm.gc.PS-Scavenge.time\":{\"value\":161},\"jvm.memory.heap.committed\":{\"value\":1477967872},\"jvm.memory.heap.init\":{\"val
ue\":1690304512},\"jvm.memory.heap.max\":{\"value\":3005218816},\"jvm.memory.heap.usage\":{\"value\":0.03916456777568639},\"jvm.memory.heap.used\":{\"value\":117698096},\"jvm.memory.non-heap.committed\":{\"value\":166445056},\"jvm.memory.non-heap.init\":{\"value\":7667712},\"jvm.memory.non-heap.max\":{\"value\":994050048},\"jvm.memory.non-heap.usage\":{\"value\":0.1523920694986979},\"jvm.memory.non-heap.used\":{\"value\":151485344},\"jvm.memory.pools.CodeHeap-'non-nmethods'.committed\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-nmethods'.max\":{\"value\":5832704},\"jvm.memory.pools.CodeHeap-'non-nmethods'.usage\":{\"value\":0.28408093398876405},\"jvm.memory.pools.CodeHeap-'non-nmethods'.used\":{\"value\":1656960},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.committed\":{\"value\":11796480},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.max\":{\"value\":122912768},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.usage\":{\"value\":0.09536102872567315},\"jvm.memory.pools.CodeHeap-'non-profiled-nmethods'.used\":{\"value\":11721088},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.committed\":{\"value\":37355520},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.init\":{\"value\":2555904},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.max\":{\"value\":122912768},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.usage\":{\"value\":0.2538573047187417},\"jvm.memory.pools.CodeHeap-'profiled-nmethods'.used\":{\"value\":31202304},\"jvm.memory.pools.Compressed-Class-Space.committed\":{\"value\":14942208},\"jvm.memory.pools.Compressed-Class-Space.init\":{\"value\":0},\"jvm.memory.pools.Compressed-Class-Space.max\":{\"value\":367001600},\"jvm.memory.pools.Compressed-Class-Space.usage\":{\"value\":0.033858838762555805},\"jvm.memory.pools.Compressed-Class-Space.used\":{\"value\":12426248},\"jvm.memory.pools.Metasp
ace.committed\":{\"value\":99794944},\"jvm.memory.pools.Metaspace.init\":{\"value\":0},\"jvm.memory.pools.Metaspace.max\":{\"value\":375390208},\"jvm.memory.pools.Metaspace.usage\":{\"value\":0.25168142904782426},\"jvm.memory.pools.Metaspace.used\":{\"value\":94478744},\"jvm.memory.pools.PS-Eden-Space.committed\":{\"value\":349700096},\"jvm.memory.pools.PS-Eden-Space.init\":{\"value\":422576128},\"jvm.memory.pools.PS-Eden-Space.max\":{\"value\":1110966272},\"jvm.memory.pools.PS-Eden-Space.usage\":{\"value\":0.07505125052077188},\"jvm.memory.pools.PS-Eden-Space.used\":{\"value\":83379408},\"jvm.memory.pools.PS-Eden-Space.used-after-gc\":{\"value\":0},\"jvm.memory.pools.PS-Old-Gen.committed\":{\"value\":1127219200},\"jvm.memory.pools.PS-Old-Gen.init\":{\"value\":1127219200},\"jvm.memory.pools.PS-Old-Gen.max\":{\"value\":2253914112},\"jvm.memory.pools.PS-Old-Gen.usage\":{\"value\":0.014950035505168354},\"jvm.memory.pools.PS-Old-Gen.used\":{\"value\":33696096},\"jvm.memory.pools.PS-Old-Gen.used-after-gc\":{\"value\":23791152},\"jvm.memory.pools.PS-Survivor-Space.committed\":{\"value\":1048576},\"jvm.memory.pools.PS-Survivor-Space.init\":{\"value\":70254592},\"jvm.memory.pools.PS-Survivor-Space.max\":{\"value\":1048576},\"jvm.memory.pools.PS-Survivor-Space.usage\":{\"value\":0.59375},\"jvm.memory.pools.PS-Survivor-Space.used\":{\"value\":622592},\"jvm.memory.pools.PS-Survivor-Space.used-after-gc\":{\"value\":622592},\"jvm.memory.total.committed\":{\"value\":1644412928},\"jvm.memory.total.init\":{\"value\":1697972224},\"jvm.memory.total.max\":{\"value\":3999268864},\"jvm.memory.total.used\":{\"value\":269184904},\"jvm.thread.blocked.count\":{\"value\":0},\"jvm.thread.count\":{\"value\":82},\"jvm.thread.daemon.count\":{\"value\":11},\"jvm.thread.deadlock.count\":{\"value\":0},\"jvm.thread.deadlocks\":{\"value\":[]},\"jvm.thread.new.count\":{\"value\":0},\"jvm.thread.runnable.count\":{\"value\":25},\"jvm.thread.terminated.count\":{\"value\":0},\"jvm.thread.timed_waiting.cou
nt\":{\"value\":10},\"jvm.thread.waiting.count\":{\"value\":47}},\"counters\":{},\"histograms\":{},\"meters\":{},\"timers\":{}}\n\n$ curl -H 'Accept: application/prometheus' http://otoroshi-api.oto.tools:8080/metrics\\?access_key\\=MILpkVv6f2kG9Xmnc4mFIYRU4rTxHVGkxvB0hkQLZwEaZgE2hgbOXiRsN1DBnbtY\n# TYPE attr_jvm_cpu_usage gauge\nattr_jvm_cpu_usage 83.0\n# TYPE attr_jvm_heap_size gauge\nattr_jvm_heap_size 1409.0\n# TYPE attr_jvm_heap_used gauge\nattr_jvm_heap_used 220.0\n# TYPE internals_0_concurrent_requests gauge\ninternals_0_concurrent_requests 1.0\n# TYPE internals_global_throttling_quotas gauge\ninternals_global_throttling_quotas 3.0\n# TYPE jvm_attr_uptime gauge\njvm_attr_uptime 2372614.0\n# TYPE jvm_gc_PS_MarkSweep_count gauge\njvm_gc_PS_MarkSweep_count 3.0\n# TYPE jvm_gc_PS_MarkSweep_time gauge\njvm_gc_PS_MarkSweep_time 261.0\n# TYPE jvm_gc_PS_Scavenge_count gauge\njvm_gc_PS_Scavenge_count 12.0\n# TYPE jvm_gc_PS_Scavenge_time gauge\njvm_gc_PS_Scavenge_time 161.0\n# TYPE jvm_memory_heap_committed gauge\njvm_memory_heap_committed 1.477967872E9\n# TYPE jvm_memory_heap_init gauge\njvm_memory_heap_init 1.690304512E9\n# TYPE jvm_memory_heap_max gauge\njvm_memory_heap_max 3.005218816E9\n# TYPE jvm_memory_heap_usage gauge\njvm_memory_heap_usage 0.07680553268571043\n# TYPE jvm_memory_heap_used gauge\njvm_memory_heap_used 2.30817432E8\n# TYPE jvm_memory_non_heap_committed gauge\njvm_memory_non_heap_committed 1.66510592E8\n# TYPE jvm_memory_non_heap_init gauge\njvm_memory_non_heap_init 7667712.0\n# TYPE jvm_memory_non_heap_max gauge\njvm_memory_non_heap_max 9.94050048E8\n# TYPE jvm_memory_non_heap_usage gauge\njvm_memory_non_heap_usage 0.15262878997416435\n# TYPE jvm_memory_non_heap_used gauge\njvm_memory_non_heap_used 1.51720656E8\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__committed gauge\njvm_memory_pools_CodeHeap__non_nmethods__committed 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__init gauge\njvm_memory_pools_CodeHeap__non_nmethods__init 
2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__max gauge\njvm_memory_pools_CodeHeap__non_nmethods__max 5832704.0\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__usage gauge\njvm_memory_pools_CodeHeap__non_nmethods__usage 0.28408093398876405\n# TYPE jvm_memory_pools_CodeHeap__non_nmethods__used gauge\njvm_memory_pools_CodeHeap__non_nmethods__used 1656960.0\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__committed gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__committed 1.1862016E7\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__init gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__init 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__max gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__max 1.22912768E8\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__usage gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__usage 0.09610562183417755\n# TYPE jvm_memory_pools_CodeHeap__non_profiled_nmethods__used gauge\njvm_memory_pools_CodeHeap__non_profiled_nmethods__used 1.1812608E7\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__committed gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__committed 3.735552E7\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__init gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__init 2555904.0\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__max gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__max 1.22912768E8\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__usage gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__usage 0.25493618368435084\n# TYPE jvm_memory_pools_CodeHeap__profiled_nmethods__used gauge\njvm_memory_pools_CodeHeap__profiled_nmethods__used 3.1334912E7\n# TYPE jvm_memory_pools_Compressed_Class_Space_committed gauge\njvm_memory_pools_Compressed_Class_Space_committed 1.4942208E7\n# TYPE jvm_memory_pools_Compressed_Class_Space_init gauge\njvm_memory_pools_Compressed_Class_Space_init 0.0\n# TYPE 
jvm_memory_pools_Compressed_Class_Space_max gauge\njvm_memory_pools_Compressed_Class_Space_max 3.670016E8\n# TYPE jvm_memory_pools_Compressed_Class_Space_usage gauge\njvm_memory_pools_Compressed_Class_Space_usage 0.03386023385184152\n# TYPE jvm_memory_pools_Compressed_Class_Space_used gauge\njvm_memory_pools_Compressed_Class_Space_used 1.242676E7\n# TYPE jvm_memory_pools_Metaspace_committed gauge\njvm_memory_pools_Metaspace_committed 9.9794944E7\n# TYPE jvm_memory_pools_Metaspace_init gauge\njvm_memory_pools_Metaspace_init 0.0\n# TYPE jvm_memory_pools_Metaspace_max gauge\njvm_memory_pools_Metaspace_max 3.75390208E8\n# TYPE jvm_memory_pools_Metaspace_usage gauge\njvm_memory_pools_Metaspace_usage 0.25170985813247426\n# TYPE jvm_memory_pools_Metaspace_used gauge\njvm_memory_pools_Metaspace_used 9.4489416E7\n# TYPE jvm_memory_pools_PS_Eden_Space_committed gauge\njvm_memory_pools_PS_Eden_Space_committed 3.49700096E8\n# TYPE jvm_memory_pools_PS_Eden_Space_init gauge\njvm_memory_pools_PS_Eden_Space_init 4.22576128E8\n# TYPE jvm_memory_pools_PS_Eden_Space_max gauge\njvm_memory_pools_PS_Eden_Space_max 1.110966272E9\n# TYPE jvm_memory_pools_PS_Eden_Space_usage gauge\njvm_memory_pools_PS_Eden_Space_usage 0.17698545577448457\n# TYPE jvm_memory_pools_PS_Eden_Space_used gauge\njvm_memory_pools_PS_Eden_Space_used 1.96624872E8\n# TYPE jvm_memory_pools_PS_Eden_Space_used_after_gc gauge\njvm_memory_pools_PS_Eden_Space_used_after_gc 0.0\n# TYPE jvm_memory_pools_PS_Old_Gen_committed gauge\njvm_memory_pools_PS_Old_Gen_committed 1.1272192E9\n# TYPE jvm_memory_pools_PS_Old_Gen_init gauge\njvm_memory_pools_PS_Old_Gen_init 1.1272192E9\n# TYPE jvm_memory_pools_PS_Old_Gen_max gauge\njvm_memory_pools_PS_Old_Gen_max 2.253914112E9\n# TYPE jvm_memory_pools_PS_Old_Gen_usage gauge\njvm_memory_pools_PS_Old_Gen_usage 0.014950035505168354\n# TYPE jvm_memory_pools_PS_Old_Gen_used gauge\njvm_memory_pools_PS_Old_Gen_used 3.3696096E7\n# TYPE jvm_memory_pools_PS_Old_Gen_used_after_gc 
gauge\njvm_memory_pools_PS_Old_Gen_used_after_gc 2.3791152E7\n# TYPE jvm_memory_pools_PS_Survivor_Space_committed gauge\njvm_memory_pools_PS_Survivor_Space_committed 1048576.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_init gauge\njvm_memory_pools_PS_Survivor_Space_init 7.0254592E7\n# TYPE jvm_memory_pools_PS_Survivor_Space_max gauge\njvm_memory_pools_PS_Survivor_Space_max 1048576.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_usage gauge\njvm_memory_pools_PS_Survivor_Space_usage 0.59375\n# TYPE jvm_memory_pools_PS_Survivor_Space_used gauge\njvm_memory_pools_PS_Survivor_Space_used 622592.0\n# TYPE jvm_memory_pools_PS_Survivor_Space_used_after_gc gauge\njvm_memory_pools_PS_Survivor_Space_used_after_gc 622592.0\n# TYPE jvm_memory_total_committed gauge\njvm_memory_total_committed 1.644478464E9\n# TYPE jvm_memory_total_init gauge\njvm_memory_total_init 1.697972224E9\n# TYPE jvm_memory_total_max gauge\njvm_memory_total_max 3.999268864E9\n# TYPE jvm_memory_total_used gauge\njvm_memory_total_used 3.82665128E8\n# TYPE jvm_thread_blocked_count gauge\njvm_thread_blocked_count 0.0\n# TYPE jvm_thread_count gauge\njvm_thread_count 82.0\n# TYPE jvm_thread_daemon_count gauge\njvm_thread_daemon_count 11.0\n# TYPE jvm_thread_deadlock_count gauge\njvm_thread_deadlock_count 0.0\n# TYPE jvm_thread_new_count gauge\njvm_thread_new_count 0.0\n# TYPE jvm_thread_runnable_count gauge\njvm_thread_runnable_count 25.0\n# TYPE jvm_thread_terminated_count gauge\njvm_thread_terminated_count 0.0\n# TYPE jvm_thread_timed_waiting_count gauge\njvm_thread_timed_waiting_count 10.0\n# TYPE jvm_thread_waiting_count gauge\njvm_thread_waiting_count 47.0\n```"},{"name":"netty-server.md","id":"/topics/netty-server.md","url":"/topics/netty-server.html","title":"Alternative HTTP server","content":"# Alternative HTTP server\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nwith the change of licence in Akka, we are experimenting around using Netty as http server for otoroshi 
(and getting rid of akka http)\n\nin `v1.5.14` we are introducing a new alternative HTTP server based on [`reactor-netty`](https://projectreactor.io/docs/netty/release/reference/index.html). It also includes a preview of an HTTP3 server using [netty-incubator-codec-quic](https://github.com/netty/netty-incubator-codec-quic) and [netty-incubator-codec-http3](https://github.com/netty/netty-incubator-codec-http3)\n\n## The specs\n\nthis new server can start during the otoroshi boot sequence, accepts HTTP/1.1 (with and without TLS), H2C and H2 connections, and supports both standard HTTP calls and websocket calls.\n\n## Enable the server\n\nto enable the server, just turn on the following flag\n\n```conf\notoroshi.next.experimental.netty-server.enabled = true\n```\n\nnow you should see something like the following in the logs\n\n```log\n...\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)\nroot [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)\nroot [info] otoroshi-experimental-netty-server -\n...\n```\n\n## Server options\n\nyou can also set up the host and ports of the server using\n\n```conf\notoroshi.next.experimental.netty-server.host = \"0.0.0.0\"\notoroshi.next.experimental.netty-server.http-port = 10049\notoroshi.next.experimental.netty-server.https-port = 10048\n```\n\nyou can also enable access logs using\n\n```conf\notoroshi.next.experimental.netty-server.accesslog = true\n```\n\nand enable wiretapping using\n\n```conf\notoroshi.next.experimental.netty-server.wiretap = true\n```\n\nyou can also customize the number of worker threads using\n\n```conf\notoroshi.next.experimental.netty-server.thread = 0 # the system automatically assigns the right number of threads\n```\n\n## 
HTTP2\n\nyou can enable or disable HTTP2 with\n\n```conf\notoroshi.next.experimental.netty-server.http2.enabled = true\notoroshi.next.experimental.netty-server.http2.h2c = true\n```\n\n## HTTP3\n\nyou can enable or disable HTTP3 (preview ;) ) with\n\n```conf\notoroshi.next.experimental.netty-server.http3.enabled = true\notoroshi.next.experimental.netty-server.http3.port = 10048 # it can be the same as the https port because HTTP/3 runs on the UDP stack\n```\n\nthe result will be something like\n\n```log\n...\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/3)\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)\nroot [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)\nroot [info] otoroshi-experimental-netty-server -\n...\n```\n\n## Native transport\n\nIt is possible to enable native transport for the server\n\n```conf\notoroshi.next.experimental.netty-server.native.enabled = true\notoroshi.next.experimental.netty-server.native.driver = \"Auto\"\n```\n\npossible values for `otoroshi.next.experimental.netty-server.native.driver` are\n\n- `Auto`: the server tries to find the best native option available\n- `Epoll`: the server uses Epoll native transport for Linux environments\n- `KQueue`: the server uses KQueue native transport for macOS environments\n- `IOUring`: the server uses IOUring native transport for Linux environments that support it (experimental, using [netty-incubator-transport-io_uring](https://github.com/netty/netty-incubator-transport-io_uring))\n\nwhen starting on a Mac, the result will be something like\n\n```log\n...\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!\nroot [info] 
otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - using KQueue native transport\nroot [info] otoroshi-experimental-netty-server -\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/3)\nroot [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)\nroot [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)\nroot [info] otoroshi-experimental-netty-server -\n...\n```\n\n## Env. variables\n\nyou can configure the server using the following env. variables\n\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ENABLED`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NEW_ENGINE_ONLY`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HOST`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_PORT`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTPS_PORT`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_WIRETAP`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ACCESSLOG`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_THREADS`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_ALLOW_DUPLICATE_CONTENT_LENGTHS`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_VALIDATE_HEADERS`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_H_2_C_MAX_CONTENT_LENGTH`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_INITIAL_BUFFER_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_HEADER_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_INITIAL_LINE_LENGTH`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_CHUNK_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_ENABLED`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_H2C`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_ENABLED`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_PORT`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAMS_BIDIRECTIONAL`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_REMOTE`\n* 
`OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_LOCAL`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_DATA`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_MAX_RECV_UDP_PAYLOAD_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_MAX_SEND_UDP_PAYLOAD_SIZE`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_ENABLED`\n* `OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_DRIVER`\n\n"},{"name":"otoroshi-protocol.md","id":"/topics/otoroshi-protocol.md","url":"/topics/otoroshi-protocol.html","title":"The Otoroshi communication protocol","content":"# The Otoroshi communication protocol\n\nThe exchange protocol secures the communication with an app. When it's enabled, Otoroshi will send for each request a value in a pre-selected token header, and will check the same header in the response. On routes, you will have to use the `Otoroshi challenge token` plugin to enable it.\n\n### V1 challenge\n\nIf you enable secure communication for a given service with `V1 - simple values exchange` activated, you will have to add a filter on the target application that will take the `Otoroshi-State` header and return it in a header named `Otoroshi-State-Resp`. \n\n@@@ div { .centered-img }\n\n@@@\n\nyou can find an example project that implements the V1 challenge [here](https://github.com/MAIF/otoroshi/tree/master/demos/challenge)\n\n### V2 challenge\n\nIf you enable secure communication for a given service with `V2 - signed JWT token exchange` activated, you will have to add a filter on the target application that will take the `Otoroshi-State` header value containing a JWT token, verify its signature, then extract a claim named `state` and return a new JWT token in a header named `Otoroshi-State-Resp` with the `state` value in a claim named `state-resp`. By default, the signature algorithm is HMAC+SHA512 but you can choose your own. The sent and returned JWT tokens have a short TTL to avoid being replayed. 
You must validate the tokens' TTL. The audience of the response token must be `Otoroshi` and you have to specify `iat`, `nbf` and `exp`.\n\n@@@ div { .centered-img }\n\n@@@\n\nyou can find an example project that implements the V2 challenge [here](https://github.com/MAIF/otoroshi/tree/master/demos/challenge)\n\n### Info. token\n\nOtoroshi also sends a JWT token in a header named `Otoroshi-Claim` that the target app can validate too. On routes, you will have to use the `Otoroshi info. token` plugin to enable it.\n\nThe `Otoroshi-Claim` is a JWT token containing some information about the service that is called and the client, if available. You can choose between a legacy version of the token and a new one that is clearer and more structured.\n\nBy default, the otoroshi jwt token is signed with the `otoroshi.claim.sharedKey` config property (or using the `$CLAIM_SHAREDKEY` env. variable) and uses the `HMAC512` signing algorithm. But it is possible to customize how the token is signed from the service descriptor page in the `Otoroshi exchange protocol` section. \n\n@@@ div { .centered-img }\n\n@@@\n\nusing another signing algo.\n\n@@@ div { .centered-img }\n\n@@@\n\nhere you can choose the signing algorithm and the secret/keys used. You can use syntax like `${env.MY_ENV_VAR}` or `${config.my.config.path}` to provide secret/key values. 
\n\nFor example, for a service named `my-service` with a signing key `secret` and the `HMAC512` signing algorithm, the basic JWT token that will be sent should look like the following\n\n```\neyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJzdWIiOiItLSIsImF1ZCI6Im15LXNlcnZpY2UiLCJpc3MiOiJPdG9yb3NoaSIsImV4cCI6MTUyMTQ0OTkwNiwiaWF0IjoxNTIxNDQ5ODc2LCJqdGkiOiI3MTAyNWNjMTktMmFjNy00Yjk3LTljYzctMWM0ODEzYmM1OTI0In0.mRcfuFVFPLUV1FWHyL6rLHIJIu0KEpBkKQCk5xh-_cBt9cb6uD6enynDU0H1X2VpW5-bFxWCy4U4V78CbAQv4g\n```\n\nif you decode it, the payload will look something like\n\n```json\n{\n \"sub\": \"apikey_client_id\",\n \"aud\": \"my-service\",\n \"iss\": \"Otoroshi\",\n \"exp\": 1521449906,\n \"iat\": 1521449876,\n \"jti\": \"71025cc19-2ac7-4b97-9cc7-1c4813bc5924\"\n}\n```\n\nIf you want to validate the `Otoroshi-Claim` on the target app side to ensure that incoming requests only come from `Otoroshi`, you will have to write an HTTP filter to do the job. For instance, you can write something like the following (using playframework 2.6).\n\nScala\n: @@snip [filter.scala](../snippets/filter.scala)\n\nJava\n: @@snip [filter.java](../snippets/filter.java)\n"},{"name":"pki.md","id":"/topics/pki.md","url":"/topics/pki.html","title":"Otoroshi's PKI","content":"# Otoroshi's PKI\n\nWith Otoroshi, you can add your own certificates, your own CA and even create self-signed certificates or certificates from CAs. You can enable auto-renewal of those self-signed or generated certificates. Certificates have to be created with the certificate chain and the private key in PEM format.\n\nAn Otoroshi instance always starts with 5 auto-generated certificates. \n\nThe highest certificate is the **Otoroshi Default Root CA Certificate**. 
This certificate is used by Otoroshi to sign the intermediate CA.\n\n**Otoroshi Default Intermediate CA Certificate**: first intermediate CA that must be used to issue new certificates in Otoroshi. Creating certificates directly from the root CA certificate increases the risk of root certificate compromise, and if the root CA certificate is compromised, the entire trust infrastructure built by the SSL provider will fail.\n\nThis intermediate CA signs three certificates:\n\n* **Otoroshi Default Client certificate**: \n* **Otoroshi Default Jwt Signing Keypair**: default keypair (composed of a public and private key), exposed on `https://xxxxxx/.well-known/jwks.json`, that can be used to sign and verify JWT tokens\n* **Otoroshi Default Wildcard Certificate**: this certificate has `*.oto.tools` as common name. It can be very useful during the development phase\n\n## The PKI API\n\nOtoroshi's PKI can be managed using the otoroshi admin api (by default, the admin api is exposed on https://otoroshi-api.xxxxx)\n\nLink to the complete swagger section about PKI: https://maif.github.io/otoroshi/swagger-ui/index.html#/pki\n\n* `POST` [/api/pki/certs/_letencrypt](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genLetsEncryptCert): generates a certificate using Let's Encrypt or any ACME-compatible system\n* `POST` [/api/pki/certs/_p12](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.importCertFromP12): imports a .p12 file as client certificates\n* `POST` [/api/pki/certs/_valid](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.certificateIsValid): checks if a certificate is valid (based on its own data)\n* `POST` [/api/pki/certs/_data](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.certificateData): extracts data from a certificate\n* `POST` 
[/api/pki/certs](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genSelfSignedCert): generates a self-signed certificate\n* `POST` [/api/pki/csrs](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genCsr): generates a CSR\n* `POST` [/api/pki/keys](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genKeyPair): generates a keypair\n* `POST` [/api/pki/cas](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genSelfSignedCA): generates a self-signed CA\n* `POST` [/api/pki/cas/:ca/certs/_sign](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.signCert): signs a certificate based on a CSR\n* `POST` [/api/pki/cas/:ca/certs](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genCert): generates a certificate\n* `POST` [/api/pki/cas/:ca/cas](https://maif.github.io/otoroshi/swagger-ui/index.html#/pki/otoroshi.controllers.adminapi.PkiController.genSubCA): generates a sub-CA\n\n## The PKI UI\n\nAll generated certificates are listed on the `https://xxxxxx/bo/dashboard/certificates` page. All those certificates can be used to serve traffic with TLS, perform mTLS calls, and sign and verify JWT tokens.\n\nThe PKI UI is composed of the following actions:\n\n* **Add item**: redirects the user to the certificate creation page. It’s useful when you already have a certificate (like a pem file) and want to load it into Otoroshi.\n* **Let's Encrypt certificate**: requests a certificate matching a given host from Let’s Encrypt\n* **Create certificate**: issues a certificate with an existing Otoroshi certificate as CA. 
You can create a client certificate, a server certificate or a keypair certificate that will be used to verify and sign JWT tokens.\n* **Import .p12 file**: loads a .p12 file as a certificate\n\nUnder these buttons, you have the list of current certificates, imported or generated, revoked or not. For each certificate, you will find: \n\n* a **name** \n* a **description** \n* the **subject** \n* the **type** of certificate (CA / client / keypair / certificate)\n* the **revoked reason** (empty if not) \n* the **creation date** followed by its **expiration date**.\n\n## Exposed public keys\n\nAn Otoroshi certificate can be turned into and used as a keypair (a simple action that can be executed by editing a certificate, during its creation, or using the admin api). An Otoroshi keypair can be used to sign and verify JWT tokens with an asymmetric signature. Once a jwt token is signed with a keypair, it can be necessary to provide a way for services to verify the tokens received from Otoroshi. This usage is covered by Otoroshi with the flag `Public key exposed`, available on each certificate.\n\nOtoroshi exposes each keypair with the flag enabled, on the following routes:\n\n* `https://xxxxxxxxx.xxxxxxx.xx/.well-known/otoroshi/security/jwks.json`\n* `https://otoroshi-api.xxxxxxx.xx/.well-known/jwks.json`\n\nOn these routes, you will find the list of public keys exposed using [the JWK standard](https://datatracker.ietf.org/doc/html/rfc7517)\n\n## OCSP Responder\n\nOtoroshi is able to revoke a certificate, directly from the UI, and to add a revocation status to specify the reason. 
The revocation reason can be:\n\n* `VALID`: The certificate is not revoked\n* `UNSPECIFIED`: Can be used to revoke certificates for reasons other than the specific codes.\n* `KEY_COMPROMISE`: It is known or suspected that the subject's private key or other aspects have been compromised.\n* `CA_COMPROMISE`: It is known or suspected that the CA's private key or other aspects have been compromised.\n* `AFFILIATION_CHANGED`: The subject's name or other information in the certificate has been modified but there is no cause to suspect that the private key has been compromised.\n* `SUPERSEDED`: The certificate has been superseded but there is no cause to suspect that the private key has been compromised.\n* `CESSATION_OF_OPERATION`: The certificate is no longer needed for the purpose for which it was issued but there is no cause to suspect that the private key has been compromised.\n* `CERTIFICATE_HOLD`: The certificate is temporarily revoked but there is no cause to suspect that the private key has been compromised.\n* `REMOVE_FROM_CRL`: The certificate has been unrevoked.\n* `PRIVILEGE_WITH_DRAWN`: The certificate was revoked because a privilege contained within that certificate has been withdrawn.\n* `AA_COMPROMISE`: It is known or suspected that aspects of the AA validated in the attribute certificate have been compromised.\n\nOtoroshi supports the Online Certificate Status Protocol for obtaining the revocation status of its certificates. The OCSP endpoint is also added to any generated certificate. This endpoint is available at `https://otoroshi-api.xxxxxx/.well-known/otoroshi/security/ocsp`\n\n## A.I.A : Authority Information Access\n\nOtoroshi provides a way to add the A.I.A to the certificate. 
This certificate extension contains:\n\n* Information about how to get the issuer of this certificate (CA issuer access method)\n* Address of the OCSP responder from where revocation of this certificate can be checked (OCSP access method)\n\n`https://xxxxxxxxxx/.well-known/otoroshi/security/certificates/:cert-id`"},{"name":"relay-routing.md","id":"/topics/relay-routing.md","url":"/topics/relay-routing.html","title":"Relay Routing","content":"# Relay Routing\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nRelay routing is the capability to forward traffic between otoroshi leader nodes based on the network location of the target. Let's say we have an otoroshi cluster split across 3 network zones. Each zone has \n\n- one or more datastore instances\n- one or more otoroshi leader instances\n- one or more otoroshi worker instances\n\nthe datastores are replicated across network zones in an active-active fashion. Each network zone also has applications, apis, etc. deployed. Sometimes the same application is deployed in multiple zones, sometimes not. \n\nit can quickly become a nightmare when you want to access an application deployed in one network zone from another network zone. You'll have to publicly expose this application to be able to access it from the other zone. This pattern is fine, but sometimes it's not enough. 
With `relay routing`, you will be able to flag your routes as being deployed in one zone or another, and let otoroshi handle all the heavy lifting to route the traffic to the right network zone for you.\n\n@@@ div { .centered-img }\n\n@@@\n\n\n@@@ warning { .margin-top-20 }\nthis feature may introduce additional latency as the call passes through relay nodes\n@@@\n\n## Otoroshi instance setup\n\nfirst of all, for every otoroshi instance deployed, you have to flag where the instance is deployed and, for leaders, how this instance can be contacted from other zones (this is a **MAJOR** requirement, without that, you won't be able to make relay routing work). Also, you'll have to enable the @ref:[new proxy engine](./engine.md).\n\nIn the otoroshi configuration file, for each instance, enable relay routing and configure where the instance is located and how the leader can be contacted\n\n```conf\notoroshi {\n ...\n cluster {\n mode = \"leader\" # or \"worker\" depending on the instance kind\n ...\n relay {\n enabled = true # enable relay routing\n leaderOnly = true # use leaders as the only kind of relay node\n location { # you can use all those parameters at the same time. 
There are no actual network concepts bound here, just some kind of tagging system, so you can use it as you wish\n provider = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_PROVIDER}\n zone = \"zone-1\"\n region = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_REGION}\n datacenter = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_DATACENTER}\n rack = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_RACK}\n }\n exposition {\n urls = [\"https://otoroshi-api-zone-1.my.domain:443\"]\n hostname = \"otoroshi-api-zone-1.my.domain\"\n clientId = \"apkid_relay-routing-apikey\"\n }\n }\n }\n}\n```\n\nalso, to make your leaders exposed by zone, do not hesitate to add domain names to the `otoroshi-admin-api` service and set up your DNS to bind those domains to the right place\n\n@@@ div { .centered-img }\n\n@@@\n\n## Route setup for an application deployed in only one zone\n\nNow, for any route/service deployed in only one zone, you will be able to flag it using its metadata as being deployed in one zone or another. The possible metadata keys are the following\n\n- `otoroshi-deployment-providers`\n- `otoroshi-deployment-regions`\n- `otoroshi-deployment-zones`\n- `otoroshi-deployment-dcs`\n- `otoroshi-deployment-racks`\n\nlet's say we set `otoroshi-deployment-zones=zone-1` on a route, if we call this route from an otoroshi instance where `otoroshi.cluster.relay.location.zone` is not `zone-1`, otoroshi will automatically forward the requests to an otoroshi leader node where `otoroshi.cluster.relay.location.zone` is `zone-1`\n\n## Route setup for an application deployed in multiple zones at the same time\n\nNow, for any route/service deployed in multiple zones at the same time, you will be able to flag it using its metadata as being deployed in some zones. 
The possible metadata keys are the following\n\n- `otoroshi-deployment-providers`\n- `otoroshi-deployment-regions`\n- `otoroshi-deployment-zones`\n- `otoroshi-deployment-dcs`\n- `otoroshi-deployment-racks`\n\nlet's say we set `otoroshi-deployment-zones=zone-1, zone-2` on a route, if we call this route from an otoroshi instance where `otoroshi.cluster.relay.location.zone` is not `zone-1` or `zone-2`, otoroshi will automatically forward the requests to an otoroshi leader node where `otoroshi.cluster.relay.location.zone` is `zone-1` or `zone-2` and load balance between them.\n\nalso, you will have to set up your targets to avoid trying to contact targets that are not actually in the current zone. To do that, you'll have to set the target predicate to `NetworkLocationMatch` and fill the possible locations according to the actual location of your target\n\n@@@ div { .centered-img }\n\n@@@\n\n## Demo\n\nyou can find a demo of this setup [here](https://github.com/MAIF/otoroshi/tree/master/demos/relay). This is a `docker-compose` setup with multiple networks to simulate network zones. You also have an otoroshi export to understand how to set up your routes/services\n"},{"name":"secrets.md","id":"/topics/secrets.md","url":"/topics/secrets.html","title":"Secrets management","content":"# Secrets management\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nSecrets are generally confidential values that should not appear in plain text in the application. There are several products that help you store, retrieve, and rotate these secrets securely. Otoroshi offers a mechanism to set up references to these secrets in its entities to benefit from the perks of your existing secrets management infrastructure. 
This feature only works with the @ref:[new proxy engine](./engine.md).\n\nA secret can be anything you want like an apikey secret, a certificate private key or password, a jwt verifier signing key, a password to a proxy, a value for a header, etc.\n\n## Enable secrets management in otoroshi\n\nBy default secrets management is disabled. You can enable it by setting `otoroshi.vaults.enabled` or `${OTOROSHI_VAULTS_ENABLED}` to `true`.\n\n## Global configuration\n\nSecrets management can only be configured using the otoroshi static configuration file (or the jvm args mechanism). \nThe configuration is located at `otoroshi.vaults` where you can find the global configuration of the secrets management system and the configurations for each enabled secrets management backend. Basically it looks like\n\n```conf\nvaults {\n enabled = false\n enabled = ${?OTOROSHI_VAULTS_ENABLED}\n secrets-ttl = 300000 # 5 minutes\n secrets-ttl = ${?OTOROSHI_VAULTS_SECRETS_TTL}\n cached-secrets = 10000\n cached-secrets = ${?OTOROSHI_VAULTS_CACHED_SECRETS}\n read-timeout = 10000 # 10 seconds\n read-timeout = ${?OTOROSHI_VAULTS_READ_TIMEOUT}\n # if enabled, only leader nodes fetch the secrets.\n # entities with secret values filled are then sent to workers when they poll the cluster state.\n # only works if `otoroshi.cluster.autoUpdateState=true`\n leader-fetch-only = false\n leader-fetch-only = ${?OTOROSHI_VAULTS_LEADER_FETCH_ONLY}\n env {\n type = \"env\"\n prefix = ${?OTOROSHI_VAULTS_ENV_PREFIX}\n }\n}\n```\n\nyou can see here the global configuration and a default backend configured that can retrieve secrets from environment variables. 
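For the default `env` backend, the reference path is mapped to an environment variable name. A hedged sketch of how a reference would resolve (the `MY_SECRETS_` prefix and `db_password` name are purely illustrative):

```conf
vaults {
  enabled = true
  my_env {
    type = "env"
    prefix = "MY_SECRETS_" # hypothetical prefix
  }
}
```

with this configuration, a reference like `${vault://my_env/db_password}` would be resolved against the environment variable `MY_SECRETS_DB_PASSWORD` (the path is uppercased and the configured prefix is prepended to the variable name).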
\n\nThe configuration keys are used as follows \n\n- `secrets-ttl`: the number of milliseconds before the secret value is read again from the backend\n- `cached-secrets`: the number of secrets that will be cached on an otoroshi instance\n- `read-timeout`: the timeout (in milliseconds) to read a secret from a backend\n\n## Entities with secrets management\n\nthe entities that support secrets management are the following \n\n- `routes`\n- `services`\n- `service_descriptors`\n- `apikeys`\n- `certificates`\n- `jwt_verifiers`\n- `authentication_modules`\n- `targets`\n- `backends`\n- `tcp_services`\n- `data_exporters`\n\n## Define a reference to a secret\n\nin the previously listed entities, you can define, almost everywhere, references to a secret using the following syntax:\n\n`${vault://name_of_the_vault/secret/of/the/path}`\n\nlet's say I define a new apikey with the following value as secret `${vault://my_env/apikey_secret}` with the following secrets management configuration\n\n```conf\nvaults {\n enabled = true\n secrets-ttl = 300000\n cached-secrets = 10000\n read-timeout = 10000\n my_env {\n type = \"env\"\n }\n}\n```\n\nif the machine running otoroshi has an environment variable named `APIKEY_SECRET` with the value `verysecret`, then you will be able to call an api with the defined apikey `client_id` and a `client_secret` value of `verysecret`\n\n```sh\ncurl 'http://my-awesome-api.oto.tools:8080/api/stuff' -u awesome_apikey:verysecret\n```\n\n## Possible backends\n\nOtoroshi comes with the support of several secrets management backends.\n\n### Environment variables\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"env\"\n prefix = \"the_prefix_added_to_the_name_of_the_env_variable\"\n }\n}\n```\n\n### Hashicorp Vault\n\na backend for [Hashicorp Vault](https://www.vaultproject.io/). 
Right now we only support KV engines.\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"hashicorp-vault\"\n url = \"http://127.0.0.1:8200\"\n mount = \"kv\" # the name of the secret store in vault\n kv = \"v2\" # the version of the kv store (v1 or v2)\n token = \"root\" # the token that can access your secrets\n }\n}\n```\n\nyou should define your references like `${vault://hashicorp_vault/secret/path/key_name}`.\n\n\n### Azure Key Vault\n\na backend for [Azure Key Vault](https://azure.microsoft.com/en-en/services/key-vault/). Right now we only support secrets and not keys and certificates.\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"azure\"\n url = \"https://keyvaultname.vault.azure.net\"\n api-version = \"7.2\" # the api version of the vault\n tenant = \"xxxx-xxx-xxx\" # your azure tenant id, optional\n client_id = \"xxxxx\" # your azure client_id\n client_secret = \"xxxxx\" # your azure client_secret\n # token = \"xxx\" possible if you have a long lived existing token. will take over tenant / client_id / client_secret\n }\n}\n```\n\nyou should define your references like `${vault://azure_vault/secret_name/secret_version}`. 
`secret_version` is mandatory\n\nIf you want to use certificate and key objects from the azure key vault, you will have to specify an option in the reference named `azure_secret_kind` with possible values `certificate`, `privkey`, `pubkey` like the following:\n\n```\n${vault://azure_vault/myprivatekey/secret_version?azure_secret_kind=privkey}\n```\n\n### AWS Secrets Manager\n\na backend for [AWS Secrets Manager](https://aws.amazon.com/en/secrets-manager/)\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"aws\"\n access-key = \"key\"\n access-key-secret = \"secret\"\n region = \"eu-west-3\" # the aws region of your secrets management\n }\n}\n```\n\nyou should define your references like `${vault://aws_vault/secret_name/secret_version}`. `secret_version` is optional\n\n### Google Cloud Secrets Manager\n\na backend for [Google Cloud Secrets Manager](https://cloud.google.com/secret-manager)\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"gcloud\"\n url = \"https://secretmanager.googleapis.com\"\n apikey = \"secret\"\n }\n}\n```\n\nyou should define your references like `${vault://gcloud_vault/projects/foo/secrets/bar/versions/the_version}`. 
`the_version` can be `latest`\n\n### AlibabaCloud Secrets Manager\n\na backend for [AlibabaCloud Secrets Manager](https://www.alibabacloud.com/help/en/doc-detail/152001.html)\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"alibaba-cloud\"\n url = \"https://kms.eu-central-1.aliyuncs.com\"\n access-key-id = \"access-key\"\n access-key-secret = \"secret\"\n }\n}\n```\n\nyou should define your references like `${vault://alibaba_vault/secret_name}`\n\n\n### Kubernetes Secrets\n\na backend for [Kubernetes secrets](https://kubernetes.io/en/docs/concepts/configuration/secret/)\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"kubernetes\"\n # see the configuration of the kubernetes plugin, \n # by default if the pod is well configured, \n # you don't have to set up anything\n }\n}\n```\n\nyou should define your references like `${vault://k8s_vault/namespace/secret_name/key_name}`. `key_name` is optional. if present, otoroshi will try to look up `key_name` in the secret's `stringData`; if not defined, the secret's `data` will be base64 decoded and used.\n\n\n### Izanami config.\n\na backend for [Izanami config.](https://maif.github.io/izanami/manual/)\n\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"izanami\"\n url = \"http://127.0.0.1:8200\"\n client-id = \"client\"\n client-secret = \"secret\"\n }\n}\n```\n\nyou should define your references like `${vault://izanami_vault/the:secret:id/key_name}`. 
`key_name` is optional if the secret value is not a json object\n\n### Spring Cloud Config\n\na backend for [Spring Cloud Config.](https://docs.spring.io/spring-cloud-config/docs/current/reference/html/)\n\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"spring-cloud\"\n url = \"http://127.0.0.1:8000\"\n root = \"myapp/prod\"\n headers {\n authorization = \"Basic xxxx\"\n }\n }\n}\n```\n\nyou should define your references like `${vault://spring_vault/the/path/of/the/value}` where `/the/path/of/the/value` is the path of the value.\n\n### Http backend\n\na backend that uses the result of an http endpoint\n\nthe configuration of this backend should be like\n\n```conf\nvaults {\n ...\n name_of_the_vault {\n type = \"http\"\n url = \"http://127.0.0.1:8000/endpoint/for/config\"\n headers {\n authorization = \"Basic xxxx\"\n }\n }\n}\n```\n\nyou should define your references like `${vault://http_vault/the/path/of/the/value}` where `/the/path/of/the/value` is the path of the value.\n"},{"name":"sessions-mgmt.md","id":"/topics/sessions-mgmt.md","url":"/topics/sessions-mgmt.html","title":"Sessions management","content":"# Sessions management\n\n## Admins\n\nAll users logged in to an Otoroshi instance are administrators. A user session is created for each successful connection to the UI. \n\nThese sessions are listed in the `Admin users sessions` (available in the cog icon menu or at this location of your instance `/bo/dashboard/sessions/admin`).\n\nAn admin user session is composed of: \n\n* `name`: the name of the connected user\n* `email`: the unique email\n* `Created at`: the creation date of the user session\n* `Expires at`: the date when the user session is dropped\n* `Profile`: user profile, in JSON format, containing name, email and other linked metadata\n* `Rights`: list of rules to authorize the connected user on each tenant and team.\n* `Discard session`: action to kill a session. 
On click, a modal will appear with the session ID\n\nIn the `Admin users sessions` page, you have two more actions:\n\n* `Discard all sessions`: kills all current sessions (including the session of the owner of this action)\n* `Discard old sessions`: kills all outdated sessions\n\n## Private apps\n\nEvery user logged in to a protected application has a private user session.\n\nThese sessions are listed in the `Private apps users sessions` (available in the cog icon menu or at this location of your instance `/bo/dashboard/sessions/private`).\n\nA private user session is composed of: \n\n* `name`: the name of the connected user\n* `email`: the unique email\n* `Created at`: the creation date of the user session\n* `Expires at`: the date when the user session is dropped\n* `Profile`: user profile, in JSON format, containing name, email and other linked metadata\n* `Meta.`: list of metadata added by the authentication module.\n* `Tokens`: list of tokens received from the identity provider used. In the case of a memory authentication, this part will remain empty.\n* `Discard session`: action to kill a session. On click, a modal will appear with the session ID\n"},{"name":"tls.md","id":"/topics/tls.md","url":"/topics/tls.html","title":"TLS","content":"# TLS\n\nas you might have understood, otoroshi can store TLS certificates and use them dynamically. It means that once a certificate is imported or created in otoroshi, you can immediately use it to serve http requests over TLS, to call https backends that require mTLS or that do not have certificates signed by a globally known authority.\n\n## TLS termination\n\nany certificate added to otoroshi with a valid `CN` and `SANs` can be used within seconds to serve https requests. If you do not provide a private key with a certificate chain, the certificate will only be trusted like a CA. 
If you want to perform mTLS calls on your otoroshi instance, do not forget to enable it (it is disabled by default for performance reasons as the TLS handshake is bigger with mTLS enabled)\n\n```sh\notoroshi.ssl.fromOutside.clientAuth=None|Want|Need\n```\n\nor using env. variables\n\n```sh\nSSL_OUTSIDE_CLIENT_AUTH=None|Want|Need\n```\n\n### TLS termination configuration\n\nYou can configure TLS termination statically using config. file or env. variables. Everything is available at `otoroshi.tls`\n\n```conf\notoroshi {\n tls {\n # the cipher suites used by otoroshi TLS termination\n cipherSuitesJDK11 = [\"TLS_AES_128_GCM_SHA256\", \"TLS_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_DHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_DHE_DSS_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_DHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_DHE_DSS_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_RSA_WITH_AES_256_CBC_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_DHE_RSA_WITH_AES_256_CBC_SHA256\", \"TLS_DHE_DSS_WITH_AES_256_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_RSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDH_RSA_WITH_AES_256_CBC_SHA\", \"TLS_DHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_DHE_DSS_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256\", 
\"TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_DHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_RSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDH_RSA_WITH_AES_128_CBC_SHA\", \"TLS_DHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA\", \"TLS_EMPTY_RENEGOTIATION_INFO_SCSV\"]\n cipherSuitesJDK8 = [\"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_RSA_WITH_AES_256_CBC_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_DHE_RSA_WITH_AES_256_CBC_SHA256\", \"TLS_DHE_DSS_WITH_AES_256_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_RSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDH_RSA_WITH_AES_256_CBC_SHA\", \"TLS_DHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_DHE_DSS_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_DHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_RSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDH_RSA_WITH_AES_128_CBC_SHA\", \"TLS_DHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_DHE_DSS_WITH_AES_128_CBC_SHA\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_DHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_DHE_DSS_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", 
\"TLS_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_DHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_DHE_DSS_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA\", \"SSL_RSA_WITH_3DES_EDE_CBC_SHA\", \"TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA\", \"TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA\", \"SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA\", \"SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA\", \"TLS_EMPTY_RENEGOTIATION_INFO_SCSV\"]\n cipherSuites = []\n # the protocols used by otoroshi TLS termination\n protocolsJDK11 = [\"TLSv1.3\", \"TLSv1.2\", \"TLSv1.1\", \"TLSv1\"]\n protocolsJDK8 = [\"SSLv2Hello\", \"TLSv1\", \"TLSv1.1\", \"TLSv1.2\"]\n protocols = []\n # the JDK cacert access\n cacert {\n path = \"$JAVA_HOME/lib/security/cacerts\"\n password = \"changeit\"\n }\n # the mtls mode\n fromOutside {\n clientAuth = \"None\"\n clientAuth = ${?SSL_OUTSIDE_CLIENT_AUTH}\n }\n # the default trust mode\n trust {\n all = false\n all = ${?OTOROSHI_SSL_TRUST_ALL}\n }\n # some initial cacert access, useful to include non standard CA when starting (file paths)\n initialCacert = ${?CLUSTER_WORKER_INITIAL_CACERT}\n initialCacert = ${?INITIAL_CACERT}\n initialCert = ${?CLUSTER_WORKER_INITIAL_CERT}\n initialCert = ${?INITIAL_CERT}\n initialCertKey = ${?CLUSTER_WORKER_INITIAL_CERT_KEY}\n initialCertKey = ${?INITIAL_CERT_KEY}\n # initialCerts = [] \n }\n}\n```\n\n\n### TLS termination settings\n\nIt is possible to adjust the behavior of the TLS termination from the `danger zone` at the `Tls Settings` section. Here you can either define that a non-matching SNI call will use a random TLS certtificate to reply or will use a default domain (the TLS certificate associated to this domain) to reply. Here you can also choose if you want to trust all the CAs trusted by your JDK when performing TLS calls `Trust JDK CAs (client)` or when receiving mTLS calls `Trust JDK CAs (server)`. 
If you disable the latter, it is possible to select the list of CAs presented to the client during the mTLS handshake.\n\n### Certificates auto generation\n\nit is also possible to generate non-existing certificates on the fly without losing the request. If you are interested in this feature, you can enable it in the `danger zone` at the `Auto Generate Certificates` section. Here you'll have to enable it and select the CA that will generate the certificate. Of course, the client will have to trust the selected CA. You can also add filters to choose which domains are allowed to generate certificates or not. The `Reply Nicely` flag is used to reply with a nice error message (i.e. human readable) telling that it's not possible to have an auto-generated certificate for the current domain. \n\n## Backends TLS and mTLS calls\n\nFor any call to a backend, it is possible to customize the TLS behavior \n\n@@@ div { .centered-img }\n\n@@@\n\nhere you can define your level of trust (trust all, loose verification) or even select one or more CAs you will trust for the following backend calls. You can also select the client certificate that will be used for the following backend calls\n\n## Keypair for signing and verification\n\nIt is also possible to use the keypair contained in a certificate to sign and verify JWT token signatures. You can mark an existing certificate in otoroshi as a keypair using the `keypair` flag on the certificate page.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"tunnels.md","id":"/topics/tunnels.md","url":"/topics/tunnels.html","title":"Otoroshi tunnels","content":"# Otoroshi tunnels\n\n@@include[experimental.md](../includes/experimental.md) { .experimental-feature }\n\nSometimes, exposing apis that live in our private network can be a nightmare, especially from a networking point of view. 
\nWith otoroshi tunnels, this is now trivial, as long as your internal otoroshi (that lives inside your private network) is able to contact an external otoroshi (exposed on the internet).\n\n@@@ warning { .margin-top-20 }\nYou have to enable cluster mode (Leader or Worker) to make this feature work. As this feature is experimental, we only support simple http requests right now. Server Sent Events and Websocket requests are not supported at the moment.\n@@@\n\n## How Otoroshi tunnels work\n\nthe main idea behind otoroshi tunnels is that the connection between your private network and the public network is initiated by the private network side. You don't have to expose a part of your private network, create a DMZ or whatever, you just have to authorize your private network otoroshi instance to contact your public network otoroshi instance.\n\n@@@ div { .centered-img }\n\n@@@\n\nonce the persistent tunnel has been created, you can create routes on the public otoroshi instance that use the otoroshi `Remote tunnel calls` plugin to target your remote routes through the designated tunnel instance \n\n\n@@@ div { .centered-img }\n\n@@@\n\n@@@ warning { .margin-top-20 }\nthis feature may introduce additional latency as the call passes through otoroshi tunnels\n@@@\n\n## Otoroshi tunnel example\n\nfirst you have to enable the tunnels feature in your otoroshi configuration (on both public and private instances)\n\n```conf\notoroshi {\n ...\n tunnels {\n enabled = true\n enabled = ${?OTOROSHI_TUNNELS_ENABLED}\n ...\n }\n}\n```\n\nthen you can set up a tunnel instance on your private instance to contact your public instance\n\n```conf\notoroshi {\n ...\n tunnels {\n enabled = true\n ...\n public-apis {\n id = \"public-apis\"\n name = \"public apis tunnel\"\n url = \"https://otoroshi-api.company.com:443\"\n host = \"otoroshi-api.company.com\"\n clientId = \"xxx\"\n clientSecret = \"xxxxxx\"\n # ipAddress = \"127.0.0.1\" # optional: ip address of the public instance admin api\n # tls { # 
optional: TLS settings to access the public instance admin api\n # ... \n # }\n # export-routes = true # optional: send routes information to remote otoroshi instance to facilitate remote route exposition\n # export-routes-tag = \"tunnel-exposed\" # optional: only send routes information if the route has this tag\n }\n }\n}\n```\n\nNow when your private otoroshi instance boots, a persistent tunnel will be established between the private and public instances. \nNow let's say you have a private api exposed on `api-a.company.local` on your private otoroshi instance and you want to expose it on your public otoroshi instance. \n\nFirst create a new route exposed on `api-a.company.com` that targets `https://api-a.company.local:443`\n\n@@@ div { .centered-img }\n\n@@@\n\nthen add the `Remote tunnel calls` plugin to your route and set the tunnel id to `public-apis` to match the id you set in the otoroshi config file\n\n@@@ div { .centered-img }\n\n@@@\n\nadd all the plugins you need to secure this brand new public api and call it\n\n```sh\ncurl \"https://api-a.company.com/users\" | jq\n```\n\n## Easily expose your remote services\n\nyou can see all the connected tunnel instances on an otoroshi instance on the `Connected tunnels` (`Cog icon` / `Connected tunnels`). For each tunnel instance you will be able to check the tunnel health and also to easily expose all the routes available on the other end of the tunnel. Just click on the `expose` button of the route you want to expose, and a new route will be created with the `Remote tunnel calls` plugin already set up.\n\n@@@ div { .centered-img }\n\n@@@\n"},{"name":"user-rights.md","id":"/topics/user-rights.md","url":"/topics/user-rights.html","title":"Otoroshi user rights","content":"# Otoroshi user rights\n\nIn Otoroshi, all users are considered **Administrators**. This choice is reinforced by the fact that Otoroshi is designed to be an administrator user interface and not an interface for users who simply want to view information. 
For this type of use, we encourage using the admin API rather than giving access to the user interface.\n\nThe Otoroshi rights are expressed as a list of authorizations on **organizations** and **teams**. \n\nLet's take an example where we want to authorize an administrator user on all organizations and teams.\n\nThe list of rights will be:\n\n```json\n[\n {\n \"tenant\": \"*:rw\", # (1)\n \"teams\": [\"*:rw\"] # (2)\n }\n]\n```\n\n* (1): this field, separated by a colon, indicates the name of the tenant and the associated rights. In our case, we set `*` to apply the rights to all tenants, and the `rw` to get the read and write access on them.\n* (2): the `teams` array field represents the list of rights, applied by team. The behaviour is the same as the tenant field, we define the team or the wildcard, followed by the rights\n\nif you want to have a user that is administrator only for one organization, the rights will be:\n\n```json\n[\n {\n \"tenant\": \"orga-1:rw\",\n \"teams\": [\"*:rw\"]\n }\n]\n```\n\nif you want to have a user that is administrator only for two organizations, the rights will be:\n\n```json\n[\n {\n \"tenant\": \"orga-1:rw\",\n \"teams\": [\"*:rw\"]\n },\n {\n \"tenant\": \"orga-2:rw\",\n \"teams\": [\"*:rw\"]\n }\n]\n```\n\nif you want to have a user that can only see 3 teams of one organization and one team in the other, the rights will be:\n\n```json\n[\n {\n \"tenant\": \"orga-1:rw\",\n \"teams\": [\n \"team-1:rw\",\n \"team-2:rw\",\n \"team-3:rw\"\n ]\n },\n {\n \"tenant\": \"orga-2:rw\",\n \"teams\": [\n \"team-4:rw\"\n ]\n }\n]\n```\n\nThe list of possible rights for an organization or a team is:\n\n* **r**: read access\n* **w**: write access\n* **not**: no access to the resource\n\nThe possible tenants and teams are the ones you created, plus the wildcard to define rights on all resources at once.\n\nThe user rights are defined by the @ref:[authentication 
modules](../entities/auth-modules.md).\n"},{"name":"wasm-usage.md","id":"/topics/wasm-usage.md","url":"/topics/wasm-usage.html","title":"Otoroshi and WASM","content":"# Otoroshi and WASM\n\nWebAssembly (WASM) is a simple machine model and executable format with an extensive specification. It is designed to be portable, compact, and execute at or near native speeds. Otoroshi already supports the execution of WASM files by providing different plugins that can be applied on routes. These plugins are:\n\n- `WasmRouteMatcher`: useful to define if a route can handle a request\n- `WasmPreRoute`: useful to check requests and extract useful stuff for the other plugins\n- `WasmAccessValidator`: useful to control access to a route (jump to the next section to learn more about it)\n- `WasmRequestTransformer`: transform the content of an incoming request (body, headers, etc ...)\n- `WasmBackend`: execute a WASM file as an Otoroshi target. Useful to implement user-defined logic and functions at the edge\n- `WasmResponseTransformer`: transform the content of the response produced by the target\n- `WasmSink`: create a sink plugin to handle unmatched requests\n- `WasmRequestHandler`: create a plugin that can handle the whole request lifecycle\n- `WasmJob`: create a job backed by a wasm function\n\nTo simplify the process of WASM creation and usage, Otoroshi provides:\n\n- otoroshi ui integration: a full set of plugins that let you pick which WASM function to run at any point in a route\n- otoroshi `wasm-manager`: a code editor in the browser that lets you write your plugin in `Rust`, `TinyGo`, `Javascript` or `Assembly Script` without having to think about compiling it to WASM (you can find a complete tutorial about it @ref:[here](../how-to-s/wasm-manager-installation.md))\n\n@@@ div { .centered-img }\n\n@@@\n\n## Available tutorials\n\nhere is the list of available tutorials about wasm in Otoroshi\n\n1. @ref:[install a wasm manager](../how-to-s/wasm-manager-installation.md)\n2. 
@ref:[use a wasm plugin](../how-to-s/wasm-usage.md)\n\n## Wasm plugins entities\n\nOtoroshi provides a dedicated entity for wasm plugins. Those entities make it easy to declare a wasm plugin with specific configuration only once and use it in multiple places. \n\nYou can find wasm plugin entities at `/bo/dashboard/wasm-plugins`\n\nIn a wasm plugin entity, you can define the source of your wasm plugin. You can choose between\n\n- `base64`: a base64 encoded wasm script\n- `file`: the path to a wasm script file\n- `http`: the url to a wasm script file\n- `wasm-manager`: the name of a wasm script compiled by a wasm manager instance\n\nthen you can define the number of memory pages available for each plugin instantiation, the name of the function you want to invoke, the config. map of the VM and if you want to keep a wasm vm alive during the request lifecycle to be able to reuse it in different plugin steps\n\n@@@ div { .centered-img }\n\n@@@\n\n## Otoroshi plugins api\n\nthe following parts illustrate the apis for the different plugins. Otoroshi uses [Extism](https://extism.org/) to handle content sharing between the JVM and the wasm VM. All structures are sent to/from the plugins as json strings. 
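Conceptually, each plugin function reads one json string and writes one back. Here is a minimal, self-contained sketch of that contract for an access-validator-style function (the `Host` object is stubbed out for illustration; in an actual plugin it is provided by the Extism PDK, and the real context contains more fields, as shown in the sections below):

```javascript
// Illustration of the json-string contract between otoroshi and a wasm plugin.
// In a real plugin, Host is provided by the Extism PDK; here it is stubbed
// so the whole round-trip can be shown end to end.
const Host = {
  _input: JSON.stringify({ request: { headers: { "x-internal": "true" } } }),
  _output: null,
  inputString() { return this._input; },
  outputString(s) { this._output = s; },
};

// a WasmAccessValidator-style function: parse the context passed by otoroshi,
// take a decision, and write the response back as a json string
function can_access() {
  const ctx = JSON.parse(Host.inputString());
  const allowed = ctx.request.headers["x-internal"] === "true";
  Host.outputString(JSON.stringify({
    result: allowed,
    error: allowed ? null : { message: "forbidden", status: 403 },
  }));
  return 0; // 0 tells otoroshi that the call went fine
}

can_access();
console.log(Host._output); // {"result":true,"error":null}
```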
\n\nfor instance, if we want to write a `WasmBackendCall` plugin using javascript, we could write something like\n\n```js\nfunction backend_call() {\n const input_str = Host.inputString(); // here we get the context passed by otoroshi as json string\n const backend_call_context = JSON.parse(input_str); // and parse it\n if (backend_call_context.path === '/hello') {\n Host.outputString(JSON.stringify({ // now we return a json string to otoroshi with the \"backend\" call result\n headers: { \n 'content-type': 'application/json' \n },\n body_json: { \n message: `Hello ${backend_call_context.request.query.name[0]}!` \n },\n status: 200,\n }));\n } else {\n Host.outputString(JSON.stringify({ // now we return a json string to otoroshi with the \"backend\" call result\n headers: { \n 'content-type': 'application/json' \n },\n body_json: { \n error: \"not found\"\n },\n status: 404,\n }));\n }\n return 0; // we return 0 to tell otoroshi that everything went fine\n}\n```\n\nthe following examples are written in rust. the rust macros provided by extism make the usage of `Host.inputString` and `Host.outputString` unnecessary. Remember that it's still used under the hood and that the structures are passed as json strings.\n\ndo not forget to add the extism pdk library to your project to make it compile\n\nCargo.toml\n: @@snip [Cargo.toml](../../../../../tools/otoroshi-wasm-manager/server/templates/rust/Cargo.toml) \n\ngo.mod\n: @@snip [go.mod](../../../../../tools/otoroshi-wasm-manager/server/templates/go/go.mod) \n\npackage.json\n: @@snip [package.json](../../../../../tools/otoroshi-wasm-manager/server/templates/js/package.json) \n\n### WasmRouteMatcher\n\nA route matcher is a plugin that can help the otoroshi router to select a route instance based on your own custom predicate. 
Basically it's a function that returns a boolean answer.\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn matches_route(Json(_context): Json<WasmMatchRouteContext>) -> FnResult<Json<WasmMatchRouteResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmMatchRouteContext {\n pub snowflake: Option<String>,\n pub route: Route,\n pub request: RawRequest,\n pub config: Value,\n pub attrs: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmMatchRouteResponse {\n pub result: bool,\n}\n```\n\n### WasmPreRoute\n\nA pre-route plugin can be used to short-circuit a request or enrich it (maybe extracting your own kind of auth. token, etc) at the very beginning of the request handling process, just after the routing part, when a route has been chosen by the otoroshi router.\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn pre_route(Json(_context): Json<WasmPreRouteContext>) -> FnResult<Json<WasmPreRouteResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmPreRouteContext {\n pub snowflake: Option<String>,\n pub route: Route,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmPreRouteResponse {\n pub error: bool,\n pub attrs: Option<HashMap<String, String>>,\n pub status: Option<u32>,\n pub headers: Option<HashMap<String, String>>,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n}\n```\n\n### WasmAccessValidator\n\nAn access validator plugin is typically used to verify if the request can continue or must be cancelled. 
For instance, the otoroshi apikey plugin is an access validator that checks if the current apikey provided by the client is legit and authorized on the current route.\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn can_access(Json(_context): Json<WasmAccessValidatorContext>) -> FnResult<Json<WasmAccessValidatorResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmAccessValidatorContext {\n pub snowflake: Option<String>,\n pub apikey: Option<Apikey>,\n pub user: Option<User>,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub route: Route,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmAccessValidatorError {\n pub message: String,\n pub status: u32,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmAccessValidatorResponse {\n pub result: bool,\n pub error: Option<WasmAccessValidatorError>,\n}\n```\n\n### WasmRequestTransformer\n\nA request transformer plugin can be used to compose or transform the request that will be sent to the backend\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn transform_request(Json(_context): Json<WasmRequestTransformerContext>) -> FnResult<Json<OtoroshiRequest>> {\n ///\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmRequestTransformerContext {\n pub snowflake: Option<String>,\n pub raw_request: OtoroshiRequest,\n pub otoroshi_request: OtoroshiRequest,\n pub backend: Backend,\n pub apikey: Option<Apikey>,\n pub user: Option<User>,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub route: Route,\n pub request_body_bytes: Option<Vec<u8>>,\n}\n```\n\n### WasmBackendCall\n\nA backend call plugin can be used to simulate a backend behavior in otoroshi. 
For instance the static backend of otoroshi returns the content of a file\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn call_backend(Json(_context): Json<WasmBackendContext>) -> FnResult<Json<WasmBackendResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmBackendContext {\n pub snowflake: Option<String>,\n pub backend: Backend,\n pub apikey: Option<Apikey>,\n pub user: Option<User>,\n pub raw_request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub route: Route,\n pub request_body_bytes: Option<Vec<u8>>,\n pub request: OtoroshiRequest,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmBackendResponse {\n pub headers: Option<HashMap<String, String>>,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n pub status: u32,\n}\n```\n\n### WasmResponseTransformer\n\nA response transformer plugin can be used to compose or transform the response that will be sent back to the client\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn transform_response(Json(_context): Json<WasmResponseTransformerContext>) -> FnResult<Json<WasmTransformerResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmResponseTransformerContext {\n pub snowflake: Option<String>,\n pub raw_response: OtoroshiResponse,\n pub otoroshi_response: OtoroshiResponse,\n pub apikey: Option<Apikey>,\n pub user: Option<User>,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub route: Route,\n pub response_body_bytes: Option<Vec<u8>>,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmTransformerResponse {\n pub headers: HashMap<String, String>,\n pub cookies: Value,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n}\n```\n\n### WasmSink\n\nA sink is a kind of plugin that can be used to respond to any unmatched request before otoroshi sends back a 404 response\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn sink_matches(Json(_context): Json<WasmSinkContext>) -> FnResult<Json<WasmSinkMatchesResponse>> {\n ///\n}\n\n#[plugin_fn]\npub fn sink_handle(Json(_context): Json<WasmSinkContext>) -> FnResult<Json<WasmSinkHandleResponse>> {\n 
///\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmSinkContext {\n pub snowflake: Option<String>,\n pub request: RawRequest,\n pub config: Value,\n pub global_config: Value,\n pub attrs: Value,\n pub origin: String,\n pub status: u32,\n pub message: String,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmSinkMatchesResponse {\n pub result: bool,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct WasmSinkHandleResponse {\n pub status: u32,\n pub headers: HashMap<String, String>,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n}\n```\n\n### WasmRequestHandler\n\nA request handler is a very special kind of plugin that can bypass the otoroshi proxy engine on specific domains and completely handle the request/response lifecycle on its own.\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn can_handle_request(Json(_context): Json<WasmRequestHandlerContext>) -> FnResult<Json<bool>> {\n ///\n}\n\n#[plugin_fn]\npub fn handle_request(Json(_context): Json<WasmRequestHandlerContext>) -> FnResult<Json<WasmRequestHandlerResponse>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmRequestHandlerContext {\n pub request: RawRequest\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmRequestHandlerResponse {\n pub status: u32,\n pub headers: HashMap<String, String>,\n pub body_bytes: Option<Vec<u8>>,\n pub body_base64: Option<String>,\n pub body_json: Option<Value>,\n pub body_str: Option<String>,\n}\n```\n\n### WasmJob\n\nA job is a plugin that can run periodically and do whatever you want. 
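the body of a job is plain code too. a std-only sketch of the reconcile step a sync-style job typically performs (the desired/current sets are made up for illustration; a real job would read them through the host functions):

```rust
use std::collections::HashSet;

// std-only sketch of a reconcile step: given a desired set of entity
// ids and the currently existing ones, compute what a periodic job
// would have to create and what it would have to delete
fn job_run(desired: &HashSet<String>, current: &HashSet<String>) -> (Vec<String>, Vec<String>) {
    let to_create: Vec<String> = desired.difference(current).cloned().collect();
    let to_delete: Vec<String> = current.difference(desired).cloned().collect();
    (to_create, to_delete)
}
```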
Typically, the kubernetes plugins of otoroshi are jobs that periodically sync state between otoroshi and kubernetes using the kube-api\n\n```rs\nuse extism_pdk::*;\n\n#[plugin_fn]\npub fn job_run(Json(_context): Json<WasmJobContext>) -> FnResult<Json<WasmJobResult>> {\n ///\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmJobContext {\n pub attrs: Value,\n pub global_config: Value,\n pub snowflake: Option<String>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct WasmJobResult {\n\n}\n```\n\n### Common types\n\n```rs\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value;\nuse std::collections::HashMap;\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct Backend {\n pub id: String,\n pub hostname: String,\n pub port: u32,\n pub tls: bool,\n pub weight: u32,\n pub protocol: String,\n pub ip_address: Option<String>,\n pub predicate: Value,\n pub tls_config: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct Apikey {\n #[serde(alias = \"clientId\")]\n pub client_id: String,\n #[serde(alias = \"clientName\")]\n pub client_name: String,\n pub metadata: HashMap<String, String>,\n pub tags: Vec<String>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct User {\n pub name: String,\n pub email: String,\n pub profile: Value,\n pub metadata: HashMap<String, String>,\n pub tags: Vec<String>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct RawRequest {\n pub id: u32,\n pub method: String,\n pub headers: HashMap<String, String>,\n pub cookies: Value,\n pub tls: bool,\n pub uri: String,\n pub path: String,\n pub version: String,\n pub has_body: bool,\n pub remote: String,\n pub client_cert_chain: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct Frontend {\n pub domains: Vec<String>,\n pub strict_path: Option<String>,\n pub exact: bool,\n pub headers: HashMap<String, String>,\n pub query: HashMap<String, String>,\n pub methods: Vec<String>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct HealthCheck {\n pub enabled: bool,\n pub url: String,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct RouteBackend {\n pub targets: Vec<Backend>,\n pub root: String,\n 
pub rewrite: bool,\n pub load_balancing: Value,\n pub client: Value,\n pub health_check: Option<HealthCheck>,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct Route {\n pub id: String,\n pub name: String,\n pub description: String,\n pub tags: Vec<String>,\n pub metadata: HashMap<String, String>,\n pub enabled: bool,\n pub debug_flow: bool,\n pub export_reporting: bool,\n pub capture: bool,\n pub groups: Vec<String>,\n pub frontend: Frontend,\n pub backend: RouteBackend,\n pub backend_ref: Option<String>,\n pub plugins: Value,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct OtoroshiResponse {\n pub status: u32,\n pub headers: HashMap<String, String>,\n pub cookies: Value,\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct OtoroshiRequest {\n pub url: String,\n pub method: String,\n pub headers: HashMap<String, String>,\n pub version: String,\n pub client_cert_chain: Value,\n pub backend: Option<Backend>,\n pub cookies: Value,\n}\n```\n\n## Otoroshi interop. with host functions\n\notoroshi provides some host functions in order to make wasm interact with otoroshi internals. You can\n\n- access wasi resources\n- access http resources\n- access otoroshi internal state\n- access otoroshi internal configuration\n- access otoroshi static configuration\n- access plugin scoped in-memory key/value storage\n- access global in-memory key/value storage\n- access plugin scoped persistent key/value storage\n- access global persistent key/value storage\n\n### authorizations\n\nall the previously listed host functions are gated by specific authorizations to avoid security issues with third party plugins. You can enable/disable host functions from the wasm plugin entity\n\n@@@ div { .centered-img }\n\n@@@\n\n\n### host functions abi\n\nyou'll find here the raw signatures for the otoroshi host functions. 
we are currently in the process of writing higher level functions to hide the complexity.\n\nevery time you see the following signature: `(context: u64, size: u64) -> u64`, it means that otoroshi expects a pointer to the call context (which is a json string) and its size. The return value is a pointer to the response (also a json string).\n\nthe signature `(unused: u64) -> u64` means that no parameter is actually needed, but as one is technically required (and hopefully won't be in the future), you have to pass something like `0` as parameter.\n\n```rust\nextern \"C\" {\n // log messages in otoroshi (log levels are 0 to 6 for trace, debug, info, warn, error, critical, max)\n fn proxy_log(logLevel: i32, message: u64, size: u64) -> i32;\n // trigger an otoroshi wasm event that can be exported through data exporters\n fn proxy_log_event(context: u64, size: u64) -> u64;\n // an http client\n fn proxy_http_call(context: u64, size: u64) -> u64;\n // access the current otoroshi state containing a snapshot of all otoroshi entities\n fn proxy_state(context: u64) -> u64;\n fn proxy_state_value(context: u64, size: u64) -> u64;\n // access the current otoroshi cluster configuration\n fn proxy_cluster_state(context: u64) -> u64;\n fn proxy_cluster_state_value(context: u64, size: u64) -> u64;\n // access the current otoroshi static configuration\n fn proxy_global_config(unused: u64) -> u64;\n // access the current otoroshi dynamic configuration\n fn proxy_config(unused: u64) -> u64;\n // access a persistent key/value store shared by all wasm plugins\n fn proxy_datastore_keys(context: u64, size: u64) -> u64;\n fn proxy_datastore_get(context: u64, size: u64) -> u64;\n fn proxy_datastore_exists(context: u64, size: u64) -> u64;\n fn proxy_datastore_pttl(context: u64, size: u64) -> u64;\n fn proxy_datastore_setnx(context: u64, size: u64) -> u64;\n fn proxy_datastore_del(context: u64, size: u64) -> u64;\n fn proxy_datastore_incrby(context: u64, size: u64) -> u64;\n fn proxy_datastore_pexpire(context: u64, size: u64) -> u64;\n fn proxy_datastore_all_matching(context: u64, size: u64) -> u64;\n // access a persistent key/value store for the current plugin instance only\n fn proxy_plugin_datastore_keys(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_get(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_exists(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_pttl(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_setnx(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_del(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_incrby(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_pexpire(context: u64, size: u64) -> u64;\n fn proxy_plugin_datastore_all_matching(context: u64, size: u64) -> u64;\n // access an in-memory key/value store for the current plugin instance only\n fn proxy_plugin_map_set(context: u64, size: u64) -> u64;\n fn proxy_plugin_map_get(context: u64, size: u64) -> u64;\n fn proxy_plugin_map(unused: u64) -> u64;\n // access an in-memory key/value store shared by all wasm plugins\n fn proxy_global_map_set(context: u64, size: u64) -> u64;\n fn proxy_global_map_get(context: u64, size: u64) -> u64;\n fn proxy_global_map(unused: u64) -> u64;\n}\n```\n\nright now, when using the wasm manager, a default idiomatic implementation is provided for `TinyGo` and `Rust`\n\nhost.rs\n: @@snip [host.rs](../snippets/wasm-manager/host.rs) \n\nhost.go\n: @@snip [host.go](../snippets/wasm-manager/host.go) \n"}] \ No newline at end of file diff --git a/manual/src/main/paradox/snippets/reference-env.conf index 3a51f6f198..e69de29bb2 100644 --- a/manual/src/main/paradox/snippets/reference-env.conf +++ b/manual/src/main/paradox/snippets/reference-env.conf @@ -1,1015 +0,0 @@ -app { - storage = ${?APP_STORAGE} # the storage used by otoroshi. 
possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql - storage = ${?OTOROSHI_STORAGE} # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql - storageRoot = ${?APP_STORAGE_ROOT} # the prefix used for storage keys - storageRoot = ${?OTOROSHI_STORAGE_ROOT} # the prefix used for storage keys - eventsName = ${?APP_EVENTS_NAME} # the name of the event producer - eventsName = ${?OTOROSHI_EVENTS_NAME} # the name of the event producer - importFrom = ${?APP_IMPORT_FROM} # file path to import otoroshi initial configuration - importFrom = ${?OTOROSHI_IMPORT_FROM} # file path to import otoroshi initial configuration - env = ${?APP_ENV} # env name, should always be prod except in dev mode - env = ${?OTOROSHI_ENV} # env name, should always be prod except in dev mode - domain = ${?APP_DOMAIN} # default domain for basic otoroshi services - domain = ${?OTOROSHI_DOMAIN} # default domain for basic otoroshi services - commitId = ${?COMMIT_ID} - commitId = ${?OTOROSHI_COMMIT_ID} - rootScheme = ${?APP_ROOT_SCHEME} # default root scheme when composing urls - rootScheme = ${?OTOROSHI_ROOT_SCHEME} # default root scheme when composing urls - throttlingWindow = ${?THROTTLING_WINDOW} # the number of second used to compute throttling number - throttlingWindow = ${?OTOROSHI_THROTTLING_WINDOW} # the number of second used to compute throttling number - checkForUpdates = ${?CHECK_FOR_UPDATES} # enable automatic version update checks - checkForUpdates = ${?OTOROSHI_CHECK_FOR_UPDATES} # enable automatic version update checks - overheadThreshold = ${?OVERHEAD_THRESHOLD} # the value threshold (in milliseconds) used to send HighOverheadAlert - overheadThreshold = ${?OTOROSHI_OVERHEAD_THRESHOLD} # the value threshold (in milliseconds) used to send HighOverheadAlert - adminLogin = ${?OTOROSHI_INITIAL_ADMIN_LOGIN} # the initial admin login - adminPassword = ${?OTOROSHI_INITIAL_ADMIN_PASSWORD} # the 
initial admin password - initialCustomization = ${?OTOROSHI_INITIAL_CUSTOMIZATION} # otoroshi inital configuration that will be merged with a new confguration. Shaped like an otoroshi export - boot { - failOnTimeout = ${?OTOROSHI_BOOT_FAIL_ON_TIMEOUT} # otoroshi will exit if a subsystem failed its init - globalWait = ${?OTOROSHI_BOOT_GLOBAL_WAIT} # should we wait until everything is setup to accept http requests - globalWaitTimeout = ${?OTOROSHI_BOOT_GLOBAL_WAIT_TIMEOUT} # max wait before accepting requests - waitForPluginsSearch = ${?OTOROSHI_BOOT_WAIT_FOR_PLUGINS_SEARCH} # should we wait for classpath plugins search before accepting http requests - waitForPluginsSearchTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_PLUGINS_SEARCH_TIMEOUT} # max wait for classpath plugins search before accepting http requests - waitForScriptsCompilation = ${?OTOROSHI_BOOT_WAIT_FOR_SCRIPTS_COMPILATION} # should we wait for plugins compilation before accepting http requests - waitForScriptsCompilationTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_SCRIPTS_COMPILATION_TIMEOUT} # max wait for plugins compilation before accepting http requests - waitForTlsInit = ${?OTOROSHI_BOOT_WAIT_FOR_TLS_INIT} # should we wait for first TLS context initialization before accepting http requests - waitForTlsInitTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_TLS_INIT_TIMEOUT} # max wait for first TLS context initialization before accepting http requests - waitForFirstClusterFetch = ${?OTOROSHI_BOOT_WAIT_FOR_FIRST_CLUSTER_FETCH} # should we wait for first cluster initialization before accepting http requests - waitForFirstClusterFetchTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_FIRST_CLUSTER_TIMEOUT} # max wait for first cluster initialization before accepting http requests - waitForFirstClusterStateCache = ${?OTOROSHI_BOOT_WAIT_FOR_FIRST_CLUSTER_STATE_CACHE} # should we wait for first cluster initialization before accepting http requests - waitForFirstClusterStateCacheTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_FIRST_CLUSTER_STATE_CACHE_TIMEOUT} # max 
wait for first cluster initialization before accepting http requests - } - instance { - instanceId = ${?OTOROSHI_INSTANCE_ID} # the instance id - number = ${?OTOROSHI_INSTANCE_NUMBER} # the instance number. Can be found in otoroshi events - number = ${?INSTANCE_NUMBER} # the instance number. Can be found in otoroshi events - name = ${?OTOROSHI_INSTANCE_NAME} # instance name - zone = ${?OTOROSHI_INSTANCE_ZONE} # instance zone (optional) - region = ${?OTOROSHI_INSTANCE_REGION} # instance region (optional) - dc = ${?OTOROSHI_INSTANCE_DATACENTER} # instance dc (optional) - provider = ${?OTOROSHI_INSTANCE_PROVIDER} # instance provider (optional) - rack = ${?OTOROSHI_INSTANCE_RACK} # instance rack (optional) - title = ${?OTOROSHI_INSTANCE_TITLE} # the title displayed in UI top left - } - longRequestTimeout = ${?OTOROSHI_PROXY_LONG_REQUEST_TIMEOUT} - } - health { - limit = ${?HEALTH_LIMIT} # the value threshold (in milliseconds) used to indicate if an otoroshi instance is healthy or not - limit = ${?OTOROSHI_HEALTH_LIMIT} # the value threshold (in milliseconds) used to indicate if an otoroshi instance is healthy or not - accessKey = ${?HEALTH_ACCESS_KEY} # the key to access /health edpoint - accessKey = ${?OTOROSHI_HEALTH_ACCESS_KEY} # the key to access /health edpoint - } - snowflake { - seed = ${?INSTANCE_NUMBER} # the seed number used to generate unique ids. Should be different for every instances - seed = ${?OTOROSHI_INSTANCE_NUMBER} # the seed number used to generate unique ids. Should be different for every instances - seed = ${?SNOWFLAKE_SEED} # the seed number used to generate unique ids. Should be different for every instances - seed = ${?OTOROSHI_SNOWFLAKE_SEED} # the seed number used to generate unique ids. 
Should be different for every instances - } - events { - maxSize = ${?MAX_EVENTS_SIZE} # the amount of event kept in the datastore - maxSize = ${?OTOROSHI_MAX_EVENTS_SIZE} # the amount of event kept in the datastore - } - exposed-ports { - http = ${?APP_EXPOSED_PORTS_HTTP} # the exposed http port for otoroshi (when in a container or behind a proxy) - http = ${?OTOROSHI_EXPOSED_PORTS_HTTP} # the exposed http port for otoroshi (when in a container or behind a proxy) - https = ${?APP_EXPOSED_PORTS_HTTPS} # the exposed https port for otoroshi (when in a container or behind a proxy - https = ${?OTOROSHI_EXPOSED_PORTS_HTTPS} # the exposed https port for otoroshi (when in a container or behind a proxy - } - backoffice { - exposed = ${?APP_BACKOFFICE_EXPOSED} # expose the backoffice ui - exposed = ${?OTOROSHI_BACKOFFICE_EXPOSED} # expose the backoffice ui - subdomain = ${?APP_BACKOFFICE_SUBDOMAIN} # the backoffice subdomain - subdomain = ${?OTOROSHI_BACKOFFICE_SUBDOMAIN} # the backoffice subdomain - domainsStr = ${?APP_BACKOFFICE_DOMAINS} # the backoffice domains - domainsStr = ${?OTOROSHI_BACKOFFICE_DOMAINS} # the backoffice domains - useNewEngine = ${?OTOROSHI_BACKOFFICE_USE_NEW_ENGINE} # avoid backoffice admin api proxy - usePlay = ${?OTOROSHI_BACKOFFICE_USE_PLAY} # avoid backoffice http call for admin api - session { - exp = ${?APP_BACKOFFICE_SESSION_EXP} # the backoffice cookie expiration - exp = ${?OTOROSHI_BACKOFFICE_SESSION_EXP} # the backoffice cookie expiration - } - } - privateapps { - subdomain = ${?APP_PRIVATEAPPS_SUBDOMAIN} # privateapps (proxy sso) domain - subdomain = ${?OTOROSHI_PRIVATEAPPS_SUBDOMAIN} # privateapps (proxy sso) domain - domainsStr = ${?APP_PRIVATEAPPS_DOMAINS} - domainsStr = ${?OTOROSHI_PRIVATEAPPS_DOMAINS} - session { - exp = ${?APP_PRIVATEAPPS_SESSION_EXP} # the privateapps cookie expiration - exp = ${?OTOROSHI_PRIVATEAPPS_SESSION_EXP} # the privateapps cookie expiration - } - } - adminapi { - exposed = ${?ADMIN_API_EXPOSED} # expose the 
admin api - exposed = ${?OTOROSHI_ADMIN_API_EXPOSED} # expose the admin api - targetSubdomain = ${?ADMIN_API_TARGET_SUBDOMAIN} # admin api target subdomain as targeted by otoroshi service - targetSubdomain = ${?OTOROSHI_ADMIN_API_TARGET_SUBDOMAIN} # admin api target subdomain as targeted by otoroshi service - exposedSubdomain = ${?ADMIN_API_EXPOSED_SUBDOMAIN} # admin api exposed subdomain as exposed by otoroshi service - exposedSubdomain = ${?OTOROSHI_ADMIN_API_EXPOSED_SUBDOMAIN} # admin api exposed subdomain as exposed by otoroshi service - additionalExposedDomain = ${?ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN} # admin api additional exposed subdomain as exposed by otoroshi service - additionalExposedDomain = ${?OTOROSHI_ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN} # admin api additional exposed subdomain as exposed by otoroshi service - domainsStr = ${?ADMIN_API_DOMAINS} - domainsStr = ${?OTOROSHI_ADMIN_API_DOMAINS} - exposedDomainsStr = ${?ADMIN_API_EXPOSED_DOMAINS} - exposedDomainsStr = ${?OTOROSHI_ADMIN_API_EXPOSED_DOMAINS} - defaultValues { - backOfficeGroupId = ${?ADMIN_API_GROUP} # default value for admin api service group - backOfficeGroupId = ${?OTOROSHI_ADMIN_API_GROUP} # default value for admin api service group - backOfficeApiKeyClientId = ${?ADMIN_API_CLIENT_ID} # default value for admin api apikey id - backOfficeApiKeyClientId = ${?OTOROSHI_ADMIN_API_CLIENT_ID} # default value for admin api apikey id - backOfficeApiKeyClientSecret = ${?otoroshi.admin-api-secret} # default value for admin api apikey secret - backOfficeApiKeyClientSecret = ${?OTOROSHI_otoroshi.admin-api-secret} # default value for admin api apikey secret - backOfficeApiKeyClientSecret = ${?ADMIN_API_CLIENT_SECRET} # default value for admin api apikey secret - backOfficeApiKeyClientSecret = ${?OTOROSHI_ADMIN_API_CLIENT_SECRET} # default value for admin api apikey secret - backOfficeServiceId = ${?ADMIN_API_SERVICE_ID} # default value for admin api service id - backOfficeServiceId = 
${?OTOROSHI_ADMIN_API_SERVICE_ID} # default value for admin api service id - } - proxy { - https = ${?ADMIN_API_HTTPS} # backoffice proxy admin api over https - https = ${?OTOROSHI_ADMIN_API_HTTPS} # backoffice proxy admin api over https - local = ${?ADMIN_API_LOCAL} # backoffice proxy admin api on localhost - local = ${?OTOROSHI_ADMIN_API_LOCAL} # backoffice proxy admin api on localhost - } - } - claim { - sharedKey = ${?CLAIM_SHAREDKEY} # the default secret used to sign otoroshi exchange protocol tokens - sharedKey = ${?OTOROSHI_CLAIM_SHAREDKEY} # the default secret used to sign otoroshi exchange protocol tokens - } - webhooks { - } - redis { # configuration to fetch/store otoroshi state from a redis datastore using rediscala - host = ${?REDIS_HOST} - host = ${?OTOROSHI_REDIS_HOST} - port = ${?REDIS_PORT} - port = ${?OTOROSHI_REDIS_PORT} - password = ${?REDIS_PASSWORD} - password = ${?OTOROSHI_REDIS_PASSWORD} - windowSize = ${?REDIS_WINDOW_SIZE} - windowSize = ${?OTOROSHI_REDIS_WINDOW_SIZE} - slavesStr = ${?REDIS_SLAVES} - slavesStr = ${?OTOROSHI_REDIS_SLAVES} - slavesStr = ${?REDIS_MEMBERS} - slavesStr = ${?OTOROSHI_REDIS_MEMBERS} - useScan = ${?REDIS_USE_SCAN} - useScan = ${?OTOROSHI_REDIS_USE_SCAN} - pool { - members = ${?REDIS_POOL_MEMBERS} - members = ${?OTOROSHI_REDIS_POOL_MEMBERS} - } - mpool { - membersStr = ${?REDIS_MPOOL_MEMBERS} - membersStr = ${?OTOROSHI_REDIS_MPOOL_MEMBERS} - } - lf { - master { - host = ${?REDIS_LF_HOST} - host = ${?OTOROSHI_REDIS_LF_HOST} - port = ${?REDIS_LF_PORT} - port = ${?OTOROSHI_REDIS_LF_PORT} - password = ${?REDIS_LF_PASSWORD} - password = ${?OTOROSHI_REDIS_LF_PASSWORD} - } - slavesStr = ${?REDIS_LF_SLAVES} - slavesStr = ${?OTOROSHI_REDIS_LF_SLAVES} - slavesStr = ${?REDIS_LF_MEMBERS} - slavesStr = ${?OTOROSHI_REDIS_LF_MEMBERS} - } - sentinels { - master = ${?REDIS_SENTINELS_MASTER} - master = ${?OTOROSHI_REDIS_SENTINELS_MASTER} - password = ${?REDIS_SENTINELS_PASSWORD} - password = ${?OTOROSHI_REDIS_SENTINELS_PASSWORD} - db 
= ${?REDIS_SENTINELS_DB} - db = ${?OTOROSHI_REDIS_SENTINELS_DB} - name = ${?REDIS_SENTINELS_NAME} - name = ${?OTOROSHI_REDIS_SENTINELS_NAME} - membersStr = ${?REDIS_SENTINELS_MEMBERS} - membersStr = ${?OTOROSHI_REDIS_SENTINELS_MEMBERS} - lf { - master = ${?REDIS_SENTINELS_LF_MASTER} - master = ${?OTOROSHI_REDIS_SENTINELS_LF_MASTER} - membersStr = ${?REDIS_SENTINELS_LF_MEMBERS} - membersStr = ${?OTOROSHI_REDIS_SENTINELS_LF_MEMBERS} - } - } - cluster { - membersStr = ${?REDIS_CLUSTER_MEMBERS} - membersStr = ${?OTOROSHI_REDIS_CLUSTER_MEMBERS} - } - lettuce { # configuration to fetch/store otoroshi state from a redis datastore using the lettuce driver (the next default one) - connection = ${?REDIS_LETTUCE_CONNECTION} - connection = ${?OTOROSHI_REDIS_LETTUCE_CONNECTION} - uri = ${?REDIS_LETTUCE_URI} - uri = ${?OTOROSHI_REDIS_LETTUCE_URI} - uri = ${?REDIS_URL} - uri = ${?OTOROSHI_REDIS_URL} - urisStr = ${?REDIS_LETTUCE_URIS} - urisStr = ${?OTOROSHI_REDIS_LETTUCE_URIS} - readFrom = ${?REDIS_LETTUCE_READ_FROM} - readFrom = ${?OTOROSHI_REDIS_LETTUCE_READ_FROM} - startTLS = ${?REDIS_LETTUCE_START_TLS} - startTLS = ${?OTOROSHI_REDIS_LETTUCE_START_TLS} - verifyPeers = ${?REDIS_LETTUCE_VERIFY_PEERS} - verifyPeers = ${?OTOROSHI_REDIS_LETTUCE_VERIFY_PEERS} - } - } - inmemory { # configuration to fetch/store otoroshi state in memory - windowSize = ${?INMEMORY_WINDOW_SIZE} - windowSize = ${?OTOROSHI_INMEMORY_WINDOW_SIZE} - experimental = ${?INMEMORY_EXPERIMENTAL_STORE} - experimental = ${?OTOROSHI_INMEMORY_EXPERIMENTAL_STORE} - optimized = ${?INMEMORY_OPTIMIZED} - optimized = ${?OTOROSHI_INMEMORY_OPTIMIZED} - modern = ${?INMEMORY_MODERN} - modern = ${?OTOROSHI_INMEMORY_MODERN} - } - filedb { # configuration to fetch/store otoroshi state from a file - windowSize = ${?FILEDB_WINDOW_SIZE} - windowSize = ${?OTOROSHI_FILEDB_WINDOW_SIZE} - path = ${?FILEDB_PATH} - path = ${?OTOROSHI_FILEDB_PATH} - } - httpdb { # configuration to fetch/store otoroshi state from an http endpoint - headers 
= {} - } - s3db { # configuration to fetch/store otoroshi state from a S3 bucket - bucket = ${?OTOROSHI_DB_S3_BUCKET} - endpoint = ${?OTOROSHI_DB_S3_ENDPOINT} - region = ${?OTOROSHI_DB_S3_REGION} - access = ${?OTOROSHI_DB_S3_ACCESS} - secret = ${?OTOROSHI_DB_S3_SECRET} - key = ${?OTOROSHI_DB_S3_KEY} - chunkSize = ${?OTOROSHI_DB_S3_CHUNK_SIZE} - v4auth = ${?OTOROSHI_DB_S3_V4_AUTH} - writeEvery = ${?OTOROSHI_DB_S3_WRITE_EVERY} # write interval - acl = ${?OTOROSHI_DB_S3_ACL} - } - pg { # postrgesql settings. everything possible with the client - uri = ${?PG_URI} - uri = ${?OTOROSHI_PG_URI} - uri = ${?POSTGRESQL_ADDON_URI} - uri = ${?OTOROSHI_POSTGRESQL_ADDON_URI} - poolSize = ${?PG_POOL_SIZE} - poolSize = ${?OTOROSHI_PG_POOL_SIZE} - port = ${?PG_PORT} - port = ${?OTOROSHI_PG_PORT} - host = ${?PG_HOST} - host = ${?OTOROSHI_PG_HOST} - database = ${?PG_DATABASE} - database = ${?OTOROSHI_PG_DATABASE} - user = ${?PG_USER} - user = ${?OTOROSHI_PG_USER} - password = ${?PG_PASSWORD} - password = ${?OTOROSHI_PG_PASSWORD} - logQueries = ${?PG_DEBUG_QUERIES} - logQueries = ${?OTOROSHI_PG_DEBUG_QUERIES} - avoidJsonPath = ${?PG_AVOID_JSON_PATH} - avoidJsonPath = ${?OTOROSHI_PG_AVOID_JSON_PATH} - optimized = ${?PG_OPTIMIZED} - optimized = ${?OTOROSHI_PG_OPTIMIZED} - connect-timeout = ${?PG_CONNECT_TIMEOUT} - connect-timeout = ${?OTOROSHI_PG_CONNECT_TIMEOUT} - idle-timeout = ${?PG_IDLE_TIMEOUT} - idle-timeout = ${?OTOROSHI_PG_IDLE_TIMEOUT} - log-activity = ${?PG_LOG_ACTIVITY} - log-activity = ${?OTOROSHI_PG_LOG_ACTIVITY} - pipelining-limit = ${?PG_PIPELINING_LIMIT} - pipelining-limit = ${?OTOROSHI_PG_PIPELINING_LIMIT} - ssl { - enabled = ${?PG_SSL_ENABLED} - enabled = ${?OTOROSHI_PG_SSL_ENABLED} - mode = ${?PG_SSL_MODE} - mode = ${?OTOROSHI_PG_SSL_MODE} - trusted-cert-path = ${?PG_SSL_TRUSTED_CERT_PATH} - trusted-cert-path = ${?OTOROSHI_PG_SSL_TRUSTED_CERT_PATH} - trusted-cert = ${?PG_SSL_TRUSTED_CERT} - trusted-cert = ${?OTOROSHI_PG_SSL_TRUSTED_CERT} - client-cert-path = 
${?PG_SSL_CLIENT_CERT_PATH} - client-cert-path = ${?OTOROSHI_PG_SSL_CLIENT_CERT_PATH} - client-cert = ${?PG_SSL_CLIENT_CERT} - client-cert = ${?OTOROSHI_PG_SSL_CLIENT_CERT} - trust-all = ${?PG_SSL_TRUST_ALL} - trust-all = ${?OTOROSHI_PG_SSL_TRUST_ALL} - } - } - cassandra { # cassandra settings. everything possible with the client - windowSize = ${?CASSANDRA_WINDOW_SIZE} - windowSize = ${?OTOROSHI_CASSANDRA_WINDOW_SIZE} - host = ${?CASSANDRA_HOST} - host = ${?OTOROSHI_CASSANDRA_HOST} - port = ${?CASSANDRA_PORT} - port = ${?OTOROSHI_CASSANDRA_PORT} - replicationFactor = ${?CASSANDRA_REPLICATION_FACTOR} - replicationFactor = ${?OTOROSHI_CASSANDRA_REPLICATION_FACTOR} - replicationOptions = ${?CASSANDRA_REPLICATION_OPTIONS} - replicationOptions = ${?OTOROSHI_CASSANDRA_REPLICATION_OPTIONS} - durableWrites = ${?CASSANDRA_DURABLE_WRITES} - durableWrites = ${?OTOROSHI_CASSANDRA_DURABLE_WRITES} - basic.contact-points = [ ${app.cassandra.host}":"${app.cassandra.port} ] - basic.session-name = ${?OTOROSHI_CASSANDRA_SESSION_NAME} - basic.session-keyspace = ${?OTOROSHI_CASSANDRA_SESSION_KEYSPACE} - basic.request { - consistency = ${?OTOROSHI_CASSANDRA_CONSISTENCY} - page-size = ${?OTOROSHI_CASSANDRA_PAGE_SIZE} - serial-consistency = ${?OTOROSHI_CASSANDRA_SERIAL_CONSISTENCY} - default-idempotence = ${?OTOROSHI_CASSANDRA_DEFAULT_IDEMPOTENCE} - } - basic.load-balancing-policy { - local-datacenter = ${?OTOROSHI_CASSANDRA_LOCAL_DATACENTER} - } - basic.cloud { - } - basic.application { - } - basic.graph { - } - advanced.connection { - set-keyspace-timeout = ${datastax-java-driver.advanced.connection.init-query-timeout} - pool { - local { - } - remote { - } - } - } - advanced.reconnection-policy { - } - advanced.retry-policy { - } - advanced.speculative-execution-policy { - } - advanced.auth-provider { - username = ${?CASSANDRA_USERNAME} - username = ${?OTOROSHI_CASSANDRA_USERNAME} - password = ${?CASSANDRA_PASSWORD} - password = ${?OTOROSHI_CASSANDRA_PASSWORD} - authorization-id = 
${?OTOROSHI_CASSANDRA_AUTHORIZATION_ID}
-      # login-configuration {
-      # }
-      # sasl-properties {
-      # }
-    }
-    advanced.ssl-engine-factory {
-    }
-    advanced.timestamp-generator {
-      drift-warning {
-      }
-    }
-    advanced.request-tracker {
-      logs {
-        slow {
-        }
-      }
-    }
-    advanced.throttler {
-    }
-    advanced.address-translator {
-    }
-    advanced.protocol {
-      version = ${?OTOROSHI_CASSANDRA_PROTOCOL_VERSION}
-      compression = ${?OTOROSHI_CASSANDRA_PROTOCOL_COMPRESSION}
-    }
-    advanced.request {
-      trace {
-      }
-    }
-    advanced.graph {
-      paging-options {
-        page-size = ${datastax-java-driver.advanced.continuous-paging.page-size}
-        max-pages = ${datastax-java-driver.advanced.continuous-paging.max-pages}
-        max-pages-per-second = ${datastax-java-driver.advanced.continuous-paging.max-pages-per-second}
-        max-enqueued-pages = ${datastax-java-driver.advanced.continuous-paging.max-enqueued-pages}
-      }
-    }
-    advanced.continuous-paging {
-      page-size = ${datastax-java-driver.basic.request.page-size}
-      timeout {
-      }
-    }
-    advanced.monitor-reporting {
-    }
-    advanced.metrics {
-      session {
-        cql-requests {
-        }
-        throttling.delay {
-        }
-        continuous-cql-requests {
-        }
-        graph-requests {
-        }
-      }
-      node {
-        cql-messages {
-        }
-        graph-messages {
-        }
-      }
-    }
-    advanced.socket {
-    }
-    advanced.heartbeat {
-      timeout = ${datastax-java-driver.advanced.connection.init-query-timeout}
-    }
-    advanced.metadata {
-      topology-event-debouncer {
-      }
-      schema {
-        request-timeout = ${datastax-java-driver.basic.request.timeout}
-        request-page-size = ${datastax-java-driver.basic.request.page-size}
-        debouncer {
-        }
-      }
-    }
-    advanced.control-connection {
-      timeout = ${datastax-java-driver.advanced.connection.init-query-timeout}
-      schema-agreement {
-      }
-    }
-    advanced.prepared-statements {
-      reprepare-on-up {
-        timeout = ${datastax-java-driver.advanced.connection.init-query-timeout}
-      }
-    }
-    advanced.netty {
-      io-group {
-        shutdown {quiet-period = 2, timeout = 15, unit = SECONDS}
-      }
-      admin-group {
-        shutdown {quiet-period = 2, timeout = 15, unit = SECONDS}
-      }
-      timer {
-      }
-    }
-    advanced.coalescer {
-    }
-  }
-  actorsystems {
-    otoroshi {
-      akka { # otoroshi actorsystem configuration
-        version = ${akka.version}
-        default-dispatcher {
-          fork-join-executor {
-            parallelism-factor = ${?OTOROSHI_CORE_DISPATCHER_PARALLELISM_FACTOR}
-            parallelism-min = ${?OTOROSHI_CORE_DISPATCHER_PARALLELISM_MIN}
-            parallelism-max = ${?OTOROSHI_CORE_DISPATCHER_PARALLELISM_MAX}
-            task-peeking-mode = ${?OTOROSHI_CORE_DISPATCHER_TASK_PEEKING_MODE}
-          }
-          throughput = ${?OTOROSHI_CORE_DISPATCHER_THROUGHPUT}
-        }
-        http {
-          parsing {
-            max-uri-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_URI_LENGTH}
-            max-method-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_METHOD_LENGTH}
-            max-response-reason-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_RESPONSE_REASON_LENGTH}
-            max-header-name-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_HEADER_NAME_LENGTH}
-            max-header-value-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_HEADER_VALUE_LENGTH}
-            max-header-count = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_HEADER_COUNT}
-            max-chunk-ext-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_CHUNK_EXT_LENGTH}
-            max-chunk-size = ${?AKKA_HTTP_CLIENT_MAX_CHUNK_SIZE}
-            max-chunk-size = ${?OTOROSHI_AKKA_HTTP_CLIENT_MAX_CHUNK_SIZE}
-            max-chunk-size = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_CHUNK_SIZE}
-            max-content-length = ${?AKKA_HTTP_CLIENT_MAX_CONTENT_LENGHT}
-            max-content-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_MAX_CONTENT_LENGHT}
-            max-content-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_CONTENT_LENGHT}
-            max-to-strict-bytes = ${?AKKA_HTTP_CLIENT_MAX_TO_STRICT_BYTES}
-            max-to-strict-bytes = ${?OTOROSHI_AKKA_HTTP_CLIENT_MAX_TO_STRICT_BYTES}
-            max-to-strict-bytes = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_TO_STRICT_BYTES}
-          }
-        }
-      }
-    }
-    datastore {
-      akka {
-        version = ${akka.version}
-        default-dispatcher {
-          fork-join-executor {
-          }
-        }
-      }
-    }
-  }
-}
-otoroshi {
-  domain = ${?app.domain}
-  maintenanceMode =
${?OTOROSHI_MAINTENANCE_MODE_ENABLED} # enable global maintenance mode
-  secret = ${?OTOROSHI_SECRET} # the secret used to sign sessions
-  admin-api-secret = ${?OTOROSHI_ADMIN_API_SECRET} # the secret for admin api
-  next {
-    state-sync-interval = ${?OTOROSHI_NEXT_STATE_SYNC_INTERVAL}
-    export-reporting = ${?OTOROSHI_NEXT_EXPORT_REPORTING}
-    monitor-proxy-state-size = ${?OTOROSHI_NEXT_MONITOR_PROXY_STATE_SIZE}
-    monitor-datastore-size = ${?OTOROSHI_NEXT_MONITOR_DATASTORE_SIZE}
-    plugins {
-      merge-sync-steps = ${?OTOROSHI_NEXT_PLUGINS_MERGE_SYNC_STEPS}
-      apply-legacy-checks = ${?OTOROSHI_NEXT_PLUGINS_APPLY_LEGACY_CHECKS}
-    }
-    experimental {
-      netty-client {
-        wiretap = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_CLIENT_WIRETAP}
-        enforce = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_CLIENT_ENFORCE}
-        enforce-akka = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_CLIENT_ENFORCE_AKKA}
-      }
-      netty-server {
-        enabled = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ENABLED}
-        new-engine-only = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NEW_ENGINE_ONLY}
-        host = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HOST}
-        http-port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_PORT}
-        exposed-http-port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_EXPOSED_HTTP_PORT}
-        https-port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTPS_PORT}
-        exposed-https-port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_EXPOSED_HTTPS_PORT}
-        wiretap = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_WIRETAP}
-        accesslog = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ACCESSLOG}
-        threads = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_THREADS}
-        parser {
-          allowDuplicateContentLengths = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_ALLOW_DUPLICATE_CONTENT_LENGTHS}
-          validateHeaders = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_VALIDATE_HEADERS}
-          h2cMaxContentLength = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_H_2_C_MAX_CONTENT_LENGTH}
-          initialBufferSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_INITIAL_BUFFER_SIZE}
-          maxHeaderSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_HEADER_SIZE}
-          maxInitialLineLength = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_INITIAL_LINE_LENGTH}
-          maxChunkSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_CHUNK_SIZE}
-        }
-        http2 {
-          enabled = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_ENABLED}
-          h2c = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_H2C}
-        }
-        http3 {
-          enabled = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_ENABLED}
-          port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_PORT}
-          exposedPort = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_EXPOSED_PORT}
-          initialMaxStreamsBidirectional = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAMS_BIDIRECTIONAL}
-          initialMaxStreamDataBidirectionalRemote = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_REMOTE}
-          initialMaxStreamDataBidirectionalLocal = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_LOCAL}
-          initialMaxData = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_DATA}
-          maxRecvUdpPayloadSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_MAX_RECV_UDP_PAYLOAD_SIZE}
-          maxSendUdpPayloadSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_MAX_SEND_UDP_PAYLOAD_SIZE}
-          disableQpackDynamicTable = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_DISABLE_QPACK_DYNAMIC_TABLE}
-        }
-        native {
-          enabled = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_ENABLED}
-          driver = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_DRIVER}
-        }
-      }
-    }
-  }
-  options {
-    bypassUserRightsCheck = ${?OTOROSHI_OPTIONS_BYPASSUSERRIGHTSCHECK}
-    emptyContentLengthIsChunked = ${?OTOROSHI_OPTIONS_EMPTYCONTENTLENGTHISCHUNKED}
-    detectApiKeySooner = ${?OTOROSHI_OPTIONS_DETECTAPIKEYSOONER}
-    sendClientChainAsPem = ${?OTOROSHI_OPTIONS_SENDCLIENTCHAINASPEM}
-    useOldHeadersComposition = ${?OTOROSHI_OPTIONS_USEOLDHEADERSCOMPOSITION}
-    manualDnsResolve = ${?OTOROSHI_OPTIONS_MANUALDNSRESOLVE}
-    useEventStreamForScriptEvents = ${?OTOROSHI_OPTIONS_USEEVENTSTREAMFORSCRIPTEVENTS}
-    trustXForwarded = ${?OTOROSHI_OPTIONS_TRUST_XFORWARDED}
-    disableFunnyLogos = ${?OTOROSHI_OPTIONS_DISABLE_FUNNY_LOGOS}
-    staticExposedDomain = ${?OTOROSHI_OPTIONS_STATIC_EXPOSED_DOMAIN}
-    enable-json-media-type-with-open-charset = ${?OTOROSHI_OPTIONS_ENABLE_JSON_MEDIA_TYPE_WITH_OPEN_CHARSET}
-  }
-  wasm {
-    cache {
-      ttl = ${?OTOROSHI_WASM_CACHE_TTL}
-      size = ${?OTOROSHI_WASM_CACHE_SIZE}
-    }
-    queue {
-      buffer {
-        size = ${?OTOROSHI_WASM_QUEUE_BUFFER_SIZE}
-      }
-    }
-  }
-  anonymous-reporting {
-    enabled = ${?OTOROSHI_ANONYMOUS_REPORTING_ENABLED}
-    url = ${?OTOROSHI_ANONYMOUS_REPORTING_REDIRECT}
-    url = ${?OTOROSHI_ANONYMOUS_REPORTING_URL}
-    timeout = ${?OTOROSHI_ANONYMOUS_REPORTING_TIMEOUT}
-    tls {
-      enabled = ${?OTOROSHI_ANONYMOUS_REPORTING_TLS_ENABLED} # enable mtls
-      loose = ${?OTOROSHI_ANONYMOUS_REPORTING_TLS_LOOSE} # loose verification
-      trustAll = ${?OTOROSHI_ANONYMOUS_REPORTING_TLS_ALL} # trust any CA
-    }
-    proxy {
-      enabled = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_ENABLED} # enable proxy
-      host = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_HOST},
-      port = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_PORT},
-      principal = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_PRINCIPAL},
-      password = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_PASSWORD},
-      ntlmDomain = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_DOMAIN},
-      encoding = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_ENCODING},
-    }
-  }
-  backoffice {
-    flags {
-      useAkkaHttpClient = ${?OTOROSHI_BACKOFFICE_FLAGS_USE_AKKA_HTTP_CLIENT}
-      logUrl = ${?OTOROSHI_BACKOFFICE_FLAGS_LOG_URL}
-      requestTimeout = ${?OTOROSHI_BACKOFFICE_FLAGS_REQUEST_TIMEOUT}
-    }
-  }
-  sessions {
-    secret = ${otoroshi.secret}
-    secret = ${?OTOROSHI_SESSIONS_SECRET}
-  }
-  cache {
-    enabled = ${?USE_CACHE}
-    enabled = ${?OTOROSHI_USE_CACHE}
-    enabled = ${?OTOROSHI_ENTITIES_CACHE_ENABLED}
-    ttl = ${?OTOROSHI_ENTITIES_CACHE_TTL}
-  }
-  metrics {
-    enabled = ${?OTOROSHI_METRICS_ENABLED}
-    every = ${?OTOROSHI_METRICS_EVERY}
-    accessKey = ${?app.health.accessKey}
-    accessKey = ${?OTOROSHI_app.health.accessKey}
-    accessKey = ${?OTOROSHI_METRICS_ACCESS_KEY}
-  }
-  plugins {
-    packagesStr = ${?OTOROSHI_PLUGINS_SCAN_PACKAGES}
-    print = ${?OTOROSHI_PLUGINS_PRINT}
-  }
-  scripts {
-    enabled = ${?OTOROSHI_SCRIPTS_ENABLED} # enable scripts
-    static { # settings for statically enabled script/plugins
-      enabled = ${?OTOROSHI_SCRIPTS_STATIC_ENABLED}
-      transformersRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_TRANSFORMER_REFS}
-      transformersConfig = {}
-      transformersConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_TRANSFORMER_CONFIG}
-      validatorRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_VALIDATOR_REFS}
-      validatorConfig = {}
-      validatorConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_VALIDATOR_CONFIG}
-      preRouteRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_PRE_ROUTE_REFS}
-      preRouteConfig = {}
-      preRouteConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_PRE_ROUTE_CONFIG}
-      sinkRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_SINK_REFS}
-      sinkConfig = {}
-      sinkConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_SINK_CONFIG}
-      jobsRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_JOBS_REFS}
-      jobsConfig = {}
-      jobsConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_JOBS_CONFIG}
-    }
-  }
-  tls = ${otoroshi.ssl}
-  ssl {
-    cipherSuites = ${otoroshi.ssl.cipherSuitesJDK11}
-    protocols = ${otoroshi.ssl.modernProtocols}
-    cacert {
-    }
-    fromOutside {
-      clientAuth = ${?SSL_OUTSIDE_CLIENT_AUTH}
-      clientAuth = ${?OTOROSHI_SSL_OUTSIDE_CLIENT_AUTH}
-    }
-    trust {
-      all = ${?OTOROSHI_SSL_TRUST_ALL}
-    }
-    rootCa {
-      ca = ${?OTOROSHI_SSL_ROOTCA_CA}
-      cert = ${?OTOROSHI_SSL_ROOTCA_CERT}
-      key = ${?OTOROSHI_SSL_ROOTCA_KEY}
-      importCa = ${?OTOROSHI_SSL_ROOTCA_IMPORTCA}
-    }
-    initialCacert = ${?CLUSTER_WORKER_INITIAL_CACERT}
-    initialCacert = ${?OTOROSHI_CLUSTER_WORKER_INITIAL_CACERT}
-    initialCacert = ${?INITIAL_CACERT}
-    initialCacert =
${?OTOROSHI_INITIAL_CACERT}
-    initialCert = ${?CLUSTER_WORKER_INITIAL_CERT}
-    initialCert = ${?OTOROSHI_CLUSTER_WORKER_INITIAL_CERT}
-    initialCert = ${?INITIAL_CERT}
-    initialCert = ${?OTOROSHI_INITIAL_CERT}
-    initialCertKey = ${?CLUSTER_WORKER_INITIAL_CERT_KEY}
-    initialCertKey = ${?OTOROSHI_CLUSTER_WORKER_INITIAL_CERT_KEY}
-    initialCertKey = ${?INITIAL_CERT_KEY}
-    initialCertKey = ${?OTOROSHI_INITIAL_CERT_KEY}
-    initialCertImportCa = ${?OTOROSHI_INITIAL_CERT_IMPORTCA}
-  }
-  cluster {
-    mode = ${?CLUSTER_MODE} # can be "off", "leader", "worker"
-    mode = ${?OTOROSHI_CLUSTER_MODE} # can be "off", "leader", "worker"
-    compression = ${?CLUSTER_COMPRESSION} # compression of the data sent between leader cluster and worker cluster. From -1 (disabled) to 9
-    compression = ${?OTOROSHI_CLUSTER_COMPRESSION} # compression of the data sent between leader cluster and worker cluster. From -1 (disabled) to 9
-    retryDelay = ${?CLUSTER_RETRY_DELAY} # the delay before retrying a request to the leader
-    retryDelay = ${?OTOROSHI_CLUSTER_RETRY_DELAY} # the delay before retrying a request to the leader
-    retryFactor = ${?CLUSTER_RETRY_FACTOR} # the retry factor to avoid high load on failing nodes
-    retryFactor = ${?OTOROSHI_CLUSTER_RETRY_FACTOR} # the retry factor to avoid high load on failing nodes
-    selfAddress = ${?CLUSTER_SELF_ADDRESS} # the instance ip address
-    selfAddress = ${?OTOROSHI_CLUSTER_SELF_ADDRESS} # the instance ip address
-    autoUpdateState = ${?CLUSTER_AUTO_UPDATE_STATE} # auto update cluster state with a job (more efficient)
-    autoUpdateState = ${?OTOROSHI_CLUSTER_AUTO_UPDATE_STATE} # auto update cluster state with a job (more efficient)
-    backup {
-      enabled = ${?OTOROSHI_CLUSTER_BACKUP_ENABLED}
-      kind = ${?OTOROSHI_CLUSTER_BACKUP_KIND}
-      instance {
-        can-write = ${?OTOROSHI_CLUSTER_BACKUP_INSTANCE_CAN_WRITE}
-        can-read = ${?OTOROSHI_CLUSTER_BACKUP_INSTANCE_CAN_READ}
-      }
-      s3 {
-        bucket = ${?OTOROSHI_CLUSTER_BACKUP_S3_BUCKET}
-        endpoint = ${?OTOROSHI_CLUSTER_BACKUP_S3_ENDPOINT}
-        region = ${?OTOROSHI_CLUSTER_BACKUP_S3_REGION}
-        access = ${?OTOROSHI_CLUSTER_BACKUP_S3_ACCESSKEY}
-        secret = ${?OTOROSHI_CLUSTER_BACKUP_S3_SECRET}
-        path = ${?OTOROSHI_CLUSTER_BACKUP_S3_PATH}
-        chunk-size = ${?OTOROSHI_CLUSTER_BACKUP_S3_CHUNK_SIZE}
-        v4auth = ${?OTOROSHI_CLUSTER_BACKUP_S3_V4AUTH}
-        acl = ${?OTOROSHI_CLUSTER_BACKUP_S3_ACL}
-      }
-    }
-    relay { # relay routing settings
-      enabled = ${?OTOROSHI_CLUSTER_RELAY_ENABLED} # enable relay routing
-      leaderOnly = ${?OTOROSHI_CLUSTER_RELAY_LEADER_ONLY} # workers always pass through the leader for relay routing
-      location {
-        provider = ${?otoroshi.instance.provider}
-        provider = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_PROVIDER}
-        provider = ${?app.instance.provider}
-        zone = ${?otoroshi.instance.zone}
-        zone = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_ZONE}
-        zone = ${?app.instance.zone}
-        region = ${?otoroshi.instance.region}
-        region = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_REGION}
-        region = ${?app.instance.region}
-        datacenter = ${?otoroshi.instance.dc}
-        datacenter = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_DATACENTER}
-        datacenter = ${?app.instance.dc}
-        rack = ${?otoroshi.instance.rack}
-        rack = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_RACK}
-        rack = ${?app.instance.rack}
-      }
-      exposition {
-        url = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_URL}
-        urlsStr = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_URLS}
-        hostname = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_HOSTNAME}
-        clientId = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_CLIENT_ID}
-        clientSecret = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_CLIENT_SECRET}
-        ipAddress = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_IP_ADDRESS}
-      }
-    }
-    mtls {
-      enabled = ${?CLUSTER_MTLS_ENABLED} # enable mtls
-      enabled = ${?OTOROSHI_CLUSTER_MTLS_ENABLED} # enable mtls
-      loose = ${?CLUSTER_MTLS_LOOSE} # loose verification
-      loose = ${?OTOROSHI_CLUSTER_MTLS_LOOSE} # loose verification
-      trustAll = ${?CLUSTER_MTLS_TRUST_ALL} # trust any CA
-      trustAll = ${?OTOROSHI_CLUSTER_MTLS_TRUST_ALL} # trust any CA
-    }
-    proxy {
-      enabled =
${?CLUSTER_PROXY_ENABLED} # enable proxy
-      host = ${?CLUSTER_PROXY_HOST},
-      port = ${?CLUSTER_PROXY_PORT},
-      principal = ${?CLUSTER_PROXY_PRINCIPAL},
-      password = ${?CLUSTER_PROXY_PASSWORD},
-      ntlmDomain = ${?CLUSTER_PROXY_NTLM_DOMAIN},
-      encoding = ${?CLUSTER_PROXY_ENCODING},
-    }
-    leader {
-      name = ${?CLUSTER_LEADER_NAME} # the leader name
-      name = ${?OTOROSHI_CLUSTER_LEADER_NAME} # the leader name
-      urlsStr = ${?CLUSTER_LEADER_URLS} # the leader urls
-      urlsStr = ${?OTOROSHI_CLUSTER_LEADER_URLS} # the leader urls
-      url = ${?CLUSTER_LEADER_URL} # the leader url
-      url = ${?OTOROSHI_CLUSTER_LEADER_URL} # the leader url
-      host = ${?CLUSTER_LEADER_HOST} # the leader's api hostname
-      host = ${?OTOROSHI_CLUSTER_LEADER_HOST} # the leader's api hostname
-      clientId = ${?CLUSTER_LEADER_CLIENT_ID} # the leader's apikey id to access the otoroshi admin api
-      clientId = ${?OTOROSHI_CLUSTER_LEADER_CLIENT_ID} # the leader's apikey id to access the otoroshi admin api
-      clientSecret = ${?CLUSTER_LEADER_CLIENT_SECRET} # the leader's apikey secret to access the otoroshi admin api
-      clientSecret = ${?OTOROSHI_CLUSTER_LEADER_CLIENT_SECRET} # the leader's apikey secret to access the otoroshi admin api
-      groupingBy = ${?CLUSTER_LEADER_GROUP_BY} # items grouping when streaming state
-      groupingBy = ${?OTOROSHI_CLUSTER_LEADER_GROUP_BY} # items grouping when streaming state
-      cacheStateFor = ${?CLUSTER_LEADER_CACHE_STATE_FOR} # the ttl for the local state cache
-      cacheStateFor = ${?OTOROSHI_CLUSTER_LEADER_CACHE_STATE_FOR} # the ttl for the local state cache
-      stateDumpPath = ${?CLUSTER_LEADER_DUMP_PATH} # eventually a dump state path for debugging purposes
-      stateDumpPath = ${?OTOROSHI_CLUSTER_LEADER_DUMP_PATH} # eventually a dump state path for debugging purposes
-    }
-    worker {
-      name = ${?CLUSTER_WORKER_NAME} # the worker's name
-      name = ${?OTOROSHI_CLUSTER_WORKER_NAME} # the worker's name
-      retries = ${?CLUSTER_WORKER_RETRIES} # the number of retries when pushing quotas/pulling state
-      retries = ${?OTOROSHI_CLUSTER_WORKER_RETRIES} # the number of retries when pushing quotas/pulling state
-      timeout = ${?CLUSTER_WORKER_TIMEOUT} # the worker's timeout when interacting with the leader
-      timeout = ${?OTOROSHI_CLUSTER_WORKER_TIMEOUT} # the worker's timeout when interacting with the leader
-      tenantsStr = ${?CLUSTER_WORKER_TENANTS} # the comma-separated list of organizations served by this worker. If none, all are served
-      tenantsStr = ${?OTOROSHI_CLUSTER_WORKER_TENANTS} # the comma-separated list of organizations served by this worker. If none, all are served
-      dbpath = ${?CLUSTER_WORKER_DB_PATH} # state dump path for debugging purposes
-      dbpath = ${?OTOROSHI_CLUSTER_WORKER_DB_PATH} # state dump path for debugging purposes
-      dataStaleAfter = ${?CLUSTER_WORKER_DATA_STALE_AFTER} # the amount of time after which the state is considered stale
-      dataStaleAfter = ${?OTOROSHI_CLUSTER_WORKER_DATA_STALE_AFTER} # the amount of time after which the state is considered stale
-      swapStrategy = ${?CLUSTER_WORKER_SWAP_STRATEGY} # the internal memory store strategy, can be Replace or Merge
-      swapStrategy = ${?OTOROSHI_CLUSTER_WORKER_SWAP_STRATEGY} # the internal memory store strategy, can be Replace or Merge
-      modern = ${?CLUSTER_WORKER_STORE_MODERN}
-      modern = ${?OTOROSHI_CLUSTER_WORKER_STORE_MODERN}
-      state {
-        retries = ${otoroshi.cluster.worker.retries} # the number of retries when pulling state
-        retries = ${?CLUSTER_WORKER_STATE_RETRIES} # the number of retries when pulling state
-        retries = ${?OTOROSHI_CLUSTER_WORKER_STATE_RETRIES} # the number of retries when pulling state
-        pollEvery = ${?CLUSTER_WORKER_POLL_EVERY} # polling interval
-        pollEvery = ${?OTOROSHI_CLUSTER_WORKER_POLL_EVERY} # polling interval
-        timeout = ${otoroshi.cluster.worker.timeout} # the worker's timeout when polling state
-        timeout = ${?CLUSTER_WORKER_POLL_TIMEOUT} # the worker's timeout when polling state
-        timeout = ${?OTOROSHI_CLUSTER_WORKER_POLL_TIMEOUT} # the worker's timeout when polling state
-      }
-      quotas {
-        retries = ${otoroshi.cluster.worker.retries} # the number of retries when pushing quotas
-        retries = ${?CLUSTER_WORKER_QUOTAS_RETRIES} # the number of retries when pushing quotas
-        retries = ${?OTOROSHI_CLUSTER_WORKER_QUOTAS_RETRIES} # the number of retries when pushing quotas
-        pushEvery = ${?CLUSTER_WORKER_PUSH_EVERY} # pushing interval
-        pushEvery = ${?OTOROSHI_CLUSTER_WORKER_PUSH_EVERY} # pushing interval
-        timeout = ${otoroshi.cluster.worker.timeout} # the worker's timeout when pushing quotas
-        timeout = ${?CLUSTER_WORKER_PUSH_TIMEOUT} # the worker's timeout when pushing quotas
-        timeout = ${?OTOROSHI_CLUSTER_WORKER_PUSH_TIMEOUT} # the worker's timeout when pushing quotas
-      }
-    }
-    analytics { # settings for the analytics actor system, which is separated from the default otoroshi one for performance reasons
-      pressure {
-        enabled = ${?OTOROSHI_ANALYTICS_PRESSURE_ENABLED}
-      }
-      actorsystem {
-        akka {
-          version = ${akka.version}
-          default-dispatcher {
-            fork-join-executor {
-            }
-          }
-          # http {
-          #   parsing {
-          #     max-chunk-size = ${?AKKA_HTTP_CLIENT_ANALYTICS_MAX_CHUNK_SIZE}
-          #     max-chunk-size = ${?OTOROSHI_AKKA_HTTP_CLIENT_ANALYTICS_MAX_CHUNK_SIZE}
-          #     max-content-length = ${?AKKA_HTTP_CLIENT_ANALYTICS_MAX_CONTENT_LENGHT}
-          #     max-content-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_ANALYTICS_MAX_CONTENT_LENGHT}
-          #     max-to-strict-bytes = ${?AKKA_HTTP_CLIENT_ANALYTICS_MAX_TO_STRICT_BYTES}
-          #     max-to-strict-bytes = ${?OTOROSHI_AKKA_HTTP_CLIENT_ANALYTICS_MAX_TO_STRICT_BYTES}
-          #   }
-          # }
-        }
-      }
-    }
-  }
-  headers { # the default header values for specific otoroshi headers
-  }
-  requests {
-    validate = ${?OTOROSHI_REQUESTS_VALIDATE}
-    maxUrlLength = ${akka.http.parsing.max-uri-length}
-    maxCookieLength = ${akka.http.parsing.max-header-value-length}
-    maxHeaderNameLength = ${akka.http.parsing.max-header-name-length}
-    maxHeaderValueLength = ${akka.http.parsing.max-header-value-length}
-  }
-  jmx {
-    enabled = ${?OTOROSHI_JMX_ENABLED}
-    port = ${?OTOROSHI_JMX_PORT}
-  }
-  loggers {
-  }
-  provider {
-    dashboardUrl = ${?OTOROSHI_PROVIDER_DASHBOARD_URL}
-    jsUrl = ${?OTOROSHI_PROVIDER_JS_URL}
-    cssUrl = ${?OTOROSHI_PROVIDER_CSS_URL}
-    secret = ${?OTOROSHI_PROVIDER_SECRET}
-    title = ${?OTOROSHI_PROVIDER_TITLE}
-  }
-  healthcheck {
-    workers = ${?OTOROSHI_HEALTHCHECK_WORKERS}
-    block-on-red = ${?OTOROSHI_HEALTHCHECK_BLOCK_ON_RED}
-    block-on-red = ${?OTOROSHI_HEALTHCHECK_BLOCK_ON_500}
-    ttl = ${?OTOROSHI_HEALTHCHECK_TTL}
-    ttl-only = ${?OTOROSHI_HEALTHCHECK_TTL_ONLY}
-  }
-  vaults {
-    enabled = ${?OTOROSHI_VAULTS_ENABLED}
-    secrets-ttl = ${?OTOROSHI_VAULTS_SECRETS_TTL}
-    secrets-error-ttl = ${?OTOROSHI_VAULTS_SECRETS_ERROR_TTL}
-    cached-secrets = ${?OTOROSHI_VAULTS_CACHED_SECRETS}
-    read-timeout = ${?otoroshi.vaults.read-ttl}
-    read-timeout = ${?OTOROSHI_VAULTS_READ_TTL}
-    read-timeout = ${?OTOROSHI_VAULTS_READ_TIMEOUT}
-    parallel-fetchs = ${?OTOROSHI_VAULTS_PARALLEL_FETCHS}
-    leader-fetch-only = ${?OTOROSHI_VAULTS_LEADER_FETCH_ONLY}
-    env {
-      prefix = ${?OTOROSHI_VAULTS_ENV_PREFIX}
-    }
-    local {
-      root = ${?OTOROSHI_VAULTS_LOCAL_ROOT}
-    }
-    # hashicorpvault {
-    # }
-  }
-  tunnels {
-    enabled = ${?OTOROSHI_TUNNELS_ENABLED}
-    worker-ws = ${?OTOROSHI_TUNNELS_WORKER_WS}
-    worker-use-internal-ports = ${?OTOROSHI_TUNNELS_WORKER_USE_INTERNAL_PORTS}
-    worker-use-loadbalancing = ${?OTOROSHI_TUNNELS_WORKER_USE_LOADBALANCING}
-    default {
-      enabled = ${?OTOROSHI_TUNNELS_DEFAULT_ENABLED}
-      id = ${?OTOROSHI_TUNNELS_DEFAULT_ID}
-      name = ${?OTOROSHI_TUNNELS_DEFAULT_NAME}
-      url = ${?OTOROSHI_TUNNELS_DEFAULT_URL}
-      host = ${?OTOROSHI_TUNNELS_DEFAULT_HOST}
-      clientId = ${?OTOROSHI_TUNNELS_DEFAULT_CLIENT_ID}
-      clientSecret = ${?OTOROSHI_TUNNELS_DEFAULT_CLIENT_SECRET}
-      export-routes = ${?OTOROSHI_TUNNELS_DEFAULT_EXPORT_ROUTES} # send routes information to the remote otoroshi instance to facilitate remote route exposition
-      export-routes-tag = ${?OTOROSHI_TUNNELS_DEFAULT_EXPORT_TAG} # only send routes information if the route has this tag
-      proxy {
-      }
-    }
-  }
-  admin-extensions {
-    enabled =
${?OTOROSHI_ADMIN_EXTENSIONS_ENABLED}
-    configurations {
-      otoroshi_extensions_foo {
-      }
-    }
-  }
-}
-http.port = ${?otoroshi.http.port} # the main http port for the otoroshi server
-http.port = ${?PORT} # the main http port for the otoroshi server
-http.port = ${?OTOROSHI_PORT} # the main http port for the otoroshi server
-http.port = ${?OTOROSHI_HTTP_PORT} # the main http port for the otoroshi server
-play.server.http.port = ${http.port} # the main http port for the otoroshi server
-play.server.http.port = ${?PORT} # the main http port for the otoroshi server
-play.server.http.port = ${?OTOROSHI_PORT} # the main http port for the otoroshi server
-play.server.http.port = ${?OTOROSHI_HTTP_PORT} # the main http port for the otoroshi server
-https.port = ${?otoroshi.https.port} # the main https port for the otoroshi server
-https.port = ${?HTTPS_PORT} # the main https port for the otoroshi server
-https.port = ${?OTOROSHI_HTTPS_PORT} # the main https port for the otoroshi server
-play.server.https.keyStoreDumpPath = ${?HTTPS_KEYSTORE_DUMP_PATH} # the file path where the TLSContext will be dumped (for debugging purposes only)
-play.server.https.keyStoreDumpPath = ${?OTOROSHI_HTTPS_KEYSTORE_DUMP_PATH} # the file path where the TLSContext will be dumped (for debugging purposes only)
-play.http.secret.key = ${otoroshi.secret} # the secret used to sign session cookies
-play.http.secret.key = ${?PLAY_CRYPTO_SECRET} # the secret used to sign session cookies
-play.http.secret.key = ${?OTOROSHI_CRYPTO_SECRET} # the secret used to sign session cookies
-play.server.http.idleTimeout = ${?PLAY_SERVER_IDLE_TIMEOUT} # the default server idle timeout
-play.server.http.idleTimeout = ${?OTOROSHI_SERVER_IDLE_TIMEOUT} # the default server idle timeout
-play.server.akka.requestTimeout = ${?PLAY_SERVER_REQUEST_TIMEOUT} # the default server request timeout (for the akka server specifically)
-play.server.akka.requestTimeout = ${?OTOROSHI_SERVER_REQUEST_TIMEOUT} # the default server request timeout (for the akka server specifically)
-http2.enabled = ${?otoroshi.http2.enabled}
-http2.enabled = ${?HTTP2_ENABLED} # enable HTTP2 support
-http2.enabled = ${?OTOROSHI_HTTP2_ENABLED} # enable HTTP2 support
-play.server.https.keyStore.path=${?HTTPS_KEYSTORE_PATH} # settings for the default server keystore
-play.server.https.keyStore.path=${?OTOROSHI_HTTPS_KEYSTORE_PATH} # settings for the default server keystore
-play.server.https.keyStore.type=${?HTTPS_KEYSTORE_TYPE} # settings for the default server keystore
-play.server.https.keyStore.type=${?OTOROSHI_HTTPS_KEYSTORE_TYPE} # settings for the default server keystore
-play.server.https.keyStore.password=${?HTTPS_KEYSTORE_PASSWORD} # settings for the default server keystore
-play.server.https.keyStore.password=${?OTOROSHI_HTTPS_KEYSTORE_PASSWORD} # settings for the default server keystore
-play.server.https.keyStore.algorithm=${?HTTPS_KEYSTORE_ALGO} # settings for the default server keystore
-play.server.https.keyStore.algorithm=${?OTOROSHI_HTTPS_KEYSTORE_ALGO} # settings for the default server keystore
-play.server.websocket.frame.maxLength = ${?OTOROSHI_WEBSOCKET_FRAME_MAX_LENGTH}
-play.http {
-  session {
-    secure = ${?SESSION_SECURE_ONLY} # the otoroshi backoffice cookie should be exchanged over https only
-    secure = ${?OTOROSHI_SESSION_SECURE_ONLY} # the otoroshi backoffice cookie should be exchanged over https only
-    maxAge = ${?SESSION_MAX_AGE} # the otoroshi backoffice cookie max age
-    maxAge = ${?OTOROSHI_SESSION_MAX_AGE} # the otoroshi backoffice cookie max age
-    # domain = "."${?app.domain} # the otoroshi backoffice cookie domain
-    domain = "."${otoroshi.domain} # the otoroshi backoffice cookie domain
-    domain = ${?SESSION_DOMAIN} # the otoroshi backoffice cookie domain
-    domain = ${?OTOROSHI_SESSION_DOMAIN} # the otoroshi backoffice cookie domain
-    cookieName = ${?SESSION_NAME} # the otoroshi backoffice cookie name
-    cookieName = ${?OTOROSHI_SESSION_NAME} # the otoroshi backoffice cookie name
-  }
-}
-akka { # akka specific configuration
-  actor {
-    default-dispatcher {
-      fork-join-executor {
-        parallelism-factor = ${?OTOROSHI_AKKA_DISPATCHER_PARALLELISM_FACTOR}
-        parallelism-min = ${?OTOROSHI_AKKA_DISPATCHER_PARALLELISM_MIN}
-        parallelism-max = ${?OTOROSHI_AKKA_DISPATCHER_PARALLELISM_MAX}
-        task-peeking-mode = ${?OTOROSHI_AKKA_DISPATCHER_TASK_PEEKING_MODE}
-      }
-      throughput = ${?OTOROSHI_AKKA_DISPATCHER_THROUGHPUT}
-    }
-  }
-  http {
-    server {
-      max-connections = ${?OTOROSHI_AKKA_HTTP_SERVER_MAX_CONNECTIONS}
-      pipelining-limit = ${?OTOROSHI_AKKA_HTTP_SERVER_PIPELINING_LIMIT}
-      backlog = ${?OTOROSHI_AKKA_HTTP_SERVER_BACKLOG}
-      socket-options {
-      }
-      http2 {
-      }
-    }
-    client {
-      socket-options {
-      }
-    }
-    host-connection-pool {
-      max-connections = ${?OTOROSHI_AKKA_HTTP_SERVER_HOST_CONNECTION_POOL_MAX_CONNECTIONS}
-      max-open-requests = ${?OTOROSHI_AKKA_HTTP_SERVER_HOST_CONNECTION_POOL_MAX_OPEN_REQUESTS}
-      pipelining-limit = ${?OTOROSHI_AKKA_HTTP_SERVER_HOST_CONNECTION_POOL_PIPELINING_LIMIT}
-      client {
-        socket-options {
-        }
-      }
-    }
-    parsing {
-      max-uri-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_URI_LENGTH}
-      max-method-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_METHOD_LENGTH}
-      max-response-reason-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_RESPONSE_REASON_LENGTH}
-      max-header-name-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_HEADER_NAME_LENGTH}
-      max-header-value-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_HEADER_VALUE_LENGTH}
-      max-header-count = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_HEADER_COUNT}
-      max-chunk-ext-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_CHUNK_EXT_LENGTH}
-      max-chunk-size = ${?AKKA_HTTP_SERVER_MAX_CHUNK_SIZE}
-      max-chunk-size = ${?OTOROSHI_AKKA_HTTP_SERVER_MAX_CHUNK_SIZE}
-      max-chunk-size = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_CHUNK_SIZE}
-      max-content-length = ${?AKKA_HTTP_SERVER_MAX_CONTENT_LENGHT}
-      max-content-length = ${?OTOROSHI_AKKA_HTTP_SERVER_MAX_CONTENT_LENGHT}
-      max-content-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_CONTENT_LENGHT}
-    }
-  }
-}
\ No newline at end of file
diff --git a/manual/src/main/paradox/snippets/reference.conf b/manual/src/main/paradox/snippets/reference.conf
index 8a7b6ff3e3..e69de29bb2 100644
--- a/manual/src/main/paradox/snippets/reference.conf
+++ b/manual/src/main/paradox/snippets/reference.conf
@@ -1,1657 +0,0 @@
-
-app {
-  storage = "inmemory" # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql
-  storage = ${?APP_STORAGE} # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql
-  storage = ${?OTOROSHI_STORAGE} # the storage used by otoroshi. possible values are lettuce (for redis), inmemory, file, http, s3, cassandra, postgresql
-  storageRoot = "otoroshi" # the prefix used for storage keys
-  storageRoot = ${?APP_STORAGE_ROOT} # the prefix used for storage keys
-  storageRoot = ${?OTOROSHI_STORAGE_ROOT} # the prefix used for storage keys
-  eventsName = "otoroshi" # the name of the event producer
-  eventsName = ${?APP_EVENTS_NAME} # the name of the event producer
-  eventsName = ${?OTOROSHI_EVENTS_NAME} # the name of the event producer
-  importFrom = ${?APP_IMPORT_FROM} # file path to import otoroshi initial configuration
-  importFrom = ${?OTOROSHI_IMPORT_FROM} # file path to import otoroshi initial configuration
-  env = "prod" # env name, should always be prod except in dev mode
-  env = ${?APP_ENV} # env name, should always be prod except in dev mode
-  env = ${?OTOROSHI_ENV} # env name, should always be prod except in dev mode
-  liveJs = false # enable live JS loading for dev mode
-  domain = "oto.tools" # default domain for basic otoroshi services
-  domain = ${?APP_DOMAIN} # default domain for basic otoroshi services
-  domain = ${?OTOROSHI_DOMAIN} # default domain for basic otoroshi services
-  commitId = "HEAD"
- commitId = ${?COMMIT_ID} - commitId = ${?OTOROSHI_COMMIT_ID} - rootScheme = "http" # default root scheme when composing urls - rootScheme = ${?APP_ROOT_SCHEME} # default root scheme when composing urls - rootScheme = ${?OTOROSHI_ROOT_SCHEME} # default root scheme when composing urls - throttlingWindow = 10 # the number of second used to compute throttling number - throttlingWindow = ${?THROTTLING_WINDOW} # the number of second used to compute throttling number - throttlingWindow = ${?OTOROSHI_THROTTLING_WINDOW} # the number of second used to compute throttling number - checkForUpdates = true # enable automatic version update checks - checkForUpdates = ${?CHECK_FOR_UPDATES} # enable automatic version update checks - checkForUpdates = ${?OTOROSHI_CHECK_FOR_UPDATES} # enable automatic version update checks - overheadThreshold = 500.0 # the value threshold (in milliseconds) used to send HighOverheadAlert - overheadThreshold = ${?OVERHEAD_THRESHOLD} # the value threshold (in milliseconds) used to send HighOverheadAlert - overheadThreshold = ${?OTOROSHI_OVERHEAD_THRESHOLD} # the value threshold (in milliseconds) used to send HighOverheadAlert - adminLogin = ${?OTOROSHI_INITIAL_ADMIN_LOGIN} # the initial admin login - adminPassword = ${?OTOROSHI_INITIAL_ADMIN_PASSWORD} # the initial admin password - initialCustomization = ${?OTOROSHI_INITIAL_CUSTOMIZATION} # otoroshi inital configuration that will be merged with a new confguration. 
Shaped like an otoroshi export - boot { - failOnTimeout = false # otoroshi will exit if a subsystem failed its init - failOnTimeout = ${?OTOROSHI_BOOT_FAIL_ON_TIMEOUT} # otoroshi will exit if a subsystem failed its init - - globalWait = true # should we wait until everything is setup to accept http requests - globalWait = ${?OTOROSHI_BOOT_GLOBAL_WAIT} # should we wait until everything is setup to accept http requests - globalWaitTimeout = 60000 # max wait before accepting requests - globalWaitTimeout = ${?OTOROSHI_BOOT_GLOBAL_WAIT_TIMEOUT} # max wait before accepting requests - - waitForPluginsSearch = true # should we wait for classpath plugins search before accepting http requests - waitForPluginsSearch = ${?OTOROSHI_BOOT_WAIT_FOR_PLUGINS_SEARCH} # should we wait for classpath plugins search before accepting http requests - waitForPluginsSearchTimeout = 20000 # max wait for classpath plugins search before accepting http requests - waitForPluginsSearchTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_PLUGINS_SEARCH_TIMEOUT} # max wait for classpath plugins search before accepting http requests - - waitForScriptsCompilation = true # should we wait for plugins compilation before accepting http requests - waitForScriptsCompilation = ${?OTOROSHI_BOOT_WAIT_FOR_SCRIPTS_COMPILATION} # should we wait for plugins compilation before accepting http requests - waitForScriptsCompilationTimeout = 30000 # max wait for plugins compilation before accepting http requests - waitForScriptsCompilationTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_SCRIPTS_COMPILATION_TIMEOUT} # max wait for plugins compilation before accepting http requests - - waitForTlsInit = true # should we wait for first TLS context initialization before accepting http requests - waitForTlsInit = ${?OTOROSHI_BOOT_WAIT_FOR_TLS_INIT} # should we wait for first TLS context initialization before accepting http requests - waitForTlsInitTimeout = 10000 # max wait for first TLS context initialization before accepting http requests - 
waitForTlsInitTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_TLS_INIT_TIMEOUT} # max wait for first TLS context initialization before accepting http requests - - waitForFirstClusterFetch = true # should we wait for first cluster initialization before accepting http requests - waitForFirstClusterFetch = ${?OTOROSHI_BOOT_WAIT_FOR_FIRST_CLUSTER_FETCH} # should we wait for first cluster initialization before accepting http requests - waitForFirstClusterFetchTimeout = 10000 # max wait for first cluster initialization before accepting http requests - waitForFirstClusterFetchTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_FIRST_CLUSTER_TIMEOUT} # max wait for first cluster initialization before accepting http requests - - waitForFirstClusterStateCache = true # should we wait for first cluster initialization before accepting http requests - waitForFirstClusterStateCache = ${?OTOROSHI_BOOT_WAIT_FOR_FIRST_CLUSTER_STATE_CACHE} # should we wait for first cluster initialization before accepting http requests - waitForFirstClusterStateCacheTimeout = 10000 # max wait for first cluster initialization before accepting http requests - waitForFirstClusterStateCacheTimeout = ${?OTOROSHI_BOOT_WAIT_FOR_FIRST_CLUSTER_STATE_CACHE_TIMEOUT} # max wait for first cluster initialization before accepting http requests - } - instance { - instanceId = ${?OTOROSHI_INSTANCE_ID} # the instance id - number = 0 # the instance number. Can be found in otoroshi events - number = ${?OTOROSHI_INSTANCE_NUMBER} # the instance number. Can be found in otoroshi events - number = ${?INSTANCE_NUMBER} # the instance number. 
Can be found in otoroshi events
- name = "otoroshi" # instance name
- name = ${?OTOROSHI_INSTANCE_NAME} # instance name
- zone = "local" # instance zone (optional)
- zone = ${?OTOROSHI_INSTANCE_ZONE} # instance zone (optional)
- region = "local" # instance region (optional)
- region = ${?OTOROSHI_INSTANCE_REGION} # instance region (optional)
- dc = "local" # instance dc (optional)
- dc = ${?OTOROSHI_INSTANCE_DATACENTER} # instance dc (optional)
- provider = "local" # instance provider (optional)
- provider = ${?OTOROSHI_INSTANCE_PROVIDER} # instance provider (optional)
- rack = "local" # instance rack (optional)
- rack = ${?OTOROSHI_INSTANCE_RACK} # instance rack (optional)
- title = ${?OTOROSHI_INSTANCE_TITLE} # the title displayed in UI top left
- }
- longRequestTimeout = 10800000
- longRequestTimeout = ${?OTOROSHI_PROXY_LONG_REQUEST_TIMEOUT}
- }
- health {
- limit = 1000 # the value threshold (in milliseconds) used to indicate if an otoroshi instance is healthy or not
- limit = ${?HEALTH_LIMIT} # the value threshold (in milliseconds) used to indicate if an otoroshi instance is healthy or not
- limit = ${?OTOROSHI_HEALTH_LIMIT} # the value threshold (in milliseconds) used to indicate if an otoroshi instance is healthy or not
- accessKey = ${?HEALTH_ACCESS_KEY} # the key to access the /health endpoint
- accessKey = ${?OTOROSHI_HEALTH_ACCESS_KEY} # the key to access the /health endpoint
- }
- snowflake {
- seed = 0 # the seed number used to generate unique ids. Should be different for every instance
- seed = ${?INSTANCE_NUMBER} # the seed number used to generate unique ids. Should be different for every instance
- seed = ${?OTOROSHI_INSTANCE_NUMBER} # the seed number used to generate unique ids. Should be different for every instance
- seed = ${?SNOWFLAKE_SEED} # the seed number used to generate unique ids. Should be different for every instance
- seed = ${?OTOROSHI_SNOWFLAKE_SEED} # the seed number used to generate unique ids.
Should be different for every instance
- }
- events {
- maxSize = 1000 # the number of events kept in the datastore
- maxSize = ${?MAX_EVENTS_SIZE} # the number of events kept in the datastore
- maxSize = ${?OTOROSHI_MAX_EVENTS_SIZE} # the number of events kept in the datastore
- }
- exposed-ports {
- http = ${?APP_EXPOSED_PORTS_HTTP} # the exposed http port for otoroshi (when in a container or behind a proxy)
- http = ${?OTOROSHI_EXPOSED_PORTS_HTTP} # the exposed http port for otoroshi (when in a container or behind a proxy)
- https = ${?APP_EXPOSED_PORTS_HTTPS} # the exposed https port for otoroshi (when in a container or behind a proxy)
- https = ${?OTOROSHI_EXPOSED_PORTS_HTTPS} # the exposed https port for otoroshi (when in a container or behind a proxy)
- }
- backoffice {
- exposed = true # expose the backoffice ui
- exposed = ${?APP_BACKOFFICE_EXPOSED} # expose the backoffice ui
- exposed = ${?OTOROSHI_BACKOFFICE_EXPOSED} # expose the backoffice ui
- subdomain = "otoroshi" # the backoffice subdomain
- subdomain = ${?APP_BACKOFFICE_SUBDOMAIN} # the backoffice subdomain
- subdomain = ${?OTOROSHI_BACKOFFICE_SUBDOMAIN} # the backoffice subdomain
- domains = [] # the backoffice domains
- domainsStr = ${?APP_BACKOFFICE_DOMAINS} # the backoffice domains
- domainsStr = ${?OTOROSHI_BACKOFFICE_DOMAINS} # the backoffice domains
- useNewEngine = true # avoid backoffice admin api proxy
- useNewEngine = ${?OTOROSHI_BACKOFFICE_USE_NEW_ENGINE} # avoid backoffice admin api proxy
- usePlay = true # avoid backoffice http call for admin api
- usePlay = ${?OTOROSHI_BACKOFFICE_USE_PLAY} # avoid backoffice http call for admin api
- session {
- exp = 86400000 # the backoffice cookie expiration
- exp = ${?APP_BACKOFFICE_SESSION_EXP} # the backoffice cookie expiration
- exp = ${?OTOROSHI_BACKOFFICE_SESSION_EXP} # the backoffice cookie expiration
- }
- }
- privateapps {
- subdomain = "privateapps" # privateapps (proxy sso) domain
- subdomain = ${?APP_PRIVATEAPPS_SUBDOMAIN} # privateapps
(proxy sso) domain - subdomain = ${?OTOROSHI_PRIVATEAPPS_SUBDOMAIN} # privateapps (proxy sso) domain - domains = [] - domainsStr = ${?APP_PRIVATEAPPS_DOMAINS} - domainsStr = ${?OTOROSHI_PRIVATEAPPS_DOMAINS} - session { - exp = 86400000 # the privateapps cookie expiration - exp = ${?APP_PRIVATEAPPS_SESSION_EXP} # the privateapps cookie expiration - exp = ${?OTOROSHI_PRIVATEAPPS_SESSION_EXP} # the privateapps cookie expiration - } - } - adminapi { - exposed = true # expose the admin api - exposed = ${?ADMIN_API_EXPOSED} # expose the admin api - exposed = ${?OTOROSHI_ADMIN_API_EXPOSED} # expose the admin api - targetSubdomain = "otoroshi-admin-internal-api" # admin api target subdomain as targeted by otoroshi service - targetSubdomain = ${?ADMIN_API_TARGET_SUBDOMAIN} # admin api target subdomain as targeted by otoroshi service - targetSubdomain = ${?OTOROSHI_ADMIN_API_TARGET_SUBDOMAIN} # admin api target subdomain as targeted by otoroshi service - exposedSubdomain = "otoroshi-api" # admin api exposed subdomain as exposed by otoroshi service - exposedSubdomain = ${?ADMIN_API_EXPOSED_SUBDOMAIN} # admin api exposed subdomain as exposed by otoroshi service - exposedSubdomain = ${?OTOROSHI_ADMIN_API_EXPOSED_SUBDOMAIN} # admin api exposed subdomain as exposed by otoroshi service - additionalExposedDomain = ${?ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN} # admin api additional exposed subdomain as exposed by otoroshi service - additionalExposedDomain = ${?OTOROSHI_ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN} # admin api additional exposed subdomain as exposed by otoroshi service - domains = [] - domainsStr = ${?ADMIN_API_DOMAINS} - domainsStr = ${?OTOROSHI_ADMIN_API_DOMAINS} - exposedDomains = [] - exposedDomainsStr = ${?ADMIN_API_EXPOSED_DOMAINS} - exposedDomainsStr = ${?OTOROSHI_ADMIN_API_EXPOSED_DOMAINS} - defaultValues { - backOfficeGroupId = "admin-api-group" # default value for admin api service group - backOfficeGroupId = ${?ADMIN_API_GROUP} # default value for admin api service 
group - backOfficeGroupId = ${?OTOROSHI_ADMIN_API_GROUP} # default value for admin api service group - backOfficeApiKeyClientId = "admin-api-apikey-id" # default value for admin api apikey id - backOfficeApiKeyClientId = ${?ADMIN_API_CLIENT_ID} # default value for admin api apikey id - backOfficeApiKeyClientId = ${?OTOROSHI_ADMIN_API_CLIENT_ID} # default value for admin api apikey id - backOfficeApiKeyClientSecret = "admin-api-apikey-secret" # default value for admin api apikey secret - backOfficeApiKeyClientSecret = ${?otoroshi.admin-api-secret} # default value for admin api apikey secret - backOfficeApiKeyClientSecret = ${?OTOROSHI_otoroshi.admin-api-secret} # default value for admin api apikey secret - backOfficeApiKeyClientSecret = ${?ADMIN_API_CLIENT_SECRET} # default value for admin api apikey secret - backOfficeApiKeyClientSecret = ${?OTOROSHI_ADMIN_API_CLIENT_SECRET} # default value for admin api apikey secret - backOfficeServiceId = "admin-api-service" # default value for admin api service id - backOfficeServiceId = ${?ADMIN_API_SERVICE_ID} # default value for admin api service id - backOfficeServiceId = ${?OTOROSHI_ADMIN_API_SERVICE_ID} # default value for admin api service id - } - proxy { - https = false # backoffice proxy admin api over https - https = ${?ADMIN_API_HTTPS} # backoffice proxy admin api over https - https = ${?OTOROSHI_ADMIN_API_HTTPS} # backoffice proxy admin api over https - local = true # backoffice proxy admin api on localhost - local = ${?ADMIN_API_LOCAL} # backoffice proxy admin api on localhost - local = ${?OTOROSHI_ADMIN_API_LOCAL} # backoffice proxy admin api on localhost - } - } - claim { - sharedKey = "secret" # the default secret used to sign otoroshi exchange protocol tokens - sharedKey = ${?CLAIM_SHAREDKEY} # the default secret used to sign otoroshi exchange protocol tokens - sharedKey = ${?OTOROSHI_CLAIM_SHAREDKEY} # the default secret used to sign otoroshi exchange protocol tokens - } - webhooks { - } - redis { # 
configuration to fetch/store otoroshi state from a redis datastore using rediscala - host = "localhost" - host = ${?REDIS_HOST} - host = ${?OTOROSHI_REDIS_HOST} - port = 6379 - port = ${?REDIS_PORT} - port = ${?OTOROSHI_REDIS_PORT} - password = ${?REDIS_PASSWORD} - password = ${?OTOROSHI_REDIS_PASSWORD} - windowSize = 99 - windowSize = ${?REDIS_WINDOW_SIZE} - windowSize = ${?OTOROSHI_REDIS_WINDOW_SIZE} - slaves = [] - slavesStr = ${?REDIS_SLAVES} - slavesStr = ${?OTOROSHI_REDIS_SLAVES} - slavesStr = ${?REDIS_MEMBERS} - slavesStr = ${?OTOROSHI_REDIS_MEMBERS} - useScan = false - useScan = ${?REDIS_USE_SCAN} - useScan = ${?OTOROSHI_REDIS_USE_SCAN} - - pool { - members = [] - members = ${?REDIS_POOL_MEMBERS} - members = ${?OTOROSHI_REDIS_POOL_MEMBERS} - } - - mpool { - members = [] - membersStr = ${?REDIS_MPOOL_MEMBERS} - membersStr = ${?OTOROSHI_REDIS_MPOOL_MEMBERS} - } - - lf { - master { - host = ${?REDIS_LF_HOST} - host = ${?OTOROSHI_REDIS_LF_HOST} - port = ${?REDIS_LF_PORT} - port = ${?OTOROSHI_REDIS_LF_PORT} - password = ${?REDIS_LF_PASSWORD} - password = ${?OTOROSHI_REDIS_LF_PASSWORD} - } - slaves = [] - slavesStr = ${?REDIS_LF_SLAVES} - slavesStr = ${?OTOROSHI_REDIS_LF_SLAVES} - slavesStr = ${?REDIS_LF_MEMBERS} - slavesStr = ${?OTOROSHI_REDIS_LF_MEMBERS} - } - - sentinels { - master = ${?REDIS_SENTINELS_MASTER} - master = ${?OTOROSHI_REDIS_SENTINELS_MASTER} - password = ${?REDIS_SENTINELS_PASSWORD} - password = ${?OTOROSHI_REDIS_SENTINELS_PASSWORD} - db = ${?REDIS_SENTINELS_DB} - db = ${?OTOROSHI_REDIS_SENTINELS_DB} - name = ${?REDIS_SENTINELS_NAME} - name = ${?OTOROSHI_REDIS_SENTINELS_NAME} - members = [] - membersStr = ${?REDIS_SENTINELS_MEMBERS} - membersStr = ${?OTOROSHI_REDIS_SENTINELS_MEMBERS} - - lf { - master = ${?REDIS_SENTINELS_LF_MASTER} - master = ${?OTOROSHI_REDIS_SENTINELS_LF_MASTER} - members = [] - membersStr = ${?REDIS_SENTINELS_LF_MEMBERS} - membersStr = ${?OTOROSHI_REDIS_SENTINELS_LF_MEMBERS} - } - } - - cluster { - members = [] - membersStr 
= ${?REDIS_CLUSTER_MEMBERS} - membersStr = ${?OTOROSHI_REDIS_CLUSTER_MEMBERS} - } - - lettuce { # configuration to fetch/store otoroshi state from a redis datastore using the lettuce driver (the next default one) - connection = "default" - connection = ${?REDIS_LETTUCE_CONNECTION} - connection = ${?OTOROSHI_REDIS_LETTUCE_CONNECTION} - uri = ${?REDIS_LETTUCE_URI} - uri = ${?OTOROSHI_REDIS_LETTUCE_URI} - uri = ${?REDIS_URL} - uri = ${?OTOROSHI_REDIS_URL} - uris = [] - urisStr = ${?REDIS_LETTUCE_URIS} - urisStr = ${?OTOROSHI_REDIS_LETTUCE_URIS} - readFrom = "MASTER_PREFERRED" - readFrom = ${?REDIS_LETTUCE_READ_FROM} - readFrom = ${?OTOROSHI_REDIS_LETTUCE_READ_FROM} - startTLS = false - startTLS = ${?REDIS_LETTUCE_START_TLS} - startTLS = ${?OTOROSHI_REDIS_LETTUCE_START_TLS} - verifyPeers = true - verifyPeers = ${?REDIS_LETTUCE_VERIFY_PEERS} - verifyPeers = ${?OTOROSHI_REDIS_LETTUCE_VERIFY_PEERS} - } - } - inmemory { # configuration to fetch/store otoroshi state in memory - windowSize = 99 - windowSize = ${?INMEMORY_WINDOW_SIZE} - windowSize = ${?OTOROSHI_INMEMORY_WINDOW_SIZE} - experimental = false - experimental = ${?INMEMORY_EXPERIMENTAL_STORE} - experimental = ${?OTOROSHI_INMEMORY_EXPERIMENTAL_STORE} - optimized = false - optimized = ${?INMEMORY_OPTIMIZED} - optimized = ${?OTOROSHI_INMEMORY_OPTIMIZED} - modern = false - modern = ${?INMEMORY_MODERN} - modern = ${?OTOROSHI_INMEMORY_MODERN} - } - filedb { # configuration to fetch/store otoroshi state from a file - windowSize = 99 - windowSize = ${?FILEDB_WINDOW_SIZE} - windowSize = ${?OTOROSHI_FILEDB_WINDOW_SIZE} - path = "./filedb/state.ndjson" - path = ${?FILEDB_PATH} - path = ${?OTOROSHI_FILEDB_PATH} - } - httpdb { # configuration to fetch/store otoroshi state from an http endpoint - url = "http://127.0.0.1:8888/worker-0/state.json" - headers = {} - timeout = 10000 - pollEvery = 10000 - } - s3db { # configuration to fetch/store otoroshi state from a S3 bucket - bucket = "otoroshi-states" - bucket = 
${?OTOROSHI_DB_S3_BUCKET}
- endpoint = "https://otoroshi-states.foo.bar"
- endpoint = ${?OTOROSHI_DB_S3_ENDPOINT}
- region = "eu-west-1"
- region = ${?OTOROSHI_DB_S3_REGION}
- access = "secret"
- access = ${?OTOROSHI_DB_S3_ACCESS}
- secret = "secret"
- secret = ${?OTOROSHI_DB_S3_SECRET}
- key = "/otoroshi/states/state"
- key = ${?OTOROSHI_DB_S3_KEY}
- chunkSize = 8388608
- chunkSize = ${?OTOROSHI_DB_S3_CHUNK_SIZE}
- v4auth = true
- v4auth = ${?OTOROSHI_DB_S3_V4_AUTH}
- writeEvery = 60000 # write interval
- writeEvery = ${?OTOROSHI_DB_S3_WRITE_EVERY} # write interval
- acl = "Private"
- acl = ${?OTOROSHI_DB_S3_ACL}
- }
- pg { # postgresql settings. everything possible with the client
- uri = ${?PG_URI}
- uri = ${?OTOROSHI_PG_URI}
- uri = ${?POSTGRESQL_ADDON_URI}
- uri = ${?OTOROSHI_POSTGRESQL_ADDON_URI}
- poolSize = 20
- poolSize = ${?PG_POOL_SIZE}
- poolSize = ${?OTOROSHI_PG_POOL_SIZE}
- port = 5432
- port = ${?PG_PORT}
- port = ${?OTOROSHI_PG_PORT}
- host = "localhost"
- host = ${?PG_HOST}
- host = ${?OTOROSHI_PG_HOST}
- database = "otoroshi"
- database = ${?PG_DATABASE}
- database = ${?OTOROSHI_PG_DATABASE}
- user = "otoroshi"
- user = ${?PG_USER}
- user = ${?OTOROSHI_PG_USER}
- password = "otoroshi"
- password = ${?PG_PASSWORD}
- password = ${?OTOROSHI_PG_PASSWORD}
- logQueries = ${?PG_DEBUG_QUERIES}
- logQueries = ${?OTOROSHI_PG_DEBUG_QUERIES}
- avoidJsonPath = false
- avoidJsonPath = ${?PG_AVOID_JSON_PATH}
- avoidJsonPath = ${?OTOROSHI_PG_AVOID_JSON_PATH}
- optimized = true
- optimized = ${?PG_OPTIMIZED}
- optimized = ${?OTOROSHI_PG_OPTIMIZED}
- connect-timeout = ${?PG_CONNECT_TIMEOUT}
- connect-timeout = ${?OTOROSHI_PG_CONNECT_TIMEOUT}
- idle-timeout = ${?PG_IDLE_TIMEOUT}
- idle-timeout = ${?OTOROSHI_PG_IDLE_TIMEOUT}
- log-activity = ${?PG_LOG_ACTIVITY}
- log-activity = ${?OTOROSHI_PG_LOG_ACTIVITY}
- pipelining-limit = ${?PG_PIPELINING_LIMIT}
- pipelining-limit = ${?OTOROSHI_PG_PIPELINING_LIMIT}
- ssl {
- enabled = false
- enabled = ${?PG_SSL_ENABLED} -
enabled = ${?OTOROSHI_PG_SSL_ENABLED} - mode = "verify_ca" - mode = ${?PG_SSL_MODE} - mode = ${?OTOROSHI_PG_SSL_MODE} - trusted-certs-path = [] - trusted-certs = [] - trusted-cert-path = ${?PG_SSL_TRUSTED_CERT_PATH} - trusted-cert-path = ${?OTOROSHI_PG_SSL_TRUSTED_CERT_PATH} - trusted-cert = ${?PG_SSL_TRUSTED_CERT} - trusted-cert = ${?OTOROSHI_PG_SSL_TRUSTED_CERT} - client-certs-path = [] - client-certs = [] - client-cert-path = ${?PG_SSL_CLIENT_CERT_PATH} - client-cert-path = ${?OTOROSHI_PG_SSL_CLIENT_CERT_PATH} - client-cert = ${?PG_SSL_CLIENT_CERT} - client-cert = ${?OTOROSHI_PG_SSL_CLIENT_CERT} - trust-all = ${?PG_SSL_TRUST_ALL} - trust-all = ${?OTOROSHI_PG_SSL_TRUST_ALL} - } - } - cassandra { # cassandra settings. everything possible with the client - windowSize = 99 - windowSize = ${?CASSANDRA_WINDOW_SIZE} - windowSize = ${?OTOROSHI_CASSANDRA_WINDOW_SIZE} - host = "127.0.0.1" - host = ${?CASSANDRA_HOST} - host = ${?OTOROSHI_CASSANDRA_HOST} - port = 9042 - port = ${?CASSANDRA_PORT} - port = ${?OTOROSHI_CASSANDRA_PORT} - replicationFactor = 1 - replicationFactor = ${?CASSANDRA_REPLICATION_FACTOR} - replicationFactor = ${?OTOROSHI_CASSANDRA_REPLICATION_FACTOR} - replicationOptions = ${?CASSANDRA_REPLICATION_OPTIONS} - replicationOptions = ${?OTOROSHI_CASSANDRA_REPLICATION_OPTIONS} - durableWrites = true - durableWrites = ${?CASSANDRA_DURABLE_WRITES} - durableWrites = ${?OTOROSHI_CASSANDRA_DURABLE_WRITES} - basic.contact-points = [ ${app.cassandra.host}":"${app.cassandra.port} ] - basic.session-name = "otoroshi" - basic.session-name = ${?OTOROSHI_CASSANDRA_SESSION_NAME} - basic.session-keyspace = ${?OTOROSHI_CASSANDRA_SESSION_KEYSPACE} - basic.config-reload-interval = 5 minutes - basic.request { - timeout = 10 seconds - consistency = LOCAL_ONE - consistency = ${?OTOROSHI_CASSANDRA_CONSISTENCY} - page-size = 5000 - page-size = ${?OTOROSHI_CASSANDRA_PAGE_SIZE} - serial-consistency = SERIAL - serial-consistency = ${?OTOROSHI_CASSANDRA_SERIAL_CONSISTENCY} - 
default-idempotence = false - default-idempotence = ${?OTOROSHI_CASSANDRA_DEFAULT_IDEMPOTENCE} - } - basic.load-balancing-policy { - class = DefaultLoadBalancingPolicy - local-datacenter = datacenter1 - local-datacenter = ${?OTOROSHI_CASSANDRA_LOCAL_DATACENTER} - # filter.class= - slow-replica-avoidance = true - } - basic.cloud { - # secure-connect-bundle = /location/of/secure/connect/bundle - } - basic.application { - # name = - # version = - } - basic.graph { - # name = your-graph-name - traversal-source = "g" - # is-system-query = false - # read-consistency-level = LOCAL_QUORUM - # write-consistency-level = LOCAL_ONE - # timeout = 10 seconds - } - advanced.connection { - connect-timeout = 5 seconds - init-query-timeout = 500 milliseconds - set-keyspace-timeout = ${datastax-java-driver.advanced.connection.init-query-timeout} - pool { - local { - size = 1 - } - remote { - size = 1 - } - } - max-requests-per-connection = 1024 - max-orphan-requests = 256 - warn-on-init-error = true - } - advanced.reconnect-on-init = false - advanced.reconnection-policy { - class = ExponentialReconnectionPolicy - base-delay = 1 second - max-delay = 60 seconds - } - advanced.retry-policy { - class = DefaultRetryPolicy - } - advanced.speculative-execution-policy { - class = NoSpeculativeExecutionPolicy - # max-executions = 3 - # delay = 100 milliseconds - } - advanced.auth-provider { - # class = PlainTextAuthProvider - username = ${?CASSANDRA_USERNAME} - username = ${?OTOROSHI_CASSANDRA_USERNAME} - password = ${?CASSANDRA_PASSWORD} - password = ${?OTOROSHI_CASSANDRA_PASSWORD} - authorization-id = ${?OTOROSHI_CASSANDRA_AUTHORIZATION_ID} - //service = "cassandra" - # login-configuration { - # principal = "cassandra@DATASTAX.COM" - # useKeyTab = "true" - # refreshKrb5Config = "true" - # keyTab = "/path/to/keytab/file" - # } - # sasl-properties { - # javax.security.sasl.qop = "auth-conf" - # } - } - advanced.ssl-engine-factory { - # class = DefaultSslEngineFactory - # cipher-suites = [ 
"TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA" ] - # hostname-validation = true - # truststore-path = /path/to/client.truststore - # truststore-password = password123 - # keystore-path = /path/to/client.keystore - # keystore-password = password123 - } - advanced.timestamp-generator { - class = AtomicTimestampGenerator - drift-warning { - threshold = 1 second - interval = 10 seconds - } - force-java-clock = false - } - advanced.request-tracker { - class = NoopRequestTracker - logs { - # success.enabled = true - slow { - # threshold = 1 second - # enabled = true - } - # error.enabled = true - # max-query-length = 500 - # show-values = true - # max-value-length = 50 - # max-values = 50 - # show-stack-traces = true - } - } - advanced.throttler { - class = PassThroughRequestThrottler - # max-queue-size = 10000 - # max-concurrent-requests = 10000 - # max-requests-per-second = 10000 - # drain-interval = 10 milliseconds - } - advanced.node-state-listener.class = NoopNodeStateListener - advanced.schema-change-listener.class = NoopSchemaChangeListener - advanced.address-translator { - class = PassThroughAddressTranslator - } - advanced.resolve-contact-points = true - advanced.protocol { - version = V4 - version = ${?OTOROSHI_CASSANDRA_PROTOCOL_VERSION} - compression = lz4 - compression = ${?OTOROSHI_CASSANDRA_PROTOCOL_COMPRESSION} - max-frame-length = 256 MB - } - advanced.request { - warn-if-set-keyspace = false - trace { - attempts = 5 - interval = 3 milliseconds - consistency = ONE - } - log-warnings = true - } - advanced.graph { - # sub-protocol = "graphson-2.0" - paging-enabled = "AUTO" - paging-options { - page-size = ${datastax-java-driver.advanced.continuous-paging.page-size} - max-pages = ${datastax-java-driver.advanced.continuous-paging.max-pages} - max-pages-per-second = ${datastax-java-driver.advanced.continuous-paging.max-pages-per-second} - max-enqueued-pages = ${datastax-java-driver.advanced.continuous-paging.max-enqueued-pages} - } - } - 
advanced.continuous-paging { - page-size = ${datastax-java-driver.basic.request.page-size} - page-size-in-bytes = false - max-pages = 0 - max-pages-per-second = 0 - max-enqueued-pages = 4 - timeout { - first-page = 2 seconds - other-pages = 1 second - } - } - advanced.monitor-reporting { - enabled = true - } - advanced.metrics { - session { - enabled = [ - # bytes-sent, - # bytes-received - # connected-nodes, - # cql-requests, - # cql-client-timeouts, - # cql-prepared-cache-size, - # throttling.delay, - # throttling.queue-size, - # throttling.errors, - # continuous-cql-requests, - # graph-requests, - # graph-client-timeouts - ] - cql-requests { - highest-latency = 3 seconds - significant-digits = 3 - refresh-interval = 5 minutes - } - throttling.delay { - highest-latency = 3 seconds - significant-digits = 3 - refresh-interval = 5 minutes - } - continuous-cql-requests { - highest-latency = 120 seconds - significant-digits = 3 - refresh-interval = 5 minutes - } - graph-requests { - highest-latency = 12 seconds - significant-digits = 3 - refresh-interval = 5 minutes - } - } - node { - enabled = [ - # pool.open-connections, - # pool.available-streams, - # pool.in-flight, - # pool.orphaned-streams, - # bytes-sent, - # bytes-received, - # cql-messages, - # errors.request.unsent, - # errors.request.aborted, - # errors.request.write-timeouts, - # errors.request.read-timeouts, - # errors.request.unavailables, - # errors.request.others, - # retries.total, - # retries.aborted, - # retries.read-timeout, - # retries.write-timeout, - # retries.unavailable, - # retries.other, - # ignores.total, - # ignores.aborted, - # ignores.read-timeout, - # ignores.write-timeout, - # ignores.unavailable, - # ignores.other, - # speculative-executions, - # errors.connection.init, - # errors.connection.auth, - # graph-messages, - ] - cql-messages { - highest-latency = 3 seconds - significant-digits = 3 - refresh-interval = 5 minutes - } - graph-messages { - highest-latency = 3 seconds - 
significant-digits = 3 - refresh-interval = 5 minutes - } - } - } - advanced.socket { - tcp-no-delay = true - //keep-alive = false - //reuse-address = true - //linger-interval = 0 - //receive-buffer-size = 65535 - //send-buffer-size = 65535 - } - advanced.heartbeat { - interval = 30 seconds - timeout = ${datastax-java-driver.advanced.connection.init-query-timeout} - } - advanced.metadata { - topology-event-debouncer { - window = 1 second - max-events = 20 - } - schema { - enabled = true - # refreshed-keyspaces = [ "ks1", "ks2" ] - request-timeout = ${datastax-java-driver.basic.request.timeout} - request-page-size = ${datastax-java-driver.basic.request.page-size} - debouncer { - window = 1 second - max-events = 20 - } - } - token-map.enabled = true - } - advanced.control-connection { - timeout = ${datastax-java-driver.advanced.connection.init-query-timeout} - schema-agreement { - interval = 200 milliseconds - timeout = 10 seconds - warn-on-failure = true - } - } - advanced.prepared-statements { - prepare-on-all-nodes = true - reprepare-on-up { - enabled = true - check-system-table = false - max-statements = 0 - max-parallelism = 100 - timeout = ${datastax-java-driver.advanced.connection.init-query-timeout} - } - } - advanced.netty { - daemon = false - io-group { - size = 0 - shutdown {quiet-period = 2, timeout = 15, unit = SECONDS} - } - admin-group { - size = 2 - shutdown {quiet-period = 2, timeout = 15, unit = SECONDS} - } - timer { - tick-duration = 100 milliseconds - ticks-per-wheel = 2048 - } - } - advanced.coalescer { - max-runs-with-no-work = 5 - reschedule-interval = 10 microseconds - } - } - actorsystems { - otoroshi { - akka { # otoroshi actorsystem configuration - version = ${akka.version} - log-dead-letters-during-shutdown = false - jvm-exit-on-fatal-error = false - default-dispatcher { - type = Dispatcher - executor = "fork-join-executor" - fork-join-executor { - parallelism-factor = 4.0 - parallelism-factor = 
${?OTOROSHI_CORE_DISPATCHER_PARALLELISM_FACTOR} - parallelism-min = 8 - parallelism-min = ${?OTOROSHI_CORE_DISPATCHER_PARALLELISM_MIN} - parallelism-max = 128 - parallelism-max = ${?OTOROSHI_CORE_DISPATCHER_PARALLELISM_MAX} - task-peeking-mode = "FIFO" - task-peeking-mode = ${?OTOROSHI_CORE_DISPATCHER_TASK_PEEKING_MODE} - } - throughput = 1 - throughput = ${?OTOROSHI_CORE_DISPATCHER_THROUGHPUT} - } - http { - parsing { - max-uri-length = 4k - max-uri-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_URI_LENGTH} - max-method-length = 16 - max-method-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_METHOD_LENGTH} - max-response-reason-length = 64 - max-response-reason-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_RESPONSE_REASON_LENGTH} - max-header-name-length = 128 - max-header-name-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_HEADER_NAME_LENGTH} - max-header-value-length = 16k - max-header-value-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_HEADER_VALUE_LENGTH} - max-header-count = 128 - max-header-count = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_HEADER_COUNT} - max-chunk-ext-length = 256 - max-chunk-ext-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_CHUNK_EXT_LENGTH} - max-chunk-size = 256m - max-chunk-size = ${?AKKA_HTTP_CLIENT_MAX_CHUNK_SIZE} - max-chunk-size = ${?OTOROSHI_AKKA_HTTP_CLIENT_MAX_CHUNK_SIZE} - max-chunk-size = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_CHUNK_SIZE} - max-content-length = infinite - max-content-length = ${?AKKA_HTTP_CLIENT_MAX_CONTENT_LENGHT} - max-content-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_MAX_CONTENT_LENGHT} - max-content-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_CONTENT_LENGHT} - max-to-strict-bytes = infinite - max-to-strict-bytes = ${?AKKA_HTTP_CLIENT_MAX_TO_STRICT_BYTES} - max-to-strict-bytes = ${?OTOROSHI_AKKA_HTTP_CLIENT_MAX_TO_STRICT_BYTES} - max-to-strict-bytes = ${?OTOROSHI_AKKA_HTTP_CLIENT_PARSING_MAX_TO_STRICT_BYTES} - } - } - } - } - datastore { - akka { - version = ${akka.version} - 
log-dead-letters-during-shutdown = false - jvm-exit-on-fatal-error = false - default-dispatcher { - type = Dispatcher - executor = "fork-join-executor" - fork-join-executor { - parallelism-factor = 4.0 - parallelism-min = 4 - parallelism-max = 64 - task-peeking-mode = "FIFO" - } - throughput = 1 - } - } - } - } -} - -otoroshi { - domain = ${?app.domain} - maintenanceMode = false # enable global maintenance mode - maintenanceMode = ${?OTOROSHI_MAINTENANCE_MODE_ENABLED} # enable global maintenance mode - secret = "verysecretvaluethatyoumustoverwrite" # the secret used to sign sessions - secret = ${?OTOROSHI_SECRET} # the secret used to sign sessions - admin-api-secret = ${?OTOROSHI_ADMIN_API_SECRET} # the secret for admin api - next { - state-sync-interval = 10000 - state-sync-interval = ${?OTOROSHI_NEXT_STATE_SYNC_INTERVAL} - export-reporting = false - export-reporting = ${?OTOROSHI_NEXT_EXPORT_REPORTING} - monitor-proxy-state-size = false - monitor-proxy-state-size = ${?OTOROSHI_NEXT_MONITOR_PROXY_STATE_SIZE} - monitor-datastore-size = false - monitor-datastore-size = ${?OTOROSHI_NEXT_MONITOR_DATASTORE_SIZE} - plugins { - merge-sync-steps = true - merge-sync-steps = ${?OTOROSHI_NEXT_PLUGINS_MERGE_SYNC_STEPS} - apply-legacy-checks = true - apply-legacy-checks = ${?OTOROSHI_NEXT_PLUGINS_APPLY_LEGACY_CHECKS} - } - experimental { - netty-client { - wiretap = false - wiretap = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_CLIENT_WIRETAP} - enforce = false - enforce = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_CLIENT_ENFORCE} - enforce-akka = false - enforce-akka = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_CLIENT_ENFORCE_AKKA} - } - netty-server { - enabled = false - enabled = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ENABLED} - new-engine-only = false - new-engine-only = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NEW_ENGINE_ONLY} - host = "0.0.0.0" - host = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HOST} - http-port = 10049 - http-port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_PORT} 
- exposed-http-port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_EXPOSED_HTTP_PORT} - https-port = 10048 - https-port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTPS_PORT} - exposed-https-port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_EXPOSED_HTTPS_PORT} - wiretap = false - wiretap = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_WIRETAP} - accesslog = false - accesslog = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ACCESSLOG} - threads = 0 - threads = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_THREADS} - parser { - allowDuplicateContentLengths = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_ALLOW_DUPLICATE_CONTENT_LENGTHS} - validateHeaders = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_VALIDATE_HEADERS} - h2cMaxContentLength = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_H_2_C_MAX_CONTENT_LENGTH} - initialBufferSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_INITIAL_BUFFER_SIZE} - maxHeaderSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_HEADER_SIZE} - maxInitialLineLength = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_INITIAL_LINE_LENGTH} - maxChunkSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_CHUNK_SIZE} - } - http2 { - enabled = true - enabled = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_ENABLED} - h2c = true - h2c = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_H2C} - } - http3 { - enabled = false - enabled = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_ENABLED} - port = 10048 - port = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_PORT} - exposedPort = 10048 - exposedPort = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_EXPOSED_PORT} - initialMaxStreamsBidirectional = 100000 - initialMaxStreamsBidirectional = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAMS_BIDIRECTIONAL} - initialMaxStreamDataBidirectionalRemote = 1000000 - initialMaxStreamDataBidirectionalRemote = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_REMOTE} - 
initialMaxStreamDataBidirectionalLocal = 1000000 - initialMaxStreamDataBidirectionalLocal = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_LOCAL} - initialMaxData = 10000000 - initialMaxData = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_DATA} - maxRecvUdpPayloadSize = 1500 - maxRecvUdpPayloadSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_MAX_RECV_UDP_PAYLOAD_SIZE} - maxSendUdpPayloadSize = 1500 - maxSendUdpPayloadSize = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_MAX_SEND_UDP_PAYLOAD_SIZE} - disableQpackDynamicTable = true - disableQpackDynamicTable = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_DISABLE_QPACK_DYNAMIC_TABLE} - } - native { - enabled = true - enabled = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_ENABLED} - driver = "Auto" # possible values are Auto, Epoll, KQueue, IOUring - driver = ${?OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_DRIVER} - } - } - } - } - options { - bypassUserRightsCheck = false - bypassUserRightsCheck = ${?OTOROSHI_OPTIONS_BYPASSUSERRIGHTSCHECK} - emptyContentLengthIsChunked = true - emptyContentLengthIsChunked = ${?OTOROSHI_OPTIONS_EMPTYCONTENTLENGTHISCHUNKED} - detectApiKeySooner = true - detectApiKeySooner = ${?OTOROSHI_OPTIONS_DETECTAPIKEYSOONER} - sendClientChainAsPem = false - sendClientChainAsPem = ${?OTOROSHI_OPTIONS_SENDCLIENTCHAINASPEM} - useOldHeadersComposition = false - useOldHeadersComposition = ${?OTOROSHI_OPTIONS_USEOLDHEADERSCOMPOSITION} - manualDnsResolve = true - manualDnsResolve = ${?OTOROSHI_OPTIONS_MANUALDNSRESOLVE} - useEventStreamForScriptEvents = true - useEventStreamForScriptEvents = ${?OTOROSHI_OPTIONS_USEEVENTSTREAMFORSCRIPTEVENTS} - trustXForwarded = true - trustXForwarded = ${?OTOROSHI_OPTIONS_TRUST_XFORWARDED} - disableFunnyLogos = false - disableFunnyLogos = ${?OTOROSHI_OPTIONS_DISABLE_FUNNY_LOGOS} - staticExposedDomain = ${?OTOROSHI_OPTIONS_STATIC_EXPOSED_DOMAIN} - enable-json-media-type-with-open-charset = false 
# allow application/json media type with charset even if it's not standard - enable-json-media-type-with-open-charset = ${?OTOROSHI_OPTIONS_ENABLE_JSON_MEDIA_TYPE_WITH_OPEN_CHARSET} - } - wasm { - cache { - ttl = 10000 - ttl = ${?OTOROSHI_WASM_CACHE_TTL} - size = 100 - size = ${?OTOROSHI_WASM_CACHE_SIZE} - } - queue { - buffer { - size = 2048 - size = ${?OTOROSHI_WASM_QUEUE_BUFFER_SIZE} - } - } - } - anonymous-reporting { - enabled = true - enabled = ${?OTOROSHI_ANONYMOUS_REPORTING_ENABLED} - redirect = false - redirect = ${?OTOROSHI_ANONYMOUS_REPORTING_REDIRECT} - url = "https://reporting.otoroshi.io/ingest" - url = ${?OTOROSHI_ANONYMOUS_REPORTING_URL} - timeout = 60000 - timeout = ${?OTOROSHI_ANONYMOUS_REPORTING_TIMEOUT} - tls { - # certs = [] - # trustedCerts = [] - enabled = false # enable mtls - enabled = ${?OTOROSHI_ANONYMOUS_REPORTING_TLS_ENABLED} # enable mtls - loose = false # loose verification - loose = ${?OTOROSHI_ANONYMOUS_REPORTING_TLS_LOOSE} # loose verification - trustAll = false # trust any CA - trustAll = ${?OTOROSHI_ANONYMOUS_REPORTING_TLS_ALL} # trust any CA - } - proxy { - enabled = false # enable proxy - enabled = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_ENABLED} # enable proxy - host = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_HOST}, - port = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_PORT}, - principal = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_PRINCIPAL}, - password = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_PASSWORD}, - ntlmDomain = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_DOMAIN}, - encoding = ${?OTOROSHI_ANONYMOUS_REPORTING_PROXY_ENCODING}, - } - } - backoffice { - flags { - useAkkaHttpClient = false - useAkkaHttpClient = ${?OTOROSHI_BACKOFFICE_FLAGS_USE_AKKA_HTTP_CLIENT} - logUrl = false - logUrl = ${?OTOROSHI_BACKOFFICE_FLAGS_LOG_URL} - requestTimeout = 60000 - requestTimeout = ${?OTOROSHI_BACKOFFICE_FLAGS_REQUEST_TIMEOUT} - } - } - sessions { - secret = ${otoroshi.secret} - secret = ${?OTOROSHI_SESSIONS_SECRET} - } - cache { - enabled = false - enabled = 
${?USE_CACHE} - enabled = ${?OTOROSHI_USE_CACHE} - enabled = ${?OTOROSHI_ENTITIES_CACHE_ENABLED} - ttl = 2000 - ttl = ${?OTOROSHI_ENTITIES_CACHE_TTL} - } - metrics { - enabled = true - enabled = ${?OTOROSHI_METRICS_ENABLED} - every = 30000 - every = ${?OTOROSHI_METRICS_EVERY} - accessKey = ${?app.health.accessKey} - accessKey = ${?OTOROSHI_app.health.accessKey} - accessKey = ${?OTOROSHI_METRICS_ACCESS_KEY} - } - plugins { - packages = [] - packagesStr = ${?OTOROSHI_PLUGINS_SCAN_PACKAGES} - print = false - print = ${?OTOROSHI_PLUGINS_PRINT} - } - scripts { - enabled = true # enable scripts - enabled = ${?OTOROSHI_SCRIPTS_ENABLED} # enable scripts - static { # settings for statically enabled script/plugins - enabled = false - enabled = ${?OTOROSHI_SCRIPTS_STATIC_ENABLED} - transformersRefs = [] - transformersRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_TRANSFORMER_REFS} - transformersConfig = {} - transformersConfigStr= ${?OTOROSHI_SCRIPTS_STATIC_TRANSFORMER_CONFIG} - validatorRefs = [] - validatorRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_VALIDATOR_REFS} - validatorConfig = {} - validatorConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_VALIDATOR_CONFIG} - preRouteRefs = [] - preRouteRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_PRE_ROUTE_REFS} - preRouteConfig = {} - preRouteConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_PRE_ROUTE_CONFIG} - sinkRefs = [] - sinkRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_SINK_REFS} - sinkConfig = {} - sinkConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_SINK_CONFIG} - jobsRefs = [] - jobsRefsStr = ${?OTOROSHI_SCRIPTS_STATIC_JOBS_REFS} - jobsConfig = {} - jobsConfigStr = ${?OTOROSHI_SCRIPTS_STATIC_JOBS_CONFIG} - } - } - tls = ${otoroshi.ssl} - ssl { - # the cipher suites used by otoroshi TLS termination - cipherSuitesJDK11Plus = ["TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", 
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_DHE_DSS_WITH_AES_256_GCM_SHA384", "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_DHE_DSS_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_DHE_RSA_WITH_AES_256_CBC_SHA256", "TLS_DHE_DSS_WITH_AES_256_CBC_SHA256", "TLS_DHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_DHE_DSS_WITH_AES_128_CBC_SHA256", "TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256", "TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", "TLS_DHE_RSA_WITH_AES_256_CBC_SHA", "TLS_DHE_DSS_WITH_AES_256_CBC_SHA", "TLS_DHE_RSA_WITH_AES_128_CBC_SHA", "TLS_DHE_DSS_WITH_AES_128_CBC_SHA", "TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA", "TLS_ECDH_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA", "TLS_ECDH_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_256_CBC_SHA256", "TLS_RSA_WITH_AES_128_CBC_SHA256", "TLS_RSA_WITH_AES_256_CBC_SHA", "TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_EMPTY_RENEGOTIATION_INFO_SCSV"] - cipherSuitesJDK11 = ["TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_DHE_DSS_WITH_AES_256_GCM_SHA384", "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256", 
"TLS_DHE_DSS_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_DHE_RSA_WITH_AES_256_CBC_SHA256", "TLS_DHE_DSS_WITH_AES_256_CBC_SHA256", "TLS_DHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_DHE_DSS_WITH_AES_128_CBC_SHA256", "TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256", "TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", "TLS_DHE_RSA_WITH_AES_256_CBC_SHA", "TLS_DHE_DSS_WITH_AES_256_CBC_SHA", "TLS_DHE_RSA_WITH_AES_128_CBC_SHA", "TLS_DHE_DSS_WITH_AES_128_CBC_SHA", "TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA", "TLS_ECDH_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA", "TLS_ECDH_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_256_CBC_SHA256", "TLS_RSA_WITH_AES_128_CBC_SHA256", "TLS_RSA_WITH_AES_256_CBC_SHA", "TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_EMPTY_RENEGOTIATION_INFO_SCSV"] - cipherSuitesJDK8 = ["TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_RSA_WITH_AES_256_CBC_SHA256", "TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384", "TLS_DHE_RSA_WITH_AES_256_CBC_SHA256", "TLS_DHE_DSS_WITH_AES_256_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA", "TLS_ECDH_RSA_WITH_AES_256_CBC_SHA", "TLS_DHE_RSA_WITH_AES_256_CBC_SHA", "TLS_DHE_DSS_WITH_AES_256_CBC_SHA", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256", 
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256", "TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256", "TLS_DHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_DHE_DSS_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA", "TLS_ECDH_RSA_WITH_AES_128_CBC_SHA", "TLS_DHE_RSA_WITH_AES_128_CBC_SHA", "TLS_DHE_DSS_WITH_AES_128_CBC_SHA", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384", "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_DHE_DSS_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256", "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_DHE_DSS_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA", "TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA", "SSL_RSA_WITH_3DES_EDE_CBC_SHA", "TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA", "TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA", "SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA", "SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA", "TLS_EMPTY_RENEGOTIATION_INFO_SCSV"] - # cipherSuites = ["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", "TLS_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA", "TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384"] - cipherSuites = ${otoroshi.ssl.cipherSuitesJDK11} - # the protocols used by otoroshi TLS termination - protocolsJDK11 = ["TLSv1.3", "TLSv1.2", "TLSv1.1", "TLSv1"] - protocolsJDK8 = ["SSLv2Hello", "TLSv1", "TLSv1.1", "TLSv1.2"] - 
modernProtocols = ["TLSv1.3", "TLSv1.2"] - protocols = ${otoroshi.ssl.modernProtocols} - # the JDK cacert access - cacert { - path = "$JAVA_HOME/lib/security/cacerts" - password = "changeit" - } - # the mtls mode - fromOutside { - clientAuth = "None" - clientAuth = ${?SSL_OUTSIDE_CLIENT_AUTH} - clientAuth = ${?OTOROSHI_SSL_OUTSIDE_CLIENT_AUTH} - } - # the default trust mode - trust { - all = false - all = ${?OTOROSHI_SSL_TRUST_ALL} - } - rootCa { - ca = ${?OTOROSHI_SSL_ROOTCA_CA} - cert = ${?OTOROSHI_SSL_ROOTCA_CERT} - key = ${?OTOROSHI_SSL_ROOTCA_KEY} - importCa = false - importCa = ${?OTOROSHI_SSL_ROOTCA_IMPORTCA} - } - # some initial cacert access, useful to include non standard CA when starting - initialCacert = ${?CLUSTER_WORKER_INITIAL_CACERT} - initialCacert = ${?OTOROSHI_CLUSTER_WORKER_INITIAL_CACERT} - initialCacert = ${?INITIAL_CACERT} - initialCacert = ${?OTOROSHI_INITIAL_CACERT} - initialCert = ${?CLUSTER_WORKER_INITIAL_CERT} - initialCert = ${?OTOROSHI_CLUSTER_WORKER_INITIAL_CERT} - initialCert = ${?INITIAL_CERT} - initialCert = ${?OTOROSHI_INITIAL_CERT} - initialCertKey = ${?CLUSTER_WORKER_INITIAL_CERT_KEY} - initialCertKey = ${?OTOROSHI_CLUSTER_WORKER_INITIAL_CERT_KEY} - initialCertKey = ${?INITIAL_CERT_KEY} - initialCertKey = ${?OTOROSHI_INITIAL_CERT_KEY} - initialCertImportCa = ${?OTOROSHI_INITIAL_CERT_IMPORTCA} - # initialCerts = [] - } - cluster { - mode = "off" # can be "off", "leader", "worker" - mode = ${?CLUSTER_MODE} # can be "off", "leader", "worker" - mode = ${?OTOROSHI_CLUSTER_MODE} # can be "off", "leader", "worker" - compression = -1 # compression of the data sent between leader cluster and worker cluster. From -1 (disabled) to 9 - compression = ${?CLUSTER_COMPRESSION} # compression of the data sent between leader cluster and worker cluster. From -1 (disabled) to 9 - compression = ${?OTOROSHI_CLUSTER_COMPRESSION} # compression of the data sent between leader cluster and worker cluster. 
From -1 (disabled) to 9 - retryDelay = 300 # the delay before retrying a request to leader - retryDelay = ${?CLUSTER_RETRY_DELAY} # the delay before retrying a request to leader - retryDelay = ${?OTOROSHI_CLUSTER_RETRY_DELAY} # the delay before retrying a request to leader - retryFactor = 2 # the retry factor to avoid high load on failing nodes - retryFactor = ${?CLUSTER_RETRY_FACTOR} # the retry factor to avoid high load on failing nodes - retryFactor = ${?OTOROSHI_CLUSTER_RETRY_FACTOR} # the retry factor to avoid high load on failing nodes - selfAddress = ${?CLUSTER_SELF_ADDRESS} # the instance ip address - selfAddress = ${?OTOROSHI_CLUSTER_SELF_ADDRESS} # the instance ip address - autoUpdateState = true # auto update cluster state with a job (more efficient) - autoUpdateState = ${?CLUSTER_AUTO_UPDATE_STATE} # auto update cluster state with a job (more efficient) - autoUpdateState = ${?OTOROSHI_CLUSTER_AUTO_UPDATE_STATE} # auto update cluster state with a job (more efficient) - backup { - enabled = false - enabled = ${?OTOROSHI_CLUSTER_BACKUP_ENABLED} - kind = "S3" - kind = ${?OTOROSHI_CLUSTER_BACKUP_KIND} - instance { - can-write = false - can-write = ${?OTOROSHI_CLUSTER_BACKUP_INSTANCE_CAN_WRITE} - can-read = false - can-read = ${?OTOROSHI_CLUSTER_BACKUP_INSTANCE_CAN_READ} - } - s3 { - bucket = ${?OTOROSHI_CLUSTER_BACKUP_S3_BUCKET} - endpoint = ${?OTOROSHI_CLUSTER_BACKUP_S3_ENDPOINT} - region = ${?OTOROSHI_CLUSTER_BACKUP_S3_REGION} - access = ${?OTOROSHI_CLUSTER_BACKUP_S3_ACCESSKEY} - secret = ${?OTOROSHI_CLUSTER_BACKUP_S3_SECRET} - path = ${?OTOROSHI_CLUSTER_BACKUP_S3_PATH} - chunk-size = ${?OTOROSHI_CLUSTER_BACKUP_S3_CHUNK_SIZE} - v4auth = ${?OTOROSHI_CLUSTER_BACKUP_S3_V4AUTH} - acl = ${?OTOROSHI_CLUSTER_BACKUP_S3_ACL} - } - } - relay { # relay routing settings - enabled = false # enable relay routing - enabled = ${?OTOROSHI_CLUSTER_RELAY_ENABLED} # enable relay routing - leaderOnly = false - leaderOnly = ${?OTOROSHI_CLUSTER_RELAY_LEADER_ONLY} # workers always 
pass through leader for relay routing - location { - provider = ${?otoroshi.instance.provider} - provider = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_PROVIDER} - provider = ${?app.instance.provider} - zone = ${?otoroshi.instance.zone} - zone = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_ZONE} - zone = ${?app.instance.zone} - region = ${?otoroshi.instance.region} - region = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_REGION} - region = ${?app.instance.region} - datacenter = ${?otoroshi.instance.dc} - datacenter = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_DATACENTER} - datacenter = ${?app.instance.dc} - rack = ${?otoroshi.instance.rack} - rack = ${?OTOROSHI_CLUSTER_RELAY_LOCATION_RACK} - rack = ${?app.instance.rack} - } - exposition { - url = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_URL} - urls = [] - urlsStr = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_URLS} - hostname = "otoroshi-api.oto.tools" - hostname = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_HOSTNAME} - clientId = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_CLIENT_ID} - clientSecret = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_CLIENT_SECRET} - ipAddress = ${?OTOROSHI_CLUSTER_RELAY_EXPOSITION_IP_ADDRESS} - } - } - mtls { - # certs = [] - # trustedCerts = [] - enabled = false # enable mtls - enabled = ${?CLUSTER_MTLS_ENABLED} # enable mtls - enabled = ${?OTOROSHI_CLUSTER_MTLS_ENABLED} # enable mtls - loose = false # loose verification - loose = ${?CLUSTER_MTLS_LOOSE} # loose verification - loose = ${?OTOROSHI_CLUSTER_MTLS_LOOSE} # loose verification - trustAll = false # trust any CA - trustAll = ${?CLUSTER_MTLS_TRUST_ALL} # trust any CA - trustAll = ${?OTOROSHI_CLUSTER_MTLS_TRUST_ALL} # trust any CA - } - proxy { - enabled = false # enable proxy - enabled = ${?CLUSTER_PROXY_ENABLED} # enable proxy - host = ${?CLUSTER_PROXY_HOST}, - port = ${?CLUSTER_PROXY_PORT}, - principal = ${?CLUSTER_PROXY_PRINCIPAL}, - password = ${?CLUSTER_PROXY_PASSWORD}, - ntlmDomain = ${?CLUSTER_PROXY_NTLM_DOMAIN}, - encoding = ${?CLUSTER_PROXY_ENCODING}, - } - leader { - name = 
${?CLUSTER_LEADER_NAME} # the leader name - name = ${?OTOROSHI_CLUSTER_LEADER_NAME} # the leader name - urls = ["http://127.0.0.1:8080"] # the leader urls - urlsStr = ${?CLUSTER_LEADER_URLS} # the leader urls - urlsStr = ${?OTOROSHI_CLUSTER_LEADER_URLS} # the leader urls - url = ${?CLUSTER_LEADER_URL} # the leader url - url = ${?OTOROSHI_CLUSTER_LEADER_URL} # the leader url - host = "otoroshi-api.oto.tools" # the leader's api hostname - host = ${?CLUSTER_LEADER_HOST} # the leader's api hostname - host = ${?OTOROSHI_CLUSTER_LEADER_HOST} # the leader's api hostname - clientId = "admin-api-apikey-id" # the leader's apikey id to access otoroshi admin api - clientId = ${?CLUSTER_LEADER_CLIENT_ID} # the leader's apikey id to access otoroshi admin api - clientId = ${?OTOROSHI_CLUSTER_LEADER_CLIENT_ID} # the leader's apikey id to access otoroshi admin api - clientSecret = "admin-api-apikey-secret" # the leader's apikey secret to access otoroshi admin api - clientSecret = ${?CLUSTER_LEADER_CLIENT_SECRET} # the leader's apikey secret to access otoroshi admin api - clientSecret = ${?OTOROSHI_CLUSTER_LEADER_CLIENT_SECRET} # the leader's apikey secret to access otoroshi admin api - groupingBy = 50 # items grouping when streaming state - groupingBy = ${?CLUSTER_LEADER_GROUP_BY} # items grouping when streaming state - groupingBy = ${?OTOROSHI_CLUSTER_LEADER_GROUP_BY} # items grouping when streaming state - cacheStateFor = 10000 # the ttl for local state cache - cacheStateFor = ${?CLUSTER_LEADER_CACHE_STATE_FOR} # the ttl for local state cache - cacheStateFor = ${?OTOROSHI_CLUSTER_LEADER_CACHE_STATE_FOR} # the ttl for local state cache - stateDumpPath = ${?CLUSTER_LEADER_DUMP_PATH} # optionally a state dump path for debugging purpose - stateDumpPath = ${?OTOROSHI_CLUSTER_LEADER_DUMP_PATH} # optionally a state dump path for debugging purpose - } - worker { - name = ${?CLUSTER_WORKER_NAME} # the workers name - name = ${?OTOROSHI_CLUSTER_WORKER_NAME} # the workers name - retries = 3 # the 
number of retries when pushing quotas/pulling state - retries = ${?CLUSTER_WORKER_RETRIES} # the number of retries when pushing quotas/pulling state - retries = ${?OTOROSHI_CLUSTER_WORKER_RETRIES} # the number of retries when pushing quotas/pulling state - timeout = 10000 # the workers timeout when interacting with leaders - timeout = ${?CLUSTER_WORKER_TIMEOUT} # the workers timeout when interacting with leaders - timeout = ${?OTOROSHI_CLUSTER_WORKER_TIMEOUT} # the workers timeout when interacting with leaders - tenants = [] # the list of organizations served by this worker. If none, it's all - tenantsStr = ${?CLUSTER_WORKER_TENANTS} # the list (comma separated) of organizations served by this worker. If none, it's all - tenantsStr = ${?OTOROSHI_CLUSTER_WORKER_TENANTS} # the list (comma separated) of organizations served by this worker. If none, it's all - dbpath = ${?CLUSTER_WORKER_DB_PATH} # state dump path for debugging purpose - dbpath = ${?OTOROSHI_CLUSTER_WORKER_DB_PATH} # state dump path for debugging purpose - dataStaleAfter = 600000 # the amount of time after which the state is considered stale - dataStaleAfter = ${?CLUSTER_WORKER_DATA_STALE_AFTER} # the amount of time after which the state is considered stale - dataStaleAfter = ${?OTOROSHI_CLUSTER_WORKER_DATA_STALE_AFTER} # the amount of time after which the state is considered stale - swapStrategy = "Merge" # the internal memory store strategy, can be Replace or Merge - swapStrategy = ${?CLUSTER_WORKER_SWAP_STRATEGY} # the internal memory store strategy, can be Replace or Merge - swapStrategy = ${?OTOROSHI_CLUSTER_WORKER_SWAP_STRATEGY} # the internal memory store strategy, can be Replace or Merge - modern = false # use a modern store implementation - modern = ${?CLUSTER_WORKER_STORE_MODERN} - modern = ${?OTOROSHI_CLUSTER_WORKER_STORE_MODERN} - state { - retries = ${otoroshi.cluster.worker.retries} # the number of retries when pulling state - retries = ${?CLUSTER_WORKER_STATE_RETRIES} # the number of retries when pulling state - 
retries = ${?OTOROSHI_CLUSTER_WORKER_STATE_RETRIES} # the number of retries when pulling state - pollEvery = 10000 # polling interval - pollEvery = ${?CLUSTER_WORKER_POLL_EVERY} # polling interval - pollEvery = ${?OTOROSHI_CLUSTER_WORKER_POLL_EVERY} # polling interval - timeout = ${otoroshi.cluster.worker.timeout} # the workers timeout when polling state - timeout = ${?CLUSTER_WORKER_POLL_TIMEOUT} # the workers timeout when polling state - timeout = ${?OTOROSHI_CLUSTER_WORKER_POLL_TIMEOUT} # the workers timeout when polling state - } - quotas { - retries = ${otoroshi.cluster.worker.retries} # the number of retries when pushing quotas - retries = ${?CLUSTER_WORKER_QUOTAS_RETRIES} # the number of retries when pushing quotas - retries = ${?OTOROSHI_CLUSTER_WORKER_QUOTAS_RETRIES} # the number of retries when pushing quotas - pushEvery = 10000 # pushing interval - pushEvery = ${?CLUSTER_WORKER_PUSH_EVERY} # pushing interval - pushEvery = ${?OTOROSHI_CLUSTER_WORKER_PUSH_EVERY} # pushing interval - timeout = ${otoroshi.cluster.worker.timeout} # the workers timeout when pushing quotas - timeout = ${?CLUSTER_WORKER_PUSH_TIMEOUT} # the workers timeout when pushing quotas - timeout = ${?OTOROSHI_CLUSTER_WORKER_PUSH_TIMEOUT} # the workers timeout when pushing quotas - } - } - analytics { # settings for the analytics actor system which is separated from otoroshi default one for performance reasons - pressure { - enabled = true - enabled = ${?OTOROSHI_ANALYTICS_PRESSURE_ENABLED} - } - actorsystem { - akka { - version = ${akka.version} - log-dead-letters-during-shutdown = false - jvm-exit-on-fatal-error = false - default-dispatcher { - type = Dispatcher - executor = "fork-join-executor" - fork-join-executor { - parallelism-factor = 4.0 - parallelism-min = 4 - parallelism-max = 64 - task-peeking-mode = "FIFO" - } - throughput = 1 - } - # http { - # parsing { - # max-uri-length = 4k - # max-method-length = 16 - # max-response-reason-length = 64 - # max-header-name-length = 128 - # 
max-header-value-length = 16k - # max-header-count = 128 - # max-chunk-ext-length = 256 - # max-chunk-size = 256m - # max-chunk-size = ${?AKKA_HTTP_CLIENT_ANALYTICS_MAX_CHUNK_SIZE} - # max-chunk-size = ${?OTOROSHI_AKKA_HTTP_CLIENT_ANALYTICS_MAX_CHUNK_SIZE} - # max-content-length = infinite - # max-content-length = ${?AKKA_HTTP_CLIENT_ANALYTICS_MAX_CONTENT_LENGHT} - # max-content-length = ${?OTOROSHI_AKKA_HTTP_CLIENT_ANALYTICS_MAX_CONTENT_LENGHT} - # max-to-strict-bytes = infinite - # max-to-strict-bytes = ${?AKKA_HTTP_CLIENT_ANALYTICS_MAX_TO_STRICT_BYTES} - # max-to-strict-bytes = ${?OTOROSHI_AKKA_HTTP_CLIENT_ANALYTICS_MAX_TO_STRICT_BYTES} - # } - # } - } - } - } - } - headers { # the default headers value for specific otoroshi headers - trace.label = "Otoroshi-Viz-From-Label" - trace.from = "Otoroshi-Viz-From" - trace.parent = "Otoroshi-Parent-Request" - request.adminprofile = "Otoroshi-Admin-Profile" - request.simpleapiclientid = "x-api-key" - request.clientid = "Otoroshi-Client-Id" - request.clientsecret = "Otoroshi-Client-Secret" - request.id = "Otoroshi-Request-Id" - request.timestamp = "Otoroshi-Request-Timestamp" - request.bearer = "Otoroshi-Token" - request.authorization = "Otoroshi-Authorization" - response.proxyhost = "Otoroshi-Proxied-Host" - response.error = "Otoroshi-Error" - response.errormsg = "Otoroshi-Error-Msg" - response.errorcause = "Otoroshi-Error-Cause" - response.proxylatency = "Otoroshi-Proxy-Latency" - response.upstreamlatency = "Otoroshi-Upstream-Latency" - response.dailyquota = "Otoroshi-Daily-Calls-Remaining" - response.monthlyquota = "Otoroshi-Monthly-Calls-Remaining" - comm.state = "Otoroshi-State" - comm.stateresp = "Otoroshi-State-Resp" - comm.claim = "Otoroshi-Claim" - healthcheck.test = "Otoroshi-Health-Check-Logic-Test" - healthcheck.testresult = "Otoroshi-Health-Check-Logic-Test-Result" - jwt.issuer = "Otoroshi" - canary.tracker = "Otoroshi-Canary-Id" - client.cert.chain = "Otoroshi-Client-Cert-Chain" - - request.jwtAuthorization 
= "access_token" - request.bearerAuthorization = "bearer_auth" - request.basicAuthorization = "basic_auth" - } - requests { - validate = true - validate = ${?OTOROSHI_REQUESTS_VALIDATE} - maxUrlLength = ${akka.http.parsing.max-uri-length} - maxCookieLength = ${akka.http.parsing.max-header-value-length} - maxHeaderNameLength = ${akka.http.parsing.max-header-name-length} - maxHeaderValueLength = ${akka.http.parsing.max-header-value-length} - } - jmx { - enabled = false - enabled = ${?OTOROSHI_JMX_ENABLED} - port = 16000 - port = ${?OTOROSHI_JMX_PORT} - } - loggers { - } - provider { - dashboardUrl = ${?OTOROSHI_PROVIDER_DASHBOARD_URL} - jsUrl = ${?OTOROSHI_PROVIDER_JS_URL} - cssUrl = ${?OTOROSHI_PROVIDER_CSS_URL} - secret = "secret" - secret = ${?OTOROSHI_PROVIDER_SECRET} - title = "Provider's dashboard" - title = ${?OTOROSHI_PROVIDER_TITLE} - } - healthcheck { - workers = 4 - workers = ${?OTOROSHI_HEALTHCHECK_WORKERS} - block-on-red = false - block-on-red = ${?OTOROSHI_HEALTHCHECK_BLOCK_ON_RED} - block-on-red = ${?OTOROSHI_HEALTHCHECK_BLOCK_ON_500} - ttl = 60000 - ttl = ${?OTOROSHI_HEALTHCHECK_TTL} - ttl-only = true - ttl-only = ${?OTOROSHI_HEALTHCHECK_TTL_ONLY} - } - vaults { - enabled = false - enabled = ${?OTOROSHI_VAULTS_ENABLED} - secrets-ttl = 300000 # 5 minutes between each secret read - secrets-ttl = ${?OTOROSHI_VAULTS_SECRETS_TTL} - secrets-error-ttl = 20000 # wait 20000 before retrying on error - secrets-error-ttl = ${?OTOROSHI_VAULTS_SECRETS_ERROR_TTL} - cached-secrets = 10000 - cached-secrets = ${?OTOROSHI_VAULTS_CACHED_SECRETS} - read-ttl = 10000 # 10 seconds - read-timeout = ${?otoroshi.vaults.read-ttl} - read-timeout = ${?OTOROSHI_VAULTS_READ_TTL} - read-timeout = ${?OTOROSHI_VAULTS_READ_TIMEOUT} - parallel-fetchs = 4 - parallel-fetchs = ${?OTOROSHI_VAULTS_PARALLEL_FETCHS} - # if enabled, only leader nodes fetches the secrets. - # entities with secret values filled are then sent to workers when they poll the cluster state. 
- # only works if `otoroshi.cluster.autoUpdateState=true` - leader-fetch-only = false - leader-fetch-only = ${?OTOROSHI_VAULTS_LEADER_FETCH_ONLY} - env { - type = "env" - prefix = ${?OTOROSHI_VAULTS_ENV_PREFIX} - } - local { - type = "local" - root = ${?OTOROSHI_VAULTS_LOCAL_ROOT} - } - # hashicorpvault { - # type = "hashicorp-vault" - # url = "http://127.0.0.1:8200" - # mount = "kv" - # kv = "v2" - # token = "root" - # } - } - tunnels { - enabled = true - enabled = ${?OTOROSHI_TUNNELS_ENABLED} - worker-ws = true - worker-ws = ${?OTOROSHI_TUNNELS_WORKER_WS} - worker-use-internal-ports = false - worker-use-internal-ports = ${?OTOROSHI_TUNNELS_WORKER_USE_INTERNAL_PORTS} - worker-use-loadbalancing = false - worker-use-loadbalancing = ${?OTOROSHI_TUNNELS_WORKER_USE_LOADBALANCING} - default { - enabled = false - enabled = ${?OTOROSHI_TUNNELS_DEFAULT_ENABLED} - id = "default" - id = ${?OTOROSHI_TUNNELS_DEFAULT_ID} - name = "default" - name = ${?OTOROSHI_TUNNELS_DEFAULT_NAME} - url = "http://127.0.0.1:8080" - url = ${?OTOROSHI_TUNNELS_DEFAULT_URL} - host = "otoroshi-api.oto.tools" - host = ${?OTOROSHI_TUNNELS_DEFAULT_HOST} - clientId = "admin-api-apikey-id" - clientId = ${?OTOROSHI_TUNNELS_DEFAULT_CLIENT_ID} - clientSecret = "admin-api-apikey-secret" - clientSecret = ${?OTOROSHI_TUNNELS_DEFAULT_CLIENT_SECRET} - export-routes = true # send routes information to remote otoroshi instance to facilitate remote route exposition - export-routes = ${?OTOROSHI_TUNNELS_DEFAULT_EXPORT_ROUTES} # send routes information to remote otoroshi instance to facilitate remote route exposition - export-routes-tag = ${?OTOROSHI_TUNNELS_DEFAULT_EXPORT_TAG} # only send routes information if the route has this tag - proxy { - enabled = false - host = none - port = none - principal = none - password = none - nonProxyHosts = [] - } - } - } - admin-extensions { - enabled = true - enabled = ${?OTOROSHI_ADMIN_EXTENSIONS_ENABLED} - configurations { - otoroshi_extensions_foo { - enabled = false - } - } - 
} -} - - -http.port = 8080 # the main http port for the otoroshi server -http.port = ${?otoroshi.http.port} # the main http port for the otoroshi server -http.port = ${?PORT} # the main http port for the otoroshi server -http.port = ${?OTOROSHI_PORT} # the main http port for the otoroshi server -http.port = ${?OTOROSHI_HTTP_PORT} # the main http port for the otoroshi server -play.server.http.port = ${http.port} # the main http port for the otoroshi server -play.server.http.port = ${?PORT} # the main http port for the otoroshi server -play.server.http.port = ${?OTOROSHI_PORT} # the main http port for the otoroshi server -play.server.http.port = ${?OTOROSHI_HTTP_PORT} # the main http port for the otoroshi server -https.port = 8443 # the main https port for the otoroshi server -https.port = ${?otoroshi.https.port} # the main https port for the otoroshi server -https.port = ${?HTTPS_PORT} # the main https port for the otoroshi server -https.port = ${?OTOROSHI_HTTPS_PORT} # the main https port for the otoroshi server - -play.server.https.engineProvider = "otoroshi.ssl.DynamicSSLEngineProvider" # the module to handle TLS connections dynamically -play.server.https.keyStoreDumpPath = ${?HTTPS_KEYSTORE_DUMP_PATH} # the file path where the TLSContext will be dumped (for debugging purposes only) -play.server.https.keyStoreDumpPath = ${?OTOROSHI_HTTPS_KEYSTORE_DUMP_PATH} # the file path where the TLSContext will be dumped (for debugging purposes only) - -play.http.secret.key = ${otoroshi.secret} # the secret used to sign session cookies -play.http.secret.key = ${?PLAY_CRYPTO_SECRET} # the secret used to sign session cookies -play.http.secret.key = ${?OTOROSHI_CRYPTO_SECRET} # the secret used to sign session cookies - -play.server.http.idleTimeout = 3600s # the default server idle timeout -play.server.http.idleTimeout = ${?PLAY_SERVER_IDLE_TIMEOUT} # the default server idle timeout -play.server.http.idleTimeout = ${?OTOROSHI_SERVER_IDLE_TIMEOUT} # the default server idle 
timeout -play.server.akka.requestTimeout = 3600s # the default server request timeout (for the akka server specifically) -play.server.akka.requestTimeout = ${?PLAY_SERVER_REQUEST_TIMEOUT} # the default server request timeout (for the akka server specifically) -play.server.akka.requestTimeout = ${?OTOROSHI_SERVER_REQUEST_TIMEOUT} # the default server request timeout (for the akka server specifically) - -http2.enabled = true # enable HTTP2 support -http2.enabled = ${?otoroshi.http2.enabled} -http2.enabled = ${?HTTP2_ENABLED} # enable HTTP2 support -http2.enabled = ${?OTOROSHI_HTTP2_ENABLED} # enable HTTP2 support - -play.server.https.keyStore.path=${?HTTPS_KEYSTORE_PATH} # settings for the default server keystore -play.server.https.keyStore.path=${?OTOROSHI_HTTPS_KEYSTORE_PATH} # settings for the default server keystore -play.server.https.keyStore.type=${?HTTPS_KEYSTORE_TYPE} # settings for the default server keystore -play.server.https.keyStore.type=${?OTOROSHI_HTTPS_KEYSTORE_TYPE} # settings for the default server keystore -play.server.https.keyStore.password=${?HTTPS_KEYSTORE_PASSWORD} # settings for the default server keystore -play.server.https.keyStore.password=${?OTOROSHI_HTTPS_KEYSTORE_PASSWORD} # settings for the default server keystore -play.server.https.keyStore.algorithm=${?HTTPS_KEYSTORE_ALGO} # settings for the default server keystore -play.server.https.keyStore.algorithm=${?OTOROSHI_HTTPS_KEYSTORE_ALGO} # settings for the default server keystore - - -play.server.websocket.frame.maxLength = 1024k -play.server.websocket.frame.maxLength = ${?OTOROSHI_WEBSOCKET_FRAME_MAX_LENGTH} - - - -play.application.loader = "otoroshi.loader.OtoroshiLoader" # the loader used to launch otoroshi - -play.http { - session { - secure = false # the cookie for otoroshi backoffice should be exchanged over https only - secure = ${?SESSION_SECURE_ONLY} # the cookie for otoroshi backoffice should be exchanged over https only - secure = ${?OTOROSHI_SESSION_SECURE_ONLY} # the cookie for otoroshi 
backoffice should be exchanged over https only - httpOnly = true # the cookie for otoroshi backoffice is not accessible from javascript - maxAge = 259200000 # the cookie for otoroshi backoffice max age - maxAge = ${?SESSION_MAX_AGE} # the cookie for otoroshi backoffice max age - maxAge = ${?OTOROSHI_SESSION_MAX_AGE} # the cookie for otoroshi backoffice max age - # domain = "."${?app.domain} # the cookie for otoroshi backoffice domain - domain = "."${otoroshi.domain} # the cookie for otoroshi backoffice domain - domain = ${?SESSION_DOMAIN} # the cookie for otoroshi backoffice domain - domain = ${?OTOROSHI_SESSION_DOMAIN} # the cookie for otoroshi backoffice domain - cookieName = "otoroshi-session" # the cookie for otoroshi backoffice name - cookieName = ${?SESSION_NAME} # the cookie for otoroshi backoffice name - cookieName = ${?OTOROSHI_SESSION_NAME} # the cookie for otoroshi backoffice name - } -} - - - - - - -akka { # akka specific configuration - version = "2.6.20" - loglevel = ERROR - logger-startup-timeout = 60s - log-dead-letters-during-shutdown = false - jvm-exit-on-fatal-error = false - actor { - default-dispatcher { - type = Dispatcher - executor = "fork-join-executor" - fork-join-executor { - parallelism-factor = 4.0 - parallelism-factor = ${?OTOROSHI_AKKA_DISPATCHER_PARALLELISM_FACTOR} - parallelism-min = 8 - parallelism-min = ${?OTOROSHI_AKKA_DISPATCHER_PARALLELISM_MIN} - parallelism-max = 64 - parallelism-max = ${?OTOROSHI_AKKA_DISPATCHER_PARALLELISM_MAX} - task-peeking-mode = "FIFO" - task-peeking-mode = ${?OTOROSHI_AKKA_DISPATCHER_TASK_PEEKING_MODE} - } - throughput = 1 - throughput = ${?OTOROSHI_AKKA_DISPATCHER_THROUGHPUT} - } - } - http { - server { - server-header = otoroshi - max-connections = 2048 - max-connections = ${?OTOROSHI_AKKA_HTTP_SERVER_MAX_CONNECTIONS} - remote-address-header = on - raw-request-uri-header = on - pipelining-limit = 64 - pipelining-limit = ${?OTOROSHI_AKKA_HTTP_SERVER_PIPELINING_LIMIT} - backlog = 512 - backlog = 
${?OTOROSHI_AKKA_HTTP_SERVER_BACKLOG} - socket-options { - so-receive-buffer-size = undefined - so-send-buffer-size = undefined - so-reuse-address = undefined - so-traffic-class = undefined - tcp-keep-alive = true - tcp-oob-inline = undefined - tcp-no-delay = undefined - } - http2 { - request-entity-chunk-size = 65536 b - incoming-connection-level-buffer-size = 10 MB - incoming-stream-level-buffer-size = 512kB - } - } - client { - user-agent-header = Otoroshi-akka - socket-options { - so-receive-buffer-size = undefined - so-send-buffer-size = undefined - so-reuse-address = undefined - so-traffic-class = undefined - tcp-keep-alive = true - tcp-oob-inline = undefined - tcp-no-delay = undefined - } - } - host-connection-pool { - max-connections = 512 - max-connections = ${?OTOROSHI_AKKA_HTTP_SERVER_HOST_CONNECTION_POOL_MAX_CONNECTIONS} - max-open-requests = 2048 - max-open-requests = ${?OTOROSHI_AKKA_HTTP_SERVER_HOST_CONNECTION_POOL_MAX_OPEN_REQUESTS} - pipelining-limit = 32 - pipelining-limit = ${?OTOROSHI_AKKA_HTTP_SERVER_HOST_CONNECTION_POOL_PIPELINING_LIMIT} - client { - user-agent-header = otoroshi - socket-options { - so-receive-buffer-size = undefined - so-send-buffer-size = undefined - so-reuse-address = undefined - so-traffic-class = undefined - tcp-keep-alive = true - tcp-oob-inline = undefined - tcp-no-delay = undefined - } - } - } - parsing { - max-uri-length = 4k - max-uri-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_URI_LENGTH} - max-method-length = 16 - max-method-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_METHOD_LENGTH} - max-response-reason-length = 128 - max-response-reason-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_RESPONSE_REASON_LENGTH} - max-header-name-length = 128 - max-header-name-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_HEADER_NAME_LENGTH} - max-header-value-length = 16k - max-header-value-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_HEADER_VALUE_LENGTH} - max-header-count = 128 - max-header-count = 
${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_HEADER_COUNT} - max-chunk-ext-length = 256 - max-chunk-ext-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_CHUNK_EXT_LENGTH} - max-chunk-size = 256m - max-chunk-size = ${?AKKA_HTTP_SERVER_MAX_CHUNK_SIZE} - max-chunk-size = ${?OTOROSHI_AKKA_HTTP_SERVER_MAX_CHUNK_SIZE} - max-chunk-size = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_CHUNK_SIZE} - max-content-length = infinite - max-content-length = ${?AKKA_HTTP_SERVER_MAX_CONTENT_LENGHT} - max-content-length = ${?OTOROSHI_AKKA_HTTP_SERVER_MAX_CONTENT_LENGHT} - max-content-length = ${?OTOROSHI_AKKA_HTTP_SERVER_PARSING_MAX_CONTENT_LENGHT} - } - } -} \ No newline at end of file diff --git a/otoroshi/app/auth/wasm.scala b/otoroshi/app/auth/wasm.scala index 6f247b24c2..18b55fa726 100644 --- a/otoroshi/app/auth/wasm.scala +++ b/otoroshi/app/auth/wasm.scala @@ -1,15 +1,16 @@ package otoroshi.auth import otoroshi.env.Env +import otoroshi.gateway.Errors import otoroshi.models._ import otoroshi.next.models.NgRoute import otoroshi.next.plugins.BodyHelper -import otoroshi.next.plugins.api.NgCachedConfigContext +import otoroshi.next.plugins.api.{NgAccess, NgCachedConfigContext} import otoroshi.next.utils.JsonHelpers import otoroshi.security.IdGenerator -import otoroshi.utils.JsonPathValidator +import otoroshi.utils.{JsonPathValidator, TypedMap} import otoroshi.utils.syntax.implicits._ -import otoroshi.wasm.WasmUtils +import otoroshi.wasm.{WasmFunctionParameters, WasmUtils, WasmVm} import play.api.Logger import play.api.libs.json._ import play.api.mvc._ @@ -128,29 +129,46 @@ class WasmAuthModule(val authConfig: WasmAuthModuleConfig) extends AuthModule { "is_route" -> isRoute ) val ctx = WasmAuthModuleContext(authConfig.json, route) - WasmUtils.execute(plugin.config, "pa_login_page", input, None, None).map { - case Left(err) => Results.InternalServerError(err) - case Right(output) => { - val response = - try { - Json.parse(output) - } catch { - case e: Exception => - 
WasmAuthModule.logger.error("error during json parsing", e) - Json.obj() + WasmVm.fromConfig(plugin.config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + request, + None, + None, + attrs = TypedMap.empty, + maybeRoute = ctx.route.some + ) + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("pa_login_page", input.stringify), None) + .map { + case Left(err) => Results.InternalServerError(err) + case Right(output) => { + val response = + try { + Json.parse(output._1) + } catch { + case e: Exception => + WasmAuthModule.logger.error("error during json parsing", e) + Json.obj() + } + val body = BodyHelper.extractBodyFrom(response) + val headers = response + .select("headers") + .asOpt[Map[String, String]] + .getOrElse(Map("Content-Type" -> "text/html")) + val contentType = headers.getIgnoreCase("Content-Type").getOrElse("text/html") + Results + .Status(response.select("status").asOpt[Int].getOrElse(200)) + .apply(body) + .withHeaders(headers.toSeq: _*) + .as(contentType) + } + } + .andThen { + case _ => vm.release() } - val body = BodyHelper.extractBodyFrom(response) - val headers = response - .select("headers") - .asOpt[Map[String, String]] - .getOrElse(Map("Content-Type" -> "text/html")) - val contentType = headers.getIgnoreCase("Content-Type").getOrElse("text/html") - Results - .Status(response.select("status").asOpt[Int].getOrElse(200)) - .apply(body) - .withHeaders(headers.toSeq: _*) - .as(contentType) - } } } getOrElse { Results @@ -177,20 +195,37 @@ class WasmAuthModule(val authConfig: WasmAuthModuleConfig) extends AuthModule { "user" -> user.map(_.json).getOrElse(JsNull).asValue ) val ctx = WasmAuthModuleContext(authConfig.json, route) - WasmUtils.execute(plugin.config, "pa_logout", input, None, None).map { - case Left(err) => Results.InternalServerError(err).left - case Right(output) => { - val response = - try { - Json.parse(output) - } catch { - case e: Exception => - 
WasmAuthModule.logger.error("error during json parsing", e) - Json.obj() + WasmVm.fromConfig(plugin.config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + request, + None, + None, + attrs = TypedMap.empty, + maybeRoute = ctx.route.some + ).map(_.left) + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("pa_logout", input.stringify), None) + .map { + case Left(err) => Results.InternalServerError(err).left + case Right(output) => { + val response = + try { + Json.parse(output._1) + } catch { + case e: Exception => + WasmAuthModule.logger.error("error during json parsing", e) + Json.obj() + } + val logoutUrl = response.select("logout_url").asOpt[String] + logoutUrl.right + } + } + .andThen { + case _ => vm.release() } - val logoutUrl = response.select("logout_url").asOpt[String] - logoutUrl.right - } } } getOrElse { Results @@ -215,23 +250,31 @@ class WasmAuthModule(val authConfig: WasmAuthModuleConfig) extends AuthModule { "route" -> route.json ) val ctx = WasmAuthModuleContext(authConfig.json, route) - WasmUtils.execute(plugin.config, "pa_callback", input, None, None).map { - case Left(err) => err.stringify.left - case Right(output) => { - val response = { - try { - Json.parse(output) - } catch { - case e: Exception => - WasmAuthModule.logger.error("error during json parsing", e) - Json.obj() + WasmVm.fromConfig(plugin.config).flatMap { + case None => "plugin not found !".leftf + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("pa_callback", input.stringify), None) + .map { + case Left(err) => err.stringify.left + case Right(output) => { + val response = { + try { + Json.parse(output._1) + } catch { + case e: Exception => + WasmAuthModule.logger.error("error during json parsing", e) + Json.obj() + } + } + PrivateAppsUser.fmt.reads(response) match { + case JsError(errors) => errors.toString().left + case JsSuccess(user, _) => 
user.validate(authConfig.userValidators) + } + } + } + .andThen { + case _ => vm.release() } - } - PrivateAppsUser.fmt.reads(response) match { - case JsError(errors) => errors.toString().left - case JsSuccess(user, _) => user.validate(authConfig.userValidators) - } - } } } getOrElse { "wasm module not found".left.vfuture @@ -248,29 +291,46 @@ class WasmAuthModule(val authConfig: WasmAuthModuleConfig) extends AuthModule { "global_config" -> config.json ) val ctx = WasmAuthModuleContext(authConfig.json, NgRoute.empty) - WasmUtils.execute(plugin.config, "bo_login_page", input, None, None).map { - case Left(err) => Results.InternalServerError(err) - case Right(output) => { - val response = - try { - Json.parse(output) - } catch { - case e: Exception => - WasmAuthModule.logger.error("error during json parsing", e) - Json.obj() + WasmVm.fromConfig(plugin.config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + request, + None, + None, + attrs = TypedMap.empty, + maybeRoute = ctx.route.some + ) + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("bo_login_page", input.stringify), None) + .map { + case Left(err) => Results.InternalServerError(err) + case Right(output) => { + val response = + try { + Json.parse(output._1) + } catch { + case e: Exception => + WasmAuthModule.logger.error("error during json parsing", e) + Json.obj() + } + val body = BodyHelper.extractBodyFrom(response) + val headers = response + .select("headers") + .asOpt[Map[String, String]] + .getOrElse(Map("Content-Type" -> "text/html")) + val contentType = headers.getIgnoreCase("Content-Type").getOrElse("text/html") + Results + .Status(response.select("status").asOpt[Int].getOrElse(200)) + .apply(body) + .withHeaders(headers.toSeq: _*) + .as(contentType) + } + } + .andThen { + case _ => vm.release() } - val body = BodyHelper.extractBodyFrom(response) - val headers = response - .select("headers") - .asOpt[Map[String, String]] - 
.getOrElse(Map("Content-Type" -> "text/html")) - val contentType = headers.getIgnoreCase("Content-Type").getOrElse("text/html") - Results - .Status(response.select("status").asOpt[Int].getOrElse(200)) - .apply(body) - .withHeaders(headers.toSeq: _*) - .as(contentType) - } } } getOrElse { Results @@ -292,20 +352,37 @@ class WasmAuthModule(val authConfig: WasmAuthModuleConfig) extends AuthModule { "user" -> user.json ) val ctx = WasmAuthModuleContext(authConfig.json, NgRoute.empty) - WasmUtils.execute(plugin.config, "bo_logout", input, None, None).map { - case Left(err) => Results.InternalServerError(err).left - case Right(output) => { - val response = - try { - Json.parse(output) - } catch { - case e: Exception => - WasmAuthModule.logger.error("error during json parsing", e) - Json.obj() + WasmVm.fromConfig(plugin.config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + request, + None, + None, + attrs = TypedMap.empty, + maybeRoute = ctx.route.some + ).map(_.left) + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("bo_logout", input.stringify), None) + .map { + case Left(err) => Results.InternalServerError(err).left + case Right(output) => { + val response = + try { + Json.parse(output._1) + } catch { + case e: Exception => + WasmAuthModule.logger.error("error during json parsing", e) + Json.obj() + } + val logoutUrl = response.select("logout_url").asOpt[String] + logoutUrl.right + } + } + .andThen { + case _ => vm.release() } - val logoutUrl = response.select("logout_url").asOpt[String] - logoutUrl.right - } } } getOrElse { Results @@ -327,23 +404,31 @@ class WasmAuthModule(val authConfig: WasmAuthModuleConfig) extends AuthModule { "global_config" -> config.json ) val ctx = WasmAuthModuleContext(authConfig.json, NgRoute.empty) - WasmUtils.execute(plugin.config, "bo_callback", input, None, None).map { - case Left(err) => err.stringify.left - case Right(output) => { - val response = { - 
try { - Json.parse(output) - } catch { - case e: Exception => - WasmAuthModule.logger.error("error during json parsing", e) - Json.obj() + WasmVm.fromConfig(plugin.config).flatMap { + case None => "plugin not found !".leftf + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("bo_callback", input.stringify), None) + .map { + case Left(err) => err.stringify.left + case Right(output) => { + val response = { + try { + Json.parse(output._1) + } catch { + case e: Exception => + WasmAuthModule.logger.error("error during json parsing", e) + Json.obj() + } + } + BackOfficeUser.fmt.reads(response) match { + case JsError(errors) => errors.toString().left + case JsSuccess(user, _) => user.validate(authConfig.userValidators) + } + } + } + .andThen { + case _ => vm.release() } - } - BackOfficeUser.fmt.reads(response) match { - case JsError(errors) => errors.toString().left - case JsSuccess(user, _) => user.validate(authConfig.userValidators) - } - } } } getOrElse { "wasm module not found".left.vfuture diff --git a/otoroshi/app/cluster/cluster.scala b/otoroshi/app/cluster/cluster.scala index 4f90751712..ef15350539 100644 --- a/otoroshi/app/cluster/cluster.scala +++ b/otoroshi/app/cluster/cluster.scala @@ -39,7 +39,7 @@ import otoroshi.storage.stores._ import otoroshi.tcp.{KvTcpServiceDataStoreDataStore, TcpServiceDataStore} import otoroshi.utils import otoroshi.utils.SchedulerHelper -import otoroshi.utils.cache.types.{LegitConcurrentHashMap, LegitTrieMap} +import otoroshi.utils.cache.types.{UnboundedConcurrentHashMap, UnboundedTrieMap} import otoroshi.utils.http.Implicits._ import otoroshi.utils.http.MtlsConfig import otoroshi.utils.syntax.implicits._ @@ -1452,15 +1452,15 @@ class ClusterAgent(config: ClusterConfig, env: Env) { ///////////// private val apiIncrementsRef = - new AtomicReference[TrieMap[String, AtomicLong]](new LegitTrieMap[String, AtomicLong]()) + new AtomicReference[TrieMap[String, AtomicLong]](new UnboundedTrieMap[String, AtomicLong]()) 
private val servicesIncrementsRef = new AtomicReference[TrieMap[String, (AtomicLong, AtomicLong, AtomicLong)]]( - new LegitTrieMap[String, (AtomicLong, AtomicLong, AtomicLong)]() + new UnboundedTrieMap[String, (AtomicLong, AtomicLong, AtomicLong)]() ) private val workerSessionsCache = Scaffeine() .maximumSize(1000L) .expireAfterWrite(env.clusterConfig.worker.state.pollEvery.millis * 3) .build[String, PrivateAppsUser]() - private[cluster] val counters = new LegitTrieMap[String, AtomicLong]() + private[cluster] val counters = new UnboundedTrieMap[String, AtomicLong]() ///////////// def lastSync: DateTime = lastPoll.get() @@ -1844,8 +1844,8 @@ class ClusterAgent(config: ClusterConfig, env: Env) { Cluster.logger.error(s"unable to load cluster state from backup: ${err}") false.vfuture case Right(payload) => { - val store = new LegitConcurrentHashMap[String, Any]() - val expirations = new LegitConcurrentHashMap[String, Long]() + val store = new UnboundedConcurrentHashMap[String, Any]() + val expirations = new UnboundedConcurrentHashMap[String, Long]() payload .chunks(32 * 1024) .via(Framing.delimiter(ByteString("\n"), 32 * 1024 * 1024, true)) @@ -1891,7 +1891,7 @@ class ClusterAgent(config: ClusterConfig, env: Env) { Some(list) } case "hash" if modern => { - val map = new LegitTrieMap[String, ByteString]() + val map = new UnboundedTrieMap[String, ByteString]() map.++=(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String])))) Some(map) } @@ -1906,7 +1906,7 @@ class ClusterAgent(config: ClusterConfig, env: Env) { Some(list) } case "hash" => { - val map = new LegitConcurrentHashMap[String, ByteString] + val map = new UnboundedConcurrentHashMap[String, ByteString] map.putAll(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String]))).asJava) Some(map) } @@ -1979,8 +1979,8 @@ class ClusterAgent(config: ClusterConfig, env: Env) { Cluster.logger.debug( s"[${env.clusterConfig.mode.name}] Fetching state from Otoroshi leader cluster done ! 
(${DateTime.now()})" ) - val store = new LegitConcurrentHashMap[String, Any]() - val expirations = new LegitConcurrentHashMap[String, Long]() + val store = new UnboundedConcurrentHashMap[String, Any]() + val expirations = new UnboundedConcurrentHashMap[String, Long]() val responseFrom = resp.header("X-Data-From").map(_.toLong) val responseDigest = resp.header("X-Data-Digest") val responseCount = resp.header("X-Data-Count") @@ -2101,9 +2101,9 @@ class ClusterAgent(config: ClusterConfig, env: Env) { try { implicit val _env = env if (isPushingQuotas.compareAndSet(false, true)) { - val oldApiIncr = apiIncrementsRef.getAndSet(new LegitTrieMap[String, AtomicLong]()) + val oldApiIncr = apiIncrementsRef.getAndSet(new UnboundedTrieMap[String, AtomicLong]()) val oldServiceIncr = - servicesIncrementsRef.getAndSet(new LegitTrieMap[String, (AtomicLong, AtomicLong, AtomicLong)]()) + servicesIncrementsRef.getAndSet(new UnboundedTrieMap[String, (AtomicLong, AtomicLong, AtomicLong)]()) //if (oldApiIncr.nonEmpty || oldServiceIncr.nonEmpty) { val start = System.currentTimeMillis() Retry @@ -2404,8 +2404,8 @@ class SwappableInMemoryDataStores( private def readStateFromDisk(source: Seq[String]): Unit = { if (Cluster.logger.isDebugEnabled) Cluster.logger.debug("Reading state from disk ...") - val store = new LegitConcurrentHashMap[String, Any]() - val expirations = new LegitConcurrentHashMap[String, Long]() + val store = new UnboundedConcurrentHashMap[String, Any]() + val expirations = new UnboundedConcurrentHashMap[String, Long]() source.foreach { raw => val item = Json.parse(raw) val key = (item \ "k").as[String] @@ -2438,7 +2438,7 @@ class SwappableInMemoryDataStores( Some(list) } case "hash" if modern => { - val map = new LegitTrieMap[String, ByteString]() + val map = new UnboundedTrieMap[String, ByteString]() map.++=(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String])))) Some(map) } @@ -2453,7 +2453,7 @@ class SwappableInMemoryDataStores( Some(list) } case "hash" 
=> { - val map = new LegitConcurrentHashMap[String, ByteString] + val map = new UnboundedConcurrentHashMap[String, ByteString] map.putAll(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String]))).asJava) Some(map) } diff --git a/otoroshi/app/events/OtoroshiEventsActor.scala b/otoroshi/app/events/OtoroshiEventsActor.scala index 0fb73995d7..2ca3e0fb33 100644 --- a/otoroshi/app/events/OtoroshiEventsActor.scala +++ b/otoroshi/app/events/OtoroshiEventsActor.scala @@ -8,14 +8,7 @@ import akka.actor.{Actor, Props} import akka.http.scaladsl.model.{ContentType, ContentTypes} import akka.http.scaladsl.util.FastFuture import akka.stream.alpakka.s3.scaladsl.S3 -import akka.stream.alpakka.s3.{ - ApiVersion, - ListBucketResultContents, - MemoryBufferType, - MetaHeaders, - S3Attributes, - S3Settings -} +import akka.stream.alpakka.s3.{ApiVersion, ListBucketResultContents, MemoryBufferType, MetaHeaders, S3Attributes, S3Settings} import akka.stream.scaladsl.{Keep, Sink, Source, SourceQueueWithComplete} import akka.stream.{Attributes, OverflowStrategy, QueueOfferResult} import com.sksamuel.pulsar4s.Producer @@ -33,31 +26,18 @@ import otoroshi.script._ import otoroshi.security.IdGenerator import otoroshi.storage.drivers.inmemory.S3Configuration import otoroshi.utils.TypedMap -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.json.JsonOperationsHelper import otoroshi.utils.mailer.{EmailLocation, MailerSettings} import play.api.Logger -import play.api.libs.json.{ - Format, - JsArray, - JsBoolean, - JsError, - JsNull, - JsNumber, - JsObject, - JsResult, - JsString, - JsSuccess, - JsValue, - Json -} +import play.api.libs.json.{Format, JsArray, JsBoolean, JsError, JsNull, JsNumber, JsObject, JsResult, JsString, JsSuccess, JsValue, Json} import scala.collection.concurrent.TrieMap import scala.concurrent.duration._ import scala.concurrent.{ExecutionContext, Future, Promise} import scala.util.{Failure, 
Success, Try} import otoroshi.utils.syntax.implicits._ -import otoroshi.wasm.{WasmConfig, WasmUtils} +import otoroshi.wasm.{WasmConfig, WasmFunctionParameters, WasmUtils, WasmVm} import software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, StaticCredentialsProvider} import software.amazon.awssdk.regions.Region import software.amazon.awssdk.regions.providers.AwsRegionProvider @@ -80,7 +60,7 @@ class OtoroshiEventsActorSupervizer(env: Env) extends Actor { implicit val e = env implicit val ec = env.analyticsExecutionContext - val dataExporters: TrieMap[String, DataExporter] = new LegitTrieMap[String, DataExporter]() + val dataExporters: TrieMap[String, DataExporter] = new UnboundedTrieMap[String, DataExporter]() val lastUpdate = new AtomicReference[Long](0L) override def receive: Receive = { @@ -1244,20 +1224,26 @@ object Exporters { "config" -> configUnsafe.json ) // println(s"call send: ${events.size}") - WasmUtils - .execute(plugin.config, "export_events", input ++ Json.obj("events" -> JsArray(events)), attrs, None) - .map { - case Left(err) => ExportResult.ExportResultFailure(err.stringify) - case Right(res) => - res.parseJson.select("error").asOpt[JsValue] match { - case None => ExportResult.ExportResultSuccess - case Some(error) => ExportResult.ExportResultFailure(error.stringify) + WasmVm.fromConfig(plugin.config).flatMap { + case None => ExportResult.ExportResultFailure("plugin not found !").vfuture + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("export_events", (input ++ Json.obj("events" -> JsArray(events))).stringify), None) + .map { + case Left(err) => ExportResult.ExportResultFailure(err.stringify) + case Right(res) => + res._1.parseJson.select("error").asOpt[JsValue] match { + case None => ExportResult.ExportResultSuccess + case Some(error) => ExportResult.ExportResultFailure(error.stringify) + } + } + .recover { case e => + e.printStackTrace() + ExportResult.ExportResultFailure(e.getMessage) } - } - .recover { case e 
=> - e.printStackTrace() - ExportResult.ExportResultFailure(e.getMessage) - } + .andThen { + case _ => vm.release() + } + } } } .getOrElse(ExportResult.ExportResultSuccess.vfuture) diff --git a/otoroshi/app/events/impl/ElasticAnalytics.scala b/otoroshi/app/events/impl/ElasticAnalytics.scala index 05e9115ce0..d59b1f4e98 100644 --- a/otoroshi/app/events/impl/ElasticAnalytics.scala +++ b/otoroshi/app/events/impl/ElasticAnalytics.scala @@ -11,7 +11,7 @@ import otoroshi.events._ import otoroshi.models.{ApiKey, ElasticAnalyticsConfig, IndexSettingsInterval, ServiceDescriptor, ServiceGroup} import org.joda.time.format.{DateTimeFormatterBuilder, ISODateTimeFormat} import org.joda.time.{DateTime, Interval} -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import play.api.libs.json.Json.JsValueWrapper import play.api.libs.json._ import play.api.libs.ws.{WSClient, WSRequest} @@ -335,7 +335,7 @@ object ElasticTemplates { object ElasticWritesAnalytics { - val clusterInitializedCache = new LegitTrieMap[String, (Boolean, ElasticVersion)]() + val clusterInitializedCache = new UnboundedTrieMap[String, (Boolean, ElasticVersion)]() def toKey(config: ElasticAnalyticsConfig): String = { val index: String = config.index.getOrElse("otoroshi-events") diff --git a/otoroshi/app/gateway/circuitbreakers.scala b/otoroshi/app/gateway/circuitbreakers.scala index 156c75ff43..2bfabec444 100644 --- a/otoroshi/app/gateway/circuitbreakers.scala +++ b/otoroshi/app/gateway/circuitbreakers.scala @@ -14,7 +14,7 @@ import otoroshi.events._ import otoroshi.health.HealthCheck import otoroshi.models.{ApiKey, ClientConfig, GlobalConfig, LoadBalancing, ServiceDescriptor, Target} import otoroshi.utils.TypedMap -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import play.api.Logger import play.api.http.websocket.{Message => PlayWSMessage} import play.api.mvc.{RequestHeader, Result} @@ -125,7 +125,7 @@ case class 
AkkaCircuitBreakerWrapper( class ServiceDescriptorCircuitBreaker()(implicit ec: ExecutionContext, scheduler: Scheduler, env: Env) { val reqCounter = new AtomicInteger(0) - val breakers = new LegitTrieMap[String, AkkaCircuitBreakerWrapper]() + val breakers = new UnboundedTrieMap[String, AkkaCircuitBreakerWrapper]() lazy val logger = Logger("otoroshi-circuit-breaker") @@ -394,7 +394,7 @@ class ServiceDescriptorCircuitBreaker()(implicit ec: ExecutionContext, scheduler class CircuitBreakersHolder() { - private val circuitBreakers = new LegitTrieMap[String, ServiceDescriptorCircuitBreaker]() + private val circuitBreakers = new UnboundedTrieMap[String, ServiceDescriptorCircuitBreaker]() def get(id: String, defaultValue: () => ServiceDescriptorCircuitBreaker): ServiceDescriptorCircuitBreaker = { circuitBreakers.getOrElseUpdate(id, defaultValue()) diff --git a/otoroshi/app/health/healthchecker.scala b/otoroshi/app/health/healthchecker.scala index b1888f1ae6..5ed7f5fb16 100644 --- a/otoroshi/app/health/healthchecker.scala +++ b/otoroshi/app/health/healthchecker.scala @@ -15,7 +15,7 @@ import otoroshi.next.plugins.api.NgPluginCategory import otoroshi.script.{Job, JobContext, JobId, JobInstantiation, JobKind, JobStarting, JobVisibility} import play.api.Logger import otoroshi.security.{IdGenerator, OtoroshiClaim} -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import scala.concurrent.{ExecutionContext, Future} import scala.concurrent.duration.{Duration, FiniteDuration} @@ -31,7 +31,7 @@ object HealthCheck { import otoroshi.utils.http.Implicits._ - val badHealth = new LegitTrieMap[String, Unit]() + val badHealth = new UnboundedTrieMap[String, Unit]() def checkTarget(desc: ServiceDescriptor, target: Target, logger: Logger)(implicit env: Env, diff --git a/otoroshi/app/models/descriptor.scala b/otoroshi/app/models/descriptor.scala index 4a52705b9c..6af0fbdc3d 100644 --- a/otoroshi/app/models/descriptor.scala +++ 
b/otoroshi/app/models/descriptor.scala @@ -33,7 +33,7 @@ import otoroshi.utils.config.ConfigUtils import otoroshi.utils.gzip.GzipConfig import otoroshi.utils.ReplaceAllWith import otoroshi.utils.cache.Caches -import otoroshi.utils.cache.types.{LegitConcurrentHashMap, LegitTrieMap} +import otoroshi.utils.cache.types.{UnboundedConcurrentHashMap, UnboundedTrieMap} import otoroshi.utils.http.{CacheConnectionSettings, MtlsConfig} import scala.collection.concurrent.TrieMap @@ -62,9 +62,9 @@ case class ServiceDescriptorQuery( case s => s"$subdomain.$line.$domain" } - private val existsCache = new LegitConcurrentHashMap[String, Boolean] - private val serviceIdsCache = new LegitConcurrentHashMap[String, Seq[String]] - private val servicesCache = new LegitConcurrentHashMap[String, Seq[ServiceDescriptor]] + private val existsCache = new UnboundedConcurrentHashMap[String, Boolean] + private val serviceIdsCache = new UnboundedConcurrentHashMap[String, Seq[String]] + private val servicesCache = new UnboundedConcurrentHashMap[String, Seq[ServiceDescriptor]] def exists()(implicit ec: ExecutionContext, env: Env): Future[Boolean] = { val key = this.asKey @@ -288,7 +288,7 @@ case class AtomicAverage(count: AtomicLong, sum: AtomicLong) { object BestResponseTime extends LoadBalancing { private[models] val random = new scala.util.Random - private[models] val responseTimes = new LegitTrieMap[String, AtomicAverage]() + private[models] val responseTimes = new UnboundedTrieMap[String, AtomicAverage]() def incrementAverage(desc: ServiceDescriptor, target: Target, responseTime: Long): Unit = { val key = s"${desc.id}-${target.asKey}" @@ -2507,7 +2507,7 @@ trait ServiceDescriptorDataStore extends BasicStore[ServiceDescriptor] { } */ - val matched = new LegitTrieMap[String, String]() + val matched = new UnboundedTrieMap[String, String]() val filtered1 = services.filter { sr => val allHeadersMatched = matchAllHeaders(sr, query) val rootMatched = sr.allPaths match { diff --git 
a/otoroshi/app/models/wasm.scala b/otoroshi/app/models/wasm.scala index 12b1238ce6..0bcf80b48f 100644 --- a/otoroshi/app/models/wasm.scala +++ b/otoroshi/app/models/wasm.scala @@ -12,6 +12,7 @@ import play.api.libs.json._ import scala.concurrent.duration.{DurationInt, FiniteDuration} import scala.concurrent.{ExecutionContext, Future} import scala.util.{Failure, Success, Try} +import otoroshi.wasm.WasmVmPool case class WasmPlugin( id: String, @@ -30,6 +31,7 @@ case class WasmPlugin( override def theDescription: String = description override def theTags: Seq[String] = tags override def theMetadata: Map[String, String] = metadata + def pool()(implicit env: Env): WasmVmPool = WasmVmPool.forPlugin(this) } object WasmPlugin { diff --git a/otoroshi/app/netty/res.scala b/otoroshi/app/netty/res.scala index 8b4d44caf5..3625e95534 100644 --- a/otoroshi/app/netty/res.scala +++ b/otoroshi/app/netty/res.scala @@ -1,12 +1,13 @@ package reactor.netty.resources +import otoroshi.utils.cache.types.UnboundedTrieMap + import java.lang.reflect.{Field, Modifier} -import scala.collection.concurrent.TrieMap import scala.util.Try object DefaultLoopResourcesHelper { - private val cache = new TrieMap[String, LoopResources]() + private val cache = new UnboundedTrieMap[String, LoopResources]() def getDefaultLoop(name: String, workers: Int, daemon: Boolean): LoopResources = { val key = s"default-$name-$workers-$daemon" diff --git a/otoroshi/app/next/extensions/example.scala b/otoroshi/app/next/extensions/example.scala index a9be21a546..e433b80798 100644 --- a/otoroshi/app/next/extensions/example.scala +++ b/otoroshi/app/next/extensions/example.scala @@ -13,7 +13,7 @@ import otoroshi.next.plugins.api.{ NgStep } import otoroshi.storage._ -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits._ import play.api.libs.json._ import play.api.mvc.Results @@ -79,7 +79,7 @@ class FooAdminExtensionDatastores(env: Env, 
extensionId: AdminExtensionId) { class FooAdminExtensionState(env: Env) { - private val foos = new LegitTrieMap[String, Foo]() + private val foos = new UnboundedTrieMap[String, Foo]() def foo(id: String): Option[Foo] = foos.get(id) def allFoos(): Seq[Foo] = foos.values.toSeq diff --git a/otoroshi/app/next/extensions/extension.scala b/otoroshi/app/next/extensions/extension.scala index 1c9cbe120d..4b514794a9 100644 --- a/otoroshi/app/next/extensions/extension.scala +++ b/otoroshi/app/next/extensions/extension.scala @@ -7,7 +7,7 @@ import otoroshi.api.Resource import otoroshi.env.Env import otoroshi.models.{ApiKey, BackOfficeUser, EntityLocationSupport, PrivateAppsUser} import otoroshi.storage.BasicStore -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits._ import play.api.Configuration import play.api.libs.json.{Format, JsObject, JsResult, JsSuccess, JsValue, Reads} @@ -245,7 +245,7 @@ class AdminExtensions(env: Env, _extensions: Seq[AdminExtension]) { ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// - private val extCache = new LegitTrieMap[Class[_], Any] + private val extCache = new UnboundedTrieMap[Class[_], Any] def extension[A](implicit ct: ClassTag[A]): Option[A] = { if (hasExtensions) { diff --git a/otoroshi/app/next/models/treerouter.scala b/otoroshi/app/next/models/treerouter.scala index c61bdc639c..a546405ecb 100644 --- a/otoroshi/app/next/models/treerouter.scala +++ b/otoroshi/app/next/models/treerouter.scala @@ -3,7 +3,7 @@ package otoroshi.next.models import com.github.blemale.scaffeine.Scaffeine import otoroshi.env.Env import otoroshi.models.{ClientConfig, EntityLocation} -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.RequestImplicits.EnhancedRequestHeader import otoroshi.utils.syntax.implicits._ import 
otoroshi.utils.{RegexPool, TypedMap} @@ -65,7 +65,7 @@ case class NgMatchedRoute( } object NgTreeRouter { - def empty = NgTreeRouter(new LegitTrieMap[String, NgTreeNodePath](), scala.collection.mutable.MutableList.empty) + def empty = NgTreeRouter(new UnboundedTrieMap[String, NgTreeNodePath](), scala.collection.mutable.MutableList.empty) def build(routes: Seq[NgRoute]): NgTreeRouter = { val root = NgTreeRouter.empty routes.foreach { route => @@ -153,7 +153,7 @@ object NgTreeNodePath { } def empty: NgTreeNodePath = - NgTreeNodePath(scala.collection.mutable.MutableList.empty, new LegitTrieMap[String, NgTreeNodePath]) + NgTreeNodePath(scala.collection.mutable.MutableList.empty, new UnboundedTrieMap[String, NgTreeNodePath]) } case class NgTreeNodePath( diff --git a/otoroshi/app/next/plugins/Keys.scala b/otoroshi/app/next/plugins/Keys.scala index 039a8128ae..40206d068e 100644 --- a/otoroshi/app/next/plugins/Keys.scala +++ b/otoroshi/app/next/plugins/Keys.scala @@ -3,7 +3,7 @@ package otoroshi.next.plugins import otoroshi.models.{ApiKey, ApikeyTuple, JwtInjection} import otoroshi.next.models._ import otoroshi.next.proxy.NgExecutionReport -import otoroshi.wasm.WasmContext +import otoroshi.wasm.{WasmContext, WasmVm} import play.api.libs.typedmap.TypedKey import play.api.mvc.Result diff --git a/otoroshi/app/next/plugins/clientcert.scala b/otoroshi/app/next/plugins/clientcert.scala index 4536c67ba7..cf598b1b0b 100644 --- a/otoroshi/app/next/plugins/clientcert.scala +++ b/otoroshi/app/next/plugins/clientcert.scala @@ -9,10 +9,10 @@ import otoroshi.env.Env import otoroshi.gateway.Errors import otoroshi.models.{ApiKey, RemainingQuotas, RouteIdentifier, ServiceDescriptorIdentifier} import otoroshi.next.models.NgTlsConfig -import otoroshi.next.plugins.api.{NgPluginConfig, _} +import otoroshi.next.plugins.api._ import otoroshi.security.IdGenerator import otoroshi.utils.RegexPool -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap 
import otoroshi.utils.http.DN import otoroshi.utils.http.RequestImplicits.EnhancedRequestHeader import otoroshi.utils.syntax.implicits._ @@ -465,7 +465,7 @@ class NgHasClientCertMatchingHttpValidator extends NgAccessValidator { override def visibility: NgPluginVisibility = NgPluginVisibility.NgUserLand override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.AccessControl) override def steps: Seq[NgStep] = Seq(NgStep.ValidateAccess) - private val cache = new LegitTrieMap[String, (Long, JsValue)] + private val cache = new UnboundedTrieMap[String, (Long, JsValue)] def forbidden(ctx: NgAccessContext)(implicit env: Env, ec: ExecutionContext): Future[NgAccess] = { Errors diff --git a/otoroshi/app/next/plugins/graphql.scala b/otoroshi/app/next/plugins/graphql.scala index a0d2838bf1..b6c3921a22 100644 --- a/otoroshi/app/next/plugins/graphql.scala +++ b/otoroshi/app/next/plugins/graphql.scala @@ -744,53 +744,53 @@ class GraphQLBackend extends NgBackendCall { "route" -> ctx.route.json, "request" -> ctx.request.json ) - WasmUtils - .execute( - WasmConfig( - source = WasmSource(WasmSourceKind(wasmSourceKind.getOrElse("")), wasmSourcePath.getOrElse("")), - memoryPages = wasmMemoryPages.getOrElse(30), - functionName = wasmFunctionName, - config = Map.empty, - allowedHosts = wasmAllowedHosts.getOrElse(Seq.empty), - wasi = wasmWasi, - authorizations = WasmAuthorizations( - proxyHttpCallTimeout = wasmProxyHttpCallTimeout.getOrElse(5000), - httpAccess = wasmHttpAccess.getOrElse(false), - globalDataStoreAccess = WasmDataRights( - read = wasmGlobalDataStoreAccessRead.getOrElse(false), - write = wasmGlobalDataStoreAccessWrite.getOrElse(false) - ), - pluginDataStoreAccess = WasmDataRights( - read = wasmPluginDataStoreAccessRead.getOrElse(false), - write = wasmPluginDataStoreAccessWrite.getOrElse(false) - ), - globalMapAccess = WasmDataRights( - read = wasmGlobalMapAccessRead.getOrElse(false), - write = wasmGlobalMapAccessWrite.getOrElse(false) - ), - pluginMapAccess = 
WasmDataRights( - read = wasmPluginMapAccessRead.getOrElse(false), - write = wasmPluginMapAccessWrite.getOrElse(false) - ), - proxyStateAccess = wasmProxyStateAccess.getOrElse(false), - configurationAccess = wasmConfigurationAccess.getOrElse(false) - ) + val wsmCfg = WasmConfig( + source = WasmSource(WasmSourceKind(wasmSourceKind.getOrElse("")), wasmSourcePath.getOrElse("")), + memoryPages = wasmMemoryPages.getOrElse(30), + functionName = wasmFunctionName, + config = Map.empty, + allowedHosts = wasmAllowedHosts.getOrElse(Seq.empty), + wasi = wasmWasi, + authorizations = WasmAuthorizations( + proxyHttpCallTimeout = wasmProxyHttpCallTimeout.getOrElse(5000), + httpAccess = wasmHttpAccess.getOrElse(false), + globalDataStoreAccess = WasmDataRights( + read = wasmGlobalDataStoreAccessRead.getOrElse(false), + write = wasmGlobalDataStoreAccessWrite.getOrElse(false) + ), + pluginDataStoreAccess = WasmDataRights( + read = wasmPluginDataStoreAccessRead.getOrElse(false), + write = wasmPluginDataStoreAccessWrite.getOrElse(false) + ), + globalMapAccess = WasmDataRights( + read = wasmGlobalMapAccessRead.getOrElse(false), + write = wasmGlobalMapAccessWrite.getOrElse(false) ), - "execute", - input, - ctx.attrs.some, - None + pluginMapAccess = WasmDataRights( + read = wasmPluginMapAccessRead.getOrElse(false), + write = wasmPluginMapAccessWrite.getOrElse(false) + ), + proxyStateAccess = wasmProxyStateAccess.getOrElse(false), + configurationAccess = wasmConfigurationAccess.getOrElse(false) ) - .map { - case Right(output) => - try { - Json.parse(output) - } catch { - case _: Exception => - output + ) + WasmVm.fromConfig(wsmCfg).flatMap { + case None => Future.failed(WasmException("plugin not found !")) + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("execute", input.stringify), None) + .map { + case Right(output) => + try { + Json.parse(output._1) + } catch { + case _: Exception => output + } + case Left(error) => error } - case Left(error) => error - } + 
.andThen { + case _ => vm.release() + } + } } } diff --git a/otoroshi/app/next/plugins/izanami.scala b/otoroshi/app/next/plugins/izanami.scala index aaf1241ebd..87cd73bd8e 100644 --- a/otoroshi/app/next/plugins/izanami.scala +++ b/otoroshi/app/next/plugins/izanami.scala @@ -10,7 +10,7 @@ import otoroshi.next.models.NgTlsConfig import otoroshi.next.plugins.api._ import otoroshi.security.IdGenerator import otoroshi.utils.RegexPool -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.RequestImplicits.EnhancedRequestHeader import otoroshi.utils.http.WSCookieWithSameSite import otoroshi.utils.syntax.implicits._ @@ -341,7 +341,7 @@ class NgIzanamiV1Canary extends NgRequestTransformer { override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.Integrations) override def steps: Seq[NgStep] = Seq(NgStep.TransformRequest, NgStep.TransformResponse) - private val cookieJar = new LegitTrieMap[String, WSCookie]() + private val cookieJar = new UnboundedTrieMap[String, WSCookie]() private val cache: Cache[String, JsValue] = Scaffeine() .recordStats() diff --git a/otoroshi/app/next/plugins/mirror.scala b/otoroshi/app/next/plugins/mirror.scala index 73a2f220fb..23d5ddae9e 100644 --- a/otoroshi/app/next/plugins/mirror.scala +++ b/otoroshi/app/next/plugins/mirror.scala @@ -12,7 +12,7 @@ import otoroshi.next.models.NgRoute import otoroshi.next.plugins.api._ import otoroshi.plugins.mirror.MirroringPluginConfig import otoroshi.utils.UrlSanitizer -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.Implicits._ import otoroshi.utils.http.RequestImplicits.EnhancedRequestHeader import otoroshi.utils.syntax.implicits._ @@ -248,7 +248,7 @@ class NgTrafficMirroring extends NgRequestTransformer { override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.Other) override def steps: Seq[NgStep] = Seq(NgStep.TransformRequest, 
NgStep.TransformResponse) - private val inFlightRequests = new LegitTrieMap[String, NgRequestContext]() + private val inFlightRequests = new UnboundedTrieMap[String, NgRequestContext]() override def beforeRequest( ctx: NgBeforeRequestContext diff --git a/otoroshi/app/next/plugins/tracing.scala b/otoroshi/app/next/plugins/tracing.scala index c1e6f3bf96..0ecd789d75 100644 --- a/otoroshi/app/next/plugins/tracing.scala +++ b/otoroshi/app/next/plugins/tracing.scala @@ -18,7 +18,7 @@ import io.opentelemetry.sdk.trace.data.SpanData import otoroshi.el.GlobalExpressionLanguage import otoroshi.env.Env import otoroshi.next.plugins.api._ -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.RequestImplicits.EnhancedRequestHeader import otoroshi.utils.syntax.implicits._ import play.api.libs.json._ @@ -103,7 +103,7 @@ case class SdkWrapper(config: W3CTracingConfig, sdk: OpenTelemetrySdk, traceProv class W3CTracing extends NgRequestTransformer { - private val opentelemetrysdks = new LegitTrieMap[String, SdkWrapper]() + private val opentelemetrysdks = new UnboundedTrieMap[String, SdkWrapper]() override def steps: Seq[NgStep] = Seq(NgStep.TransformRequest, NgStep.TransformResponse) override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.Monitoring) diff --git a/otoroshi/app/next/plugins/wasm.scala b/otoroshi/app/next/plugins/wasm.scala index c76c7dc1fa..4fe071d958 100644 --- a/otoroshi/app/next/plugins/wasm.scala +++ b/otoroshi/app/next/plugins/wasm.scala @@ -11,9 +11,10 @@ import otoroshi.next.plugins.api._ import otoroshi.next.proxy.NgProxyEngineError import otoroshi.next.utils.JsonHelpers import otoroshi.script._ -import otoroshi.utils.{ConcurrentMutableTypedMap, TypedMap} +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.RequestImplicits.EnhancedRequestHeader import otoroshi.utils.syntax.implicits._ +import otoroshi.utils.{ConcurrentMutableTypedMap, TypedMap} 
import otoroshi.wasm._ import play.api.Logger import play.api.http.HttpEntity @@ -21,7 +22,6 @@ import play.api.libs.json._ import play.api.libs.ws.WSCookie import play.api.mvc.{Request, Result, Results} -import scala.collection.concurrent.TrieMap import scala.concurrent.duration.{DurationInt, DurationLong, FiniteDuration} import scala.concurrent.{Await, ExecutionContext, Future} import scala.util.{Failure, Success, Try} @@ -110,11 +110,17 @@ class WasmRouteMatcher extends NgRouteMatcher { val config = ctx .cachedConfig(internalName)(WasmConfig.format) .getOrElse(WasmConfig()) - val res = - Await.result(WasmUtils.execute(config, "matches_route", ctx.wasmJson, ctx.attrs.some, None), 10.seconds) + val fu = WasmVm.fromConfig(config).flatMap { + case None => Left(Json.obj("error" -> "plugin not found")).vfuture + case Some((vm, localConfig)) => vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("matches_route"), ctx.wasmJson.stringify), None) + .andThen { + case _ => vm.release() + } + } + val res = Await.result(fu, 10.seconds) res match { case Right(res) => { - val response = Json.parse(res) + val response = Json.parse(res._1) AttrsHelper.updateAttrs(ctx.attrs, response) (response \ "result").asOpt[Boolean].getOrElse(false) } @@ -144,37 +150,55 @@ class WasmPreRoute extends NgPreRouting { .cachedConfig(internalName)(WasmConfig.format) .getOrElse(WasmConfig()) val input = ctx.wasmJson - WasmUtils.execute(config, "pre_route", input, ctx.attrs.some, None).map { - case Left(err) => Left(NgPreRoutingErrorWithResult(Results.InternalServerError(err))) - case Right(resStr) => { - Try(Json.parse(resStr)) match { - case Failure(e) => - Left(NgPreRoutingErrorWithResult(Results.InternalServerError(Json.obj("error" -> e.getMessage)))) - case Success(response) => { - AttrsHelper.updateAttrs(ctx.attrs, response) - val error = response.select("error").asOpt[Boolean].getOrElse(false) - if (error) { - val body = 
BodyHelper.extractBodyFrom(response) - val headers: Map[String, String] = response - .select("headers") - .asOpt[Map[String, String]] - .getOrElse(Map("Content-Type" -> "application/json")) - val contentType = headers.getIgnoreCase("Content-Type").getOrElse("application/json") - Left( - NgPreRoutingErrorRaw( - code = response.select("status").asOpt[Int].getOrElse(200), - headers = headers, - contentType = contentType, - body = body - ) - ) - } else { - // TODO: handle attrs - Right(Done) + WasmVm.fromConfig(config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + ctx.request, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => NgPreRoutingErrorWithResult(r).left) + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("pre_route"), input.stringify), None) + .map { + case Left(err) => Left(NgPreRoutingErrorWithResult(Results.InternalServerError(err))) + case Right(resStr) => { + Try(Json.parse(resStr._1)) match { + case Failure(e) => + Left(NgPreRoutingErrorWithResult(Results.InternalServerError(Json.obj("error" -> e.getMessage)))) + case Success(response) => { + AttrsHelper.updateAttrs(ctx.attrs, response) + val error = response.select("error").asOpt[Boolean].getOrElse(false) + if (error) { + val body = BodyHelper.extractBodyFrom(response) + val headers: Map[String, String] = response + .select("headers") + .asOpt[Map[String, String]] + .getOrElse(Map("Content-Type" -> "application/json")) + val contentType = headers.getIgnoreCase("Content-Type").getOrElse("application/json") + Left( + NgPreRoutingErrorRaw( + code = response.select("status").asOpt[Int].getOrElse(200), + headers = headers, + contentType = contentType, + body = body + ) + ) + } else { + // TODO: handle attrs + Right(Done) + } + } + } } } - } - } + .andThen { + case _ => vm.release() + } } } } @@ -206,33 +230,52 @@ class 
WasmBackend extends NgBackendCall { .getOrElse(WasmConfig()) WasmUtils.debugLog.debug("callBackend") ctx.wasmJson - .flatMap(input => WasmUtils.execute(config, "call_backend", input, ctx.attrs.some, None)) - .map { - case Right(output) => - val response = - try { - Json.parse(output) - } catch { - case e: Exception => - logger.error("error during json parsing", e) - Json.obj() - } - AttrsHelper.updateAttrs(ctx.attrs, response) - val body = BodyHelper.extractBodyFrom(response) - bodyResponse( - status = response.select("status").asOpt[Int].getOrElse(200), - headers = response - .select("headers") - .asOpt[Map[String, String]] - .getOrElse(Map("Content-Type" -> "application/json")), - body = body.chunks(16 * 1024) - ) - case Left(value) => - bodyResponse( - status = 400, - headers = Map.empty, - body = Json.stringify(value).byteString.chunks(16 * 1024) - ) + .flatMap { input => + WasmVm.fromConfig(config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + ctx.rawRequest, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => NgProxyEngineError.NgResultProxyEngineError(r).left) + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("call_backend"), input.stringify), None) + .map { + case Right(output) => + val response = + try { + Json.parse(output._1) + } catch { + case e: Exception => + logger.error("error during json parsing", e) + Json.obj() + } + AttrsHelper.updateAttrs(ctx.attrs, response) + val body = BodyHelper.extractBodyFrom(response) + bodyResponse( + status = response.select("status").asOpt[Int].getOrElse(200), + headers = response + .select("headers") + .asOpt[Map[String, String]] + .getOrElse(Map("Content-Type" -> "application/json")), + body = body.chunks(16 * 1024) + ) + case Left(value) => + bodyResponse( + status = 400, + headers = Map.empty, + body = 
Json.stringify(value).byteString.chunks(16 * 1024) + ) + } + .andThen { + case _ => vm.release() + } + } } } } @@ -328,46 +371,101 @@ class WasmAccessValidator extends NgAccessValidator { override def defaultConfigObject: Option[NgPluginConfig] = WasmConfig().some override def access(ctx: NgAccessContext)(implicit env: Env, ec: ExecutionContext): Future[NgAccess] = { + val config = ctx .cachedConfig(internalName)(WasmConfig.format) .getOrElse(WasmConfig()) - WasmUtils - .execute(config, "access", ctx.wasmJson, ctx.attrs.some, None) - .flatMap { - case Right(res) => - val response = Json.parse(res) - AttrsHelper.updateAttrs(ctx.attrs, response) - val result = (response \ "result").asOpt[Boolean].getOrElse(false) - if (result) { - NgAccess.NgAllowed.vfuture - } else { - val error = (response \ "error").asOpt[JsObject].getOrElse(Json.obj()) - Errors - .craftResponseResult( - (error \ "message").asOpt[String].getOrElse("An error occured"), - Results.Status((error \ "status").asOpt[Int].getOrElse(403)), - ctx.request, - None, - None, - attrs = ctx.attrs, - maybeRoute = ctx.route.some - ) - .map(r => NgAccess.NgDenied(r)) + WasmVm.fromConfig(config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + ctx.request, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => NgAccess.NgDenied(r)) + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("access"), ctx.wasmJson.stringify), None) + .flatMap { + case Right(res) => + val response = Json.parse(res._1) + AttrsHelper.updateAttrs(ctx.attrs, response) + val result = (response \ "result").asOpt[Boolean].getOrElse(false) + if (result) { + NgAccess.NgAllowed.vfuture + } else { + val error = (response \ "error").asOpt[JsObject].getOrElse(Json.obj()) + Errors + .craftResponseResult( + (error \ "message").asOpt[String].getOrElse("An error occured"), + 
Results.Status((error \ "status").asOpt[Int].getOrElse(403)), + ctx.request, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => NgAccess.NgDenied(r)) + } + case Left(err) => + Errors + .craftResponseResult( + (err \ "error").asOpt[String].getOrElse("An error occured"), + Results.Status(400), + ctx.request, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => NgAccess.NgDenied(r)) } - case Left(err) => - Errors - .craftResponseResult( - (err \ "error").asOpt[String].getOrElse("An error occured"), - Results.Status(400), - ctx.request, - None, - None, - attrs = ctx.attrs, - maybeRoute = ctx.route.some - ) - .map(r => NgAccess.NgDenied(r)) - } + .andThen { + case _ => vm.release() + } + } + //} else { + // WasmUtils + // .execute(config, "access", ctx.wasmJson, ctx.attrs.some, None) + // .flatMap { + // case Right(res) => + // val response = Json.parse(res) + // AttrsHelper.updateAttrs(ctx.attrs, response) + // val result = (response \ "result").asOpt[Boolean].getOrElse(false) + // if (result) { + // NgAccess.NgAllowed.vfuture + // } else { + // val error = (response \ "error").asOpt[JsObject].getOrElse(Json.obj()) + // Errors + // .craftResponseResult( + // (error \ "message").asOpt[String].getOrElse("An error occured"), + // Results.Status((error \ "status").asOpt[Int].getOrElse(403)), + // ctx.request, + // None, + // None, + // attrs = ctx.attrs, + // maybeRoute = ctx.route.some + // ) + // .map(r => NgAccess.NgDenied(r)) + // } + // case Left(err) => + // Errors + // .craftResponseResult( + // (err \ "error").asOpt[String].getOrElse("An error occured"), + // Results.Status(400), + // ctx.request, + // None, + // None, + // attrs = ctx.attrs, + // maybeRoute = ctx.route.some + // ) + // .map(r => NgAccess.NgDenied(r)) + // } + //} } } @@ -396,40 +494,56 @@ class WasmRequestTransformer extends NgRequestTransformer { .getOrElse(WasmConfig()) ctx.wasmJson .flatMap(input => { - WasmUtils - .execute(config, 
"transform_request", input, ctx.attrs.some, None) - .map { - case Right(res) => - val response = Json.parse(res) - AttrsHelper.updateAttrs(ctx.attrs, response) - if (response.select("error").asOpt[Boolean].getOrElse(false)) { - val status = response.select("status").asOpt[Int].getOrElse(500) - val headers = (response \ "headers").asOpt[Map[String, String]].getOrElse(Map.empty) - val cookies = WasmUtils.convertJsonPlayCookies(response).getOrElse(Seq.empty) - val contentType = headers.getIgnoreCase("Content-Type").getOrElse("application/octet-stream") - val body = BodyHelper.extractBodyFrom(response) - Left( - Results - .Status(status)(body) - .withCookies(cookies: _*) - .withHeaders(headers.toSeq: _*) - .as(contentType) - ) - } else { - val body = BodyHelper.extractBodyFromOpt(response) - Right( - ctx.otoroshiRequest.copy( - // TODO: handle client cert chain and backend - method = (response \ "method").asOpt[String].getOrElse(ctx.otoroshiRequest.method), - url = (response \ "url").asOpt[String].getOrElse(ctx.otoroshiRequest.url), - headers = (response \ "headers").asOpt[Map[String, String]].getOrElse(ctx.otoroshiRequest.headers), - cookies = WasmUtils.convertJsonCookies(response).getOrElse(ctx.otoroshiRequest.cookies), - body = body.map(_.chunks(16 * 1024)).getOrElse(ctx.otoroshiRequest.body) - ) - ) + WasmVm.fromConfig(config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + ctx.request, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => Left(r)) + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("transform_request"), input.stringify), None) + .map { + case Right(res) => + val response = Json.parse(res._1) + AttrsHelper.updateAttrs(ctx.attrs, response) + if (response.select("error").asOpt[Boolean].getOrElse(false)) { + val status = response.select("status").asOpt[Int].getOrElse(500) 
+ val headers = (response \ "headers").asOpt[Map[String, String]].getOrElse(Map.empty) + val cookies = WasmUtils.convertJsonPlayCookies(response).getOrElse(Seq.empty) + val contentType = headers.getIgnoreCase("Content-Type").getOrElse("application/octet-stream") + val body = BodyHelper.extractBodyFrom(response) + Left( + Results + .Status(status)(body) + .withCookies(cookies: _*) + .withHeaders(headers.toSeq: _*) + .as(contentType) + ) + } else { + val body = BodyHelper.extractBodyFromOpt(response) + Right( + ctx.otoroshiRequest.copy( + // TODO: handle client cert chain and backend + method = (response \ "method").asOpt[String].getOrElse(ctx.otoroshiRequest.method), + url = (response \ "url").asOpt[String].getOrElse(ctx.otoroshiRequest.url), + headers = (response \ "headers").asOpt[Map[String, String]].getOrElse(ctx.otoroshiRequest.headers), + cookies = WasmUtils.convertJsonCookies(response).getOrElse(ctx.otoroshiRequest.cookies), + body = body.map(_.chunks(16 * 1024)).getOrElse(ctx.otoroshiRequest.body) + ) + ) + } + case Left(value) => Left(Results.BadRequest(value)) } - case Left(value) => Left(Results.BadRequest(value)) - } + .andThen { + case _ => vm.release() + } + } }) } } @@ -461,38 +575,54 @@ class WasmResponseTransformer extends NgRequestTransformer { .getOrElse(WasmConfig()) ctx.wasmJson .flatMap(input => { - WasmUtils - .execute(config, "transform_response", input, ctx.attrs.some, None) - .map { - case Right(res) => - val response = Json.parse(res) - AttrsHelper.updateAttrs(ctx.attrs, response) - if (response.select("error").asOpt[Boolean].getOrElse(false)) { - val status = response.select("status").asOpt[Int].getOrElse(500) - val headers = (response \ "headers").asOpt[Map[String, String]].getOrElse(Map.empty) - val cookies = WasmUtils.convertJsonPlayCookies(response).getOrElse(Seq.empty) - val contentType = headers.getIgnoreCase("Content-Type").getOrElse("application/octet-stream") - val body = BodyHelper.extractBodyFrom(response) - Left( - Results - 
.Status(status)(body) - .withCookies(cookies: _*) - .withHeaders(headers.toSeq: _*) - .as(contentType) - ) - } else { - val body = BodyHelper.extractBodyFromOpt(response) - ctx.otoroshiResponse - .copy( - headers = (response \ "headers").asOpt[Map[String, String]].getOrElse(ctx.otoroshiResponse.headers), - status = (response \ "status").asOpt[Int].getOrElse(200), - cookies = WasmUtils.convertJsonCookies(response).getOrElse(ctx.otoroshiResponse.cookies), - body = body.map(_.chunks(16 * 1024)).getOrElse(ctx.otoroshiResponse.body) - ) - .right + WasmVm.fromConfig(config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + ctx.request, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => Left(r)) + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("transform_response"), input.stringify), None) + .map { + case Right(res) => + val response = Json.parse(res._1) + AttrsHelper.updateAttrs(ctx.attrs, response) + if (response.select("error").asOpt[Boolean].getOrElse(false)) { + val status = response.select("status").asOpt[Int].getOrElse(500) + val headers = (response \ "headers").asOpt[Map[String, String]].getOrElse(Map.empty) + val cookies = WasmUtils.convertJsonPlayCookies(response).getOrElse(Seq.empty) + val contentType = headers.getIgnoreCase("Content-Type").getOrElse("application/octet-stream") + val body = BodyHelper.extractBodyFrom(response) + Left( + Results + .Status(status)(body) + .withCookies(cookies: _*) + .withHeaders(headers.toSeq: _*) + .as(contentType) + ) + } else { + val body = BodyHelper.extractBodyFromOpt(response) + ctx.otoroshiResponse + .copy( + headers = (response \ "headers").asOpt[Map[String, String]].getOrElse(ctx.otoroshiResponse.headers), + status = (response \ "status").asOpt[Int].getOrElse(200), + cookies = 
WasmUtils.convertJsonCookies(response).getOrElse(ctx.otoroshiResponse.cookies), + body = body.map(_.chunks(16 * 1024)).getOrElse(ctx.otoroshiResponse.body) + ) + .right + } + case Left(value) => Left(Results.BadRequest(value)) } - case Left(value) => Left(Results.BadRequest(value)) - } + .andThen { + case _ => vm.release() + } + } }) } } @@ -513,22 +643,22 @@ class WasmSink extends NgRequestSink { case JsSuccess(value, _) => value case JsError(_) => WasmConfig() } - val fu = WasmUtils - .execute( - config.copy(functionName = "sink_matches".some), - "matches", - ctx.wasmJson, - ctx.attrs.some, - None - ) - .map { - case Left(error) => false - case Right(res) => { - val response = Json.parse(res) - AttrsHelper.updateAttrs(ctx.attrs, response) - (response \ "result").asOpt[Boolean].getOrElse(false) - } - } + val fu = WasmVm.fromConfig(config).flatMap { + case None => false.vfuture + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("sink_matches", ctx.wasmJson.stringify), None) + .map { + case Left(error) => false + case Right(res) => { + val response = Json.parse(res._1) + AttrsHelper.updateAttrs(ctx.attrs, response) + (response \ "result").asOpt[Boolean].getOrElse(false) + } + } + .andThen { + case _ => vm.release() + } + } Await.result(fu, 10.seconds) } @@ -551,39 +681,55 @@ class WasmSink extends NgRequestSink { } requestToWasmJson(ctx.body).flatMap { body => val input = ctx.wasmJson.asObject ++ Json.obj("body_bytes" -> body) - WasmUtils - .execute(config, "sink_handle", input, ctx.attrs.some, None) - .map { - case Left(error) => Results.InternalServerError(error) - case Right(res) => { - val response = Json.parse(res) - AttrsHelper.updateAttrs(ctx.attrs, response) - val status = response - .select("status") - .asOpt[Int] - .getOrElse(200) - - val _headers = response - .select("headers") - .asOpt[Map[String, String]] - .getOrElse(Map("Content-Type" -> "application/json")) - - val contentType = _headers - .get("Content-Type") - 
.orElse(_headers.get("content-type")) - .getOrElse("application/json") - - val headers = _headers - .filterNot(_._1.toLowerCase() == "content-type") - val body = BodyHelper.extractBodyFrom(response) - - Results - .Status(status)(body) - .withHeaders(headers.toSeq: _*) - .as(contentType) - } - } + WasmVm.fromConfig(config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + ctx.request, + None, + None, + maybeRoute = None, + attrs = ctx.attrs + ) + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("sink_handle"), input.stringify), None) + .map { + case Left(error) => Results.InternalServerError(error) + case Right(res) => { + val response = Json.parse(res._1) + AttrsHelper.updateAttrs(ctx.attrs, response) + val status = response + .select("status") + .asOpt[Int] + .getOrElse(200) + + val _headers = response + .select("headers") + .asOpt[Map[String, String]] + .getOrElse(Map("Content-Type" -> "application/json")) + + val contentType = _headers + .get("Content-Type") + .orElse(_headers.get("content-type")) + .getOrElse("application/json") + + val headers = _headers + .filterNot(_._1.toLowerCase() == "content-type") + + val body = BodyHelper.extractBodyFrom(response) + + Results + .Status(status)(body) + .withHeaders(headers.toSeq: _*) + .as(contentType) + } + } + .andThen { + case _ => vm.release() + } + } } } } @@ -654,37 +800,52 @@ class WasmRequestHandler extends RequestHandler { case Some(config) => { requestToWasmJson(request).flatMap { json => val fakeCtx = FakeWasmContext(configJson) - WasmUtils - .execute(config, "handle_request", Json.obj("request" -> json), None, None) - .flatMap { - case Right(ok) => { - val response = Json.parse(ok) - val headers: Map[String, String] = - response.select("headers").asOpt[Map[String, String]].getOrElse(Map.empty) - val contentLength: Option[Long] = 
headers.getIgnoreCase("Content-Length").map(_.toLong) - val contentType: Option[String] = headers.getIgnoreCase("Content-Type") - val status: Int = (response \ "status").asOpt[Int].getOrElse(200) - val cookies: Seq[WSCookie] = WasmUtils.convertJsonCookies(response).getOrElse(Seq.empty) - val body: Source[ByteString, _] = - response.select("body").asOpt[String].map(b => ByteString(b)) match { - case None => ByteString.empty.singleSource - case Some(b) => Source.single(b) + WasmVm.fromConfig(config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + request, + None, + None, + maybeRoute = None, + attrs = TypedMap.empty + ) + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("handle_request"), Json.obj("request" -> json).stringify), None) + .flatMap { + case Right(ok) => { + val response = Json.parse(ok._1) + val headers: Map[String, String] = + response.select("headers").asOpt[Map[String, String]].getOrElse(Map.empty) + val contentLength: Option[Long] = headers.getIgnoreCase("Content-Length").map(_.toLong) + val contentType: Option[String] = headers.getIgnoreCase("Content-Type") + val status: Int = (response \ "status").asOpt[Int].getOrElse(200) + val cookies: Seq[WSCookie] = WasmUtils.convertJsonCookies(response).getOrElse(Seq.empty) + val body: Source[ByteString, _] = + response.select("body").asOpt[String].map(b => ByteString(b)) match { + case None => ByteString.empty.singleSource + case Some(b) => Source.single(b) + } + Results + .Status(status) + .sendEntity( + HttpEntity.Streamed( + data = body, + contentLength = contentLength, + contentType = contentType + ) + ) + .withHeaders(headers.toSeq: _*) + .withCookies(cookies.map(_.toCookie): _*) + .vfuture } - Results - .Status(status) - .sendEntity( - HttpEntity.Streamed( - data = body, - contentLength = contentLength, - contentType = contentType - ) - ) - 
.withHeaders(headers.toSeq: _*) - .withCookies(cookies.map(_.toCookie): _*) - .vfuture - } - case Left(bad) => Results.InternalServerError(bad).vfuture - } + case Left(bad) => Results.InternalServerError(bad).vfuture + } + .andThen { + case _ => vm.release() + } + } } } } @@ -747,18 +908,18 @@ class WasmJob(config: WasmJobsConfig) extends Job { None // TODO: make it configurable base on global env ??? override def jobStart(ctx: JobContext)(implicit env: Env, ec: ExecutionContext): Future[Unit] = Try { - WasmUtils - .execute( - config.config.copy(functionName = "job_start".some), - "job_start", - ctx.wasmJson, - attrs.some, - None - ) - .map { - case Left(err) => logger.error(s"error while starting wasm job ${config.uniqueId}: ${err.stringify}") - case Right(_) => () - } + WasmVm.fromConfig(config.config).flatMap { + case None => Future.failed(new RuntimeException("no plugin found")) + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("job_start", ctx.wasmJson.stringify), None) + .map { + case Left(err) => logger.error(s"error while starting wasm job ${config.uniqueId}: ${err.stringify}") + case Right(_) => () + } + .andThen { + case _ => vm.release() + } + } } match { case Failure(e) => logger.error("error during wasm job start", e) @@ -766,31 +927,37 @@ class WasmJob(config: WasmJobsConfig) extends Job { case Success(s) => s } override def jobStop(ctx: JobContext)(implicit env: Env, ec: ExecutionContext): Future[Unit] = Try { - WasmUtils - .execute( - config.config.copy(functionName = "job_stop".some), - "job_stop", - ctx.wasmJson, - attrs.some, - None - ) - .map { - case Left(err) => logger.error(s"error while stopping wasm job ${config.uniqueId}: ${err.stringify}") - case Right(_) => () - } + WasmVm.fromConfig(config.config).flatMap { + case None => Future.failed(new RuntimeException("no plugin found")) + case Some((vm, _)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall("job_stop", ctx.wasmJson.stringify), None) + .map { + case 
Left(err) => logger.error(s"error while stopping wasm job ${config.uniqueId}: ${err.stringify}") + case Right(_) => () + } + .andThen { + case _ => vm.release() + } + } } match { case Failure(e) => logger.error("error during wasm job stop", e) funit case Success(s) => s } - override def jobRun(ctx: JobContext)(implicit env: Env, ec: ExecutionContext): Future[Unit] = Try { - WasmUtils - .execute(config.config, "job_run", ctx.wasmJson, attrs.some, None) - .map { - case Left(err) => logger.error(s"error while running wasm job ${config.uniqueId}: ${err.stringify}") - case Right(_) => () - } + override def jobRun(ctx: JobContext)(implicit env: Env, ec: ExecutionContext): Future[Unit] = Try { + WasmVm.fromConfig(config.config).flatMap { + case None => Future.failed(new RuntimeException("no plugin found")) + case Some((vm, localConfig)) => + vm.call(WasmFunctionParameters.ExtismFuntionCall(config.config.functionName.orElse(localConfig.functionName).getOrElse("job_run"), ctx.wasmJson.stringify), None) + .map { + case Left(err) => logger.error(s"error while running wasm job ${config.uniqueId}: ${err.stringify}") + case Right(_) => () + } + .andThen { + case _ => vm.release() + } + } } match { case Failure(e) => logger.error("error during wasm job run", e) @@ -819,7 +986,7 @@ class WasmJobsLauncher extends Job { override def cronExpression(ctx: JobContext, env: Env): Option[String] = None override def predicate(ctx: JobContext, env: Env): Option[Boolean] = None - private val handledJobs = new TrieMap[String, Job]() + private val handledJobs = new UnboundedTrieMap[String, Job]() override def jobRun(ctx: JobContext)(implicit env: Env, ec: ExecutionContext): Future[Unit] = Try { val globalConfig = env.datastores.globalConfigDataStore.latest() @@ -877,48 +1044,73 @@ class WasmOPA extends NgAccessValidator { opa = true ).some - override def access(ctx: NgAccessContext)(implicit env: Env, ec: ExecutionContext): Future[NgAccess] = { - val config = ctx - 
.cachedConfig(internalName)(WasmConfig.format) - .getOrElse(WasmConfig()) + private def onError(error: String, ctx: NgAccessContext, status: Option[Int] = Some(400))(implicit env: Env, ec: ExecutionContext) = Errors + .craftResponseResult( + error, + Results.Status(status.get), + ctx.request, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => NgAccess.NgDenied(r)) - WasmUtils - .execute(config, "access", ctx.wasmJson, ctx.attrs.some, None) + private def execute(vm: WasmVm, ctx: NgAccessContext)(implicit env: Env, ec: ExecutionContext) = { + vm.call(WasmFunctionParameters.OPACall("execute", vm.opaPointers, ctx.wasmJson.stringify), None) .flatMap { - case Right(res) => - val response = Json.parse(res) - AttrsHelper.updateAttrs(ctx.attrs, response) - val result = response.asOpt[JsArray].getOrElse(Json.arr()) + case Right((rawResult, _)) => + val response = Json.parse(rawResult) + val result = response.asOpt[JsArray].getOrElse(Json.arr()) val canAccess = (result.value.head \ "result").asOpt[Boolean].getOrElse(false) - if (canAccess) { + if (canAccess) NgAccess.NgAllowed.vfuture - } else { - Errors - .craftResponseResult( - "Forbidden access", - Results.Status(403), - ctx.request, - None, - None, - attrs = ctx.attrs, - maybeRoute = ctx.route.some - ) - .map(r => NgAccess.NgDenied(r)) - } - case Left(err) => - Errors - .craftResponseResult( - (err \ "error").asOpt[String].getOrElse("An error occured"), - Results.Status(400), - ctx.request, - None, - None, - attrs = ctx.attrs, - maybeRoute = ctx.route.some - ) - .map(r => NgAccess.NgDenied(r)) + else + onError("Forbidden access", ctx, 403.some) + case Left(err) => onError((err \ "error").asOpt[String].getOrElse("An error occured"), ctx) + } + .andThen { + case _ => vm.release() } } + + override def access(ctx: NgAccessContext)(implicit env: Env, ec: ExecutionContext): Future[NgAccess] = { + val config = ctx + .cachedConfig(internalName)(WasmConfig.format) + .getOrElse(WasmConfig()) + + 
WasmVm.fromConfig(config).flatMap { + case None => Errors + .craftResponseResult( + "plugin not found !", + Results.Status(500), + ctx.request, + None, + None, + attrs = ctx.attrs, + maybeRoute = ctx.route.some + ) + .map(r => NgAccess.NgDenied(r)) + case Some((vm, localConfig)) => + if (!vm.initialized()) { + vm.call(WasmFunctionParameters.OPACall("initialize", in = ctx.wasmJson.stringify), None) + .flatMap { + case Left(error) => onError(error.stringify, ctx) + case Right(value) => + vm.initialize { + val pointers = Json.parse(value._1) + vm.opaPointers = OPAWasmVm( + opaDataAddr = (pointers \ "dataAddr").as[Int], + opaBaseHeapPtr = (pointers \ "baseHeapPtr").as[Int] + ).some + } + execute(vm, ctx) + } + } else { + execute(vm, ctx) + } + } + } } class WasmRouter extends NgRouter { @@ -937,11 +1129,20 @@ class WasmRouter extends NgRouter { override def findRoute(ctx: NgRouterContext)(implicit env: Env, ec: ExecutionContext): Option[NgMatchedRoute] = { val config = WasmConfig.format.reads(ctx.config).getOrElse(WasmConfig()) Await.result( - WasmUtils.execute(config, "find_route", ctx.json, ctx.attrs.some, None), + WasmVm.fromConfig(config).flatMap { + case None => Left(Json.obj("error" -> "plugin not found")).vfuture + case Some((vm, localConfig)) => + val ret = vm.call(WasmFunctionParameters.ExtismFuntionCall(config.functionName.orElse(localConfig.functionName).getOrElse("find_route"), ctx.json.stringify), None) + .andThen { + case _ => vm.release() + } + vm.release() + ret + }, 3.seconds ) match { case Right(res) => - val response = Json.parse(res) + val response = Json.parse(res._1) AttrsHelper.updateAttrs(ctx.attrs, response) Try { NgMatchedRoute( diff --git a/otoroshi/app/next/proxy/state.scala b/otoroshi/app/next/proxy/state.scala index 15af5a2e67..9fc547d7e3 100644 --- a/otoroshi/app/next/proxy/state.scala +++ b/otoroshi/app/next/proxy/state.scala @@ -12,7 +12,7 @@ import otoroshi.script._ import otoroshi.ssl.{Cert, DynamicSSLEngineProvider} import 
otoroshi.tcp.TcpService import otoroshi.utils.TypedMap -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits._ import play.api.Logger import play.api.libs.json._ @@ -33,26 +33,26 @@ class NgProxyState(env: Env) { .newBuilder[String, NgRoute] .result() - private val raw_routes = new LegitTrieMap[String, NgRoute]() - private val apikeys = new LegitTrieMap[String, ApiKey]() - private val backends = new LegitTrieMap[String, NgBackend]() - private val ngroutecompositions = new LegitTrieMap[String, NgRouteComposition]() - private val ngbackends = new LegitTrieMap[String, StoredNgBackend]() - private val jwtVerifiers = new LegitTrieMap[String, GlobalJwtVerifier]() - private val certificates = new LegitTrieMap[String, Cert]() - private val authModules = new LegitTrieMap[String, AuthModuleConfig]() - private val errorTemplates = new LegitTrieMap[String, ErrorTemplate]() - private val services = new LegitTrieMap[String, ServiceDescriptor]() - private val teams = new LegitTrieMap[String, Team]() - private val tenants = new LegitTrieMap[String, Tenant]() - private val serviceGroups = new LegitTrieMap[String, ServiceGroup]() - private val dataExporters = new LegitTrieMap[String, DataExporterConfig]() - private val otoroshiAdmins = new LegitTrieMap[String, OtoroshiAdmin]() - private val backofficeSessions = new LegitTrieMap[String, BackOfficeUser]() - private val privateAppsSessions = new LegitTrieMap[String, PrivateAppsUser]() - private val tcpServices = new LegitTrieMap[String, TcpService]() - private val scripts = new LegitTrieMap[String, Script]() - private val wasmPlugins = new LegitTrieMap[String, WasmPlugin]() + private val raw_routes = new UnboundedTrieMap[String, NgRoute]() + private val apikeys = new UnboundedTrieMap[String, ApiKey]() + private val backends = new UnboundedTrieMap[String, NgBackend]() + private val ngroutecompositions = new UnboundedTrieMap[String, NgRouteComposition]() 
+ private val ngbackends = new UnboundedTrieMap[String, StoredNgBackend]() + private val jwtVerifiers = new UnboundedTrieMap[String, GlobalJwtVerifier]() + private val certificates = new UnboundedTrieMap[String, Cert]() + private val authModules = new UnboundedTrieMap[String, AuthModuleConfig]() + private val errorTemplates = new UnboundedTrieMap[String, ErrorTemplate]() + private val services = new UnboundedTrieMap[String, ServiceDescriptor]() + private val teams = new UnboundedTrieMap[String, Team]() + private val tenants = new UnboundedTrieMap[String, Tenant]() + private val serviceGroups = new UnboundedTrieMap[String, ServiceGroup]() + private val dataExporters = new UnboundedTrieMap[String, DataExporterConfig]() + private val otoroshiAdmins = new UnboundedTrieMap[String, OtoroshiAdmin]() + private val backofficeSessions = new UnboundedTrieMap[String, BackOfficeUser]() + private val privateAppsSessions = new UnboundedTrieMap[String, PrivateAppsUser]() + private val tcpServices = new UnboundedTrieMap[String, TcpService]() + private val scripts = new UnboundedTrieMap[String, Script]() + private val wasmPlugins = new UnboundedTrieMap[String, WasmPlugin]() private val tryItEnabledReports = Scaffeine() .expireAfterWrite(5.minutes) .maximumSize(100) @@ -62,7 +62,7 @@ class NgProxyState(env: Env) { .maximumSize(100) .build[String, NgExecutionReport]() - private val routesByDomain = new LegitTrieMap[String, Seq[NgRoute]]() + private val routesByDomain = new UnboundedTrieMap[String, Seq[NgRoute]]() private val domainPathTreeRef = new AtomicReference[NgTreeRouter](NgTreeRouter.empty) def enableReportFor(id: String): Unit = { diff --git a/otoroshi/app/next/tunnel/tunnel.scala b/otoroshi/app/next/tunnel/tunnel.scala index 42c5f7a3cf..9ee0638e43 100644 --- a/otoroshi/app/next/tunnel/tunnel.scala +++ b/otoroshi/app/next/tunnel/tunnel.scala @@ -38,7 +38,7 @@ import scala.util.{Failure, Success, Try} import otoroshi.cluster.ClusterConfig import 
play.api.libs.ws.DefaultWSProxyServer import akka.http.scaladsl.ClientTransport -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap case class TunnelPluginConfig(tunnelId: String) extends NgPluginConfig { override def json: JsValue = Json.obj("tunnel_id" -> tunnelId) @@ -625,7 +625,7 @@ class TunnelManager(env: Env) { } } - private val leaderConnections = new LegitTrieMap[String, LeaderConnection]() + private val leaderConnections = new UnboundedTrieMap[String, LeaderConnection]() private def forwardRequestWs( tunnelId: String, @@ -695,7 +695,7 @@ class LeaderConnection( q } private val source: Source[akka.http.scaladsl.model.ws.Message, _] = pushSource.merge(pingSource) - private val awaitingResponse = new LegitTrieMap[String, Promise[Result]]() + private val awaitingResponse = new UnboundedTrieMap[String, Promise[Result]]() def close(): Unit = { unregister(this) @@ -1185,7 +1185,7 @@ class TunnelActor( private val logger = Logger(s"otoroshi-tunnel-actor") // private val counter = new AtomicLong(0L) - private val awaitingResponse = new LegitTrieMap[String, Promise[Result]]() + private val awaitingResponse = new UnboundedTrieMap[String, Promise[Result]]() private def closeTunnel(): Unit = { awaitingResponse.values.map(p => p.trySuccess(Results.InternalServerError(Json.obj("error" -> "tunnel closed !")))) diff --git a/otoroshi/app/next/utils/vault.scala b/otoroshi/app/next/utils/vault.scala index 35ebc5ac72..5fc81e72ab 100644 --- a/otoroshi/app/next/utils/vault.scala +++ b/otoroshi/app/next/utils/vault.scala @@ -16,7 +16,7 @@ import otoroshi.plugins.jobs.kubernetes.{KubernetesClient, KubernetesConfig} import otoroshi.ssl.SSLImplicits._ import otoroshi.utils.ReplaceAllWith import otoroshi.utils.cache.Caches -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.crypto.Signatures import otoroshi.utils.syntax.implicits._ import play.api.libs.json._ @@ 
-859,7 +859,7 @@ class Vaults(env: Env) { private val cache = Caches.bounded[String, CachedVaultSecret](cachedSecrets.toInt) // Scaffeine().expireAfterWrite(secretsTtl).maximumSize(cachedSecrets).build[String, CachedVaultSecret]() private val expressionReplacer = ReplaceAllWith("\\$\\{vault://([^}]*)\\}") - private val vaults: TrieMap[String, Vault] = new LegitTrieMap[String, Vault]() + private val vaults: TrieMap[String, Vault] = new UnboundedTrieMap[String, Vault]() private implicit val _env = env private implicit val ec = env.otoroshiExecutionContext diff --git a/otoroshi/app/openapi/ClassGraphScanner.scala b/otoroshi/app/openapi/ClassGraphScanner.scala index 39d9d366a7..5ae1859434 100644 --- a/otoroshi/app/openapi/ClassGraphScanner.scala +++ b/otoroshi/app/openapi/ClassGraphScanner.scala @@ -2,7 +2,7 @@ package otoroshi.openapi import io.github.classgraph.{ClassGraph, ScanResult} import otoroshi.env.Env -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits.BetterJsValue import play.api.Logger import play.api.libs.json.{JsObject, JsValue, Json} @@ -92,13 +92,13 @@ class ClassGraphScanner { val flattenedOpenapiSchema = { val jsonRaw = new String(openapiflatres.readAllBytes(), StandardCharsets.UTF_8) val obj = Json.parse(jsonRaw).as[JsObject] - val map = new LegitTrieMap[String, JsValue]() + val map = new UnboundedTrieMap[String, JsValue]() map.++=(obj.value) } val asForms = { val jsonRaw = new String(openapiformres.readAllBytes(), StandardCharsets.UTF_8) val obj = Json.parse(jsonRaw).as[JsObject] - val map = new LegitTrieMap[String, Form]() + val map = new UnboundedTrieMap[String, Form]() map.++=(obj.value.mapValues(Form.fromJson)).toMap } OpenApiSchema( diff --git a/otoroshi/app/openapi/CrdsGenerator.scala b/otoroshi/app/openapi/CrdsGenerator.scala index 2f8de0f5b0..b22d02344c 100644 --- a/otoroshi/app/openapi/CrdsGenerator.scala +++ b/otoroshi/app/openapi/CrdsGenerator.scala @@ 
-1,6 +1,6 @@ package otoroshi.openapi -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits._ import otoroshi.utils.yaml.Yaml.write import play.api.libs.json._ @@ -154,7 +154,7 @@ class CrdsGenerator(spec: JsValue = Json.obj()) { } def restrictResultAtCrdsEntities(data: TrieMap[String, JsValue]): TrieMap[String, JsValue] = { - val out = new LegitTrieMap[String, JsValue]() + val out = new UnboundedTrieMap[String, JsValue]() val schemas = (spec \ "components" \ "schemas").as[JsObject] crdsEntities.fields.foreach(curr => { diff --git a/otoroshi/app/openapi/OpenapiToJson.scala b/otoroshi/app/openapi/OpenapiToJson.scala index 873f43bf46..bb284786b2 100644 --- a/otoroshi/app/openapi/OpenapiToJson.scala +++ b/otoroshi/app/openapi/OpenapiToJson.scala @@ -1,6 +1,6 @@ package otoroshi.openapi -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits._ import play.api.Logger import play.api.libs.json._ @@ -22,7 +22,7 @@ class OpenapiToJson(spec: JsValue) { def extractSchemasFromOpenapi() = { val schemas = (spec \ "components" \ "schemas").as[JsObject] - val data = new LegitTrieMap[String, JsValue]() + val data = new UnboundedTrieMap[String, JsValue]() schemas.fields.foreach(curr => data.put(curr._1, curr._2)) data } diff --git a/otoroshi/app/openapi/openapi.scala b/otoroshi/app/openapi/openapi.scala index 29286d0557..a380196880 100644 --- a/otoroshi/app/openapi/openapi.scala +++ b/otoroshi/app/openapi/openapi.scala @@ -5,7 +5,7 @@ import io.github.classgraph._ import otoroshi.env.Env import otoroshi.models.Entity import otoroshi.utils.RegexPool -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits._ import play.api.Logger import play.api.libs.json._ @@ -173,7 +173,7 @@ class OpenApiGenerator( .values).toSeq.distinct var adts = 
Seq.empty[JsObject] - val foundDescriptions = new LegitTrieMap[String, String]() + val foundDescriptions = new UnboundedTrieMap[String, String]() val found = new AtomicLong(0L) val notFound = new AtomicLong(0L) val resFound = new AtomicLong(0L) @@ -1062,7 +1062,7 @@ class OpenApiGenerator( def runAndMaybeWrite(): (JsValue, Boolean) = { val config = getConfig() - val result = new LegitTrieMap[String, JsValue]() + val result = new UnboundedTrieMap[String, JsValue]() config.add_schemas.value.map { case (key, value) => result.put(key, value) @@ -1161,7 +1161,7 @@ class OpenApiGenerator( val filteredEntities = result.filter(p => returnEntities.contains(p._1)) - val openApiEntities = new LegitTrieMap[String, JsValue]() + val openApiEntities = new UnboundedTrieMap[String, JsValue]() filteredEntities.foreach(ent => { discoverEntities(result, ent, openApiEntities) }) @@ -1246,8 +1246,8 @@ class OpenApiGenerator( val f = new File(oldSpecPath) if (f.exists()) { val oldSpec = Json.parse(Files.readAllLines(f.toPath).asScala.mkString("\n")) - val descriptions = new LegitTrieMap[String, String]() - val examples = new LegitTrieMap[String, String]() + val descriptions = new UnboundedTrieMap[String, String]() + val examples = new UnboundedTrieMap[String, String]() oldSpec.select("components").select("schemas").asObject.value.map { case (key, value) => { val path = s"old.${key}" diff --git a/otoroshi/app/plugins/accesslog.scala b/otoroshi/app/plugins/accesslog.scala index 3a361856aa..25596f0e56 100644 --- a/otoroshi/app/plugins/accesslog.scala +++ b/otoroshi/app/plugins/accesslog.scala @@ -10,7 +10,7 @@ import otoroshi.events.KafkaWrapper import otoroshi.next.plugins.api.{NgPluginCategory, NgPluginVisibility, NgStep} import otoroshi.script.{HttpResponse, RequestTransformer, TransformerErrorContext, TransformerResponseContext} import otoroshi.utils.RegexPool -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import play.api.Logger 
import play.api.libs.json._ import play.api.mvc.Result @@ -480,7 +480,7 @@ class KafkaAccessLog extends RequestTransformer { private val logger = Logger("otoroshi-plugins-kafka-access-log") - private val kafkaWrapperCache = new LegitTrieMap[String, KafkaWrapper] + private val kafkaWrapperCache = new UnboundedTrieMap[String, KafkaWrapper] override def name: String = "Kafka access log" diff --git a/otoroshi/app/plugins/apikeys.scala b/otoroshi/app/plugins/apikeys.scala index a213ebf4b9..f5f29b8796 100644 --- a/otoroshi/app/plugins/apikeys.scala +++ b/otoroshi/app/plugins/apikeys.scala @@ -21,7 +21,7 @@ import otoroshi.utils.JsonPathUtils import otoroshi.script._ import otoroshi.security.{IdGenerator, OtoroshiClaim} import otoroshi.ssl.{Cert, DynamicSSLEngineProvider} -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.crypto.Signatures import otoroshi.utils.http.DN import otoroshi.utils.jwk.JWKSHelper @@ -363,7 +363,7 @@ class ClientCredentialFlow extends RequestTransformer { override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.AccessControl) override def steps: Seq[NgStep] = Seq(NgStep.TransformRequest) - private val awaitingRequests = new LegitTrieMap[String, Promise[Source[ByteString, _]]]() + private val awaitingRequests = new UnboundedTrieMap[String, Promise[Source[ByteString, _]]]() override def beforeRequest( ctx: BeforeRequestContext diff --git a/otoroshi/app/plugins/clientcert.scala b/otoroshi/app/plugins/clientcert.scala index 1f31ed6867..b421e6a35a 100644 --- a/otoroshi/app/plugins/clientcert.scala +++ b/otoroshi/app/plugins/clientcert.scala @@ -7,7 +7,7 @@ import otoroshi.env.Env import otoroshi.next.plugins.api.{NgPluginCategory, NgPluginVisibility, NgStep} import otoroshi.script._ import otoroshi.utils.RegexPool -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.{DN, MtlsConfig} import 
play.api.libs.json._ import play.api.mvc.Result @@ -248,7 +248,7 @@ class HasClientCertMatchingHttpValidator extends AccessValidator { override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.AccessControl) override def steps: Seq[NgStep] = Seq(NgStep.ValidateAccess) - private val cache = new LegitTrieMap[String, (Long, JsValue)] + private val cache = new UnboundedTrieMap[String, (Long, JsValue)] private def validate(certs: Seq[X509Certificate], values: JsValue): Boolean = { val allowedSerialNumbers = diff --git a/otoroshi/app/plugins/discovery.scala b/otoroshi/app/plugins/discovery.scala index 329a3ec14d..219dcfcbb3 100644 --- a/otoroshi/app/plugins/discovery.scala +++ b/otoroshi/app/plugins/discovery.scala @@ -8,7 +8,7 @@ import otoroshi.models.Target import otoroshi.next.plugins.api.{NgPluginCategory, NgPluginVisibility, NgStep} import otoroshi.script._ import otoroshi.security.IdGenerator -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.RequestImplicits._ import otoroshi.utils.syntax.implicits._ import play.api.libs.json._ @@ -241,7 +241,7 @@ class DiscoverySelfRegistrationTransformer extends RequestTransformer { import kaleidoscope._ - private val awaitingRequests = new LegitTrieMap[String, Promise[Source[ByteString, _]]]() + private val awaitingRequests = new UnboundedTrieMap[String, Promise[Source[ByteString, _]]]() override def name: String = "Self registration endpoints (service discovery)" diff --git a/otoroshi/app/plugins/envoy.scala b/otoroshi/app/plugins/envoy.scala index 4f4b5b2fc4..cabd7bbcec 100644 --- a/otoroshi/app/plugins/envoy.scala +++ b/otoroshi/app/plugins/envoy.scala @@ -20,7 +20,7 @@ import play.api.libs.json.{JsArray, JsNull, JsObject, JsString, JsValue, Json} import play.api.mvc.{Result, Results} import otoroshi.utils.syntax.implicits._ import otoroshi.ssl.Cert -import otoroshi.utils.cache.types.LegitTrieMap +import 
otoroshi.utils.cache.types.UnboundedTrieMap import scala.collection.concurrent.TrieMap import scala.concurrent.{ExecutionContext, Future, Promise} @@ -30,7 +30,7 @@ class EnvoyControlPlane extends RequestTransformer { override def deprecated: Boolean = true - private val awaitingRequests = new LegitTrieMap[String, Promise[Source[ByteString, _]]]() + private val awaitingRequests = new UnboundedTrieMap[String, Promise[Source[ByteString, _]]]() override def name: String = "[DEPRECATED] Envoy Control Plane" diff --git a/otoroshi/app/plugins/geoloc.scala b/otoroshi/app/plugins/geoloc.scala index 7b9a5a9143..9656929fdd 100644 --- a/otoroshi/app/plugins/geoloc.scala +++ b/otoroshi/app/plugins/geoloc.scala @@ -13,7 +13,7 @@ import otoroshi.next.plugins.api.{NgPluginCategory, NgPluginVisibility, NgStep} import otoroshi.plugins.Keys import otoroshi.script._ import otoroshi.utils.cache.Caches -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import play.api.Logger import play.api.libs.json.{JsNumber, JsObject, JsValue, Json} import play.api.mvc.{Result, Results} @@ -286,7 +286,7 @@ object MaxMindGeolocationHelper { private val logger = Logger("otoroshi-plugins-maxmind-geolocation-helper") private val ipCache = Caches.bounded[String, InetAddress](10000) private val cache = Caches.bounded[String, Option[JsValue]](10000) - private val dbs = new LegitTrieMap[String, (AtomicReference[DatabaseReader], AtomicBoolean, AtomicBoolean)]() + private val dbs = new UnboundedTrieMap[String, (AtomicReference[DatabaseReader], AtomicBoolean, AtomicBoolean)]() private val exc = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors() + 1)) diff --git a/otoroshi/app/plugins/izanami.scala b/otoroshi/app/plugins/izanami.scala index da81c43e96..8cbe96456b 100644 --- a/otoroshi/app/plugins/izanami.scala +++ b/otoroshi/app/plugins/izanami.scala @@ -24,7 +24,7 @@ import play.api.mvc.{Cookie, RequestHeader, 
Result, Results} import otoroshi.utils.syntax.implicits._ import play.api.libs.ws.{DefaultWSCookie, WSAuthScheme, WSCookie} import otoroshi.security.IdGenerator -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.RequestImplicits._ import otoroshi.utils.http.{MtlsConfig, WSCookieWithSameSite} import otoroshi.utils.http.WSCookieWithSameSite @@ -135,7 +135,7 @@ class IzanamiProxy extends RequestTransformer { override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.Integrations) override def steps: Seq[NgStep] = Seq(NgStep.TransformRequest) - private val awaitingRequests = new LegitTrieMap[String, Promise[Source[ByteString, _]]]() + private val awaitingRequests = new UnboundedTrieMap[String, Promise[Source[ByteString, _]]]() override def beforeRequest( ctx: BeforeRequestContext @@ -335,7 +335,7 @@ object IzanamiCanaryRoutingConfig { // MIGRATED class IzanamiCanary extends RequestTransformer { - private val cookieJar = new LegitTrieMap[String, WSCookie]() + private val cookieJar = new UnboundedTrieMap[String, WSCookie]() private val cache: Cache[String, JsValue] = Scaffeine() .recordStats() diff --git a/otoroshi/app/plugins/mirror.scala b/otoroshi/app/plugins/mirror.scala index a2566c8692..7b9bd88d84 100644 --- a/otoroshi/app/plugins/mirror.scala +++ b/otoroshi/app/plugins/mirror.scala @@ -12,7 +12,7 @@ import org.joda.time.DateTime import otoroshi.next.plugins.api.{NgPluginCategory, NgPluginVisibility, NgStep} import otoroshi.script._ import otoroshi.utils.UrlSanitizer -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.HeadersHelper import otoroshi.utils.syntax.implicits._ import play.api.libs.json._ @@ -257,7 +257,7 @@ class MirroringPlugin extends RequestTransformer { override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.Other) override def steps: Seq[NgStep] = Seq(NgStep.TransformRequest, 
NgStep.TransformResponse) - private val inFlightRequests = new LegitTrieMap[String, RequestContext]() + private val inFlightRequests = new UnboundedTrieMap[String, RequestContext]() override def afterRequest( ctx: AfterRequestContext diff --git a/otoroshi/app/plugins/oidc.scala b/otoroshi/app/plugins/oidc.scala index 5903f294ca..7adc32dfb5 100644 --- a/otoroshi/app/plugins/oidc.scala +++ b/otoroshi/app/plugins/oidc.scala @@ -22,7 +22,7 @@ import play.api.libs.ws.WSAuthScheme import play.api.mvc.Results.TooManyRequests import play.api.mvc.{RequestHeader, Result, Results} import otoroshi.security.IdGenerator -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import scala.collection.concurrent.TrieMap import scala.concurrent.duration.Duration @@ -775,7 +775,7 @@ object OIDCThirdPartyApiKeyConfig { lazy val logger = Logger("otoroshi-oidc-apikey-config") - val cache: TrieMap[String, (Long, Boolean)] = new LegitTrieMap[String, (Long, Boolean)]() + val cache: TrieMap[String, (Long, Boolean)] = new UnboundedTrieMap[String, (Long, Boolean)]() implicit val format = new Format[OIDCThirdPartyApiKeyConfig] { diff --git a/otoroshi/app/plugins/workflow.scala b/otoroshi/app/plugins/workflow.scala index 8ec069d07a..87ccb3d681 100644 --- a/otoroshi/app/plugins/workflow.scala +++ b/otoroshi/app/plugins/workflow.scala @@ -7,7 +7,7 @@ import akka.util.ByteString import otoroshi.env.Env import otoroshi.next.plugins.api.{NgPluginCategory, NgPluginVisibility, NgStep} import otoroshi.script._ -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits._ import otoroshi.utils.workflow.{WorkFlow, WorkFlowRequest, WorkFlowSpec} import play.api.Logger @@ -109,7 +109,7 @@ class WorkflowJob extends Job { class WorkflowEndpoint extends RequestTransformer { - private val awaitingRequests = new LegitTrieMap[String, Promise[Source[ByteString, _]]]() + private val 
awaitingRequests = new UnboundedTrieMap[String, Promise[Source[ByteString, _]]]() override def beforeRequest( ctx: BeforeRequestContext diff --git a/otoroshi/app/script/job.scala b/otoroshi/app/script/job.scala index b1b8b7fe75..e935c1dba2 100644 --- a/otoroshi/app/script/job.scala +++ b/otoroshi/app/script/job.scala @@ -21,7 +21,7 @@ import otoroshi.utils.{future, JsonPathValidator, SchedulerHelper, TypedMap} import play.api.Logger import play.api.libs.json._ import otoroshi.security.IdGenerator -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.config.ConfigUtils import otoroshi.utils.syntax.implicits._ @@ -590,8 +590,8 @@ class JobManager(env: Env) { private[script] val jobActorSystem = ActorSystem("jobs-system") private[script] val jobScheduler = jobActorSystem.scheduler - private val registeredJobs = new LegitTrieMap[JobId, RegisteredJobContext]() - private val registeredLocks = new LegitTrieMap[JobId, (String, String)]() + private val registeredJobs = new UnboundedTrieMap[JobId, RegisteredJobContext]() + private val registeredLocks = new UnboundedTrieMap[JobId, (String, String)]() private val scanRef = new AtomicReference[Cancellable]() private val lockRef = new AtomicReference[Cancellable]() diff --git a/otoroshi/app/script/script.scala b/otoroshi/app/script/script.scala index 561c2fee27..c1910ee405 100644 --- a/otoroshi/app/script/script.scala +++ b/otoroshi/app/script/script.scala @@ -18,7 +18,7 @@ import otoroshi.next.extensions.AdminExtension import otoroshi.next.plugins.api._ import otoroshi.security.{IdGenerator, OtoroshiClaim} import otoroshi.storage.{BasicStore, RedisLike, RedisLikeStore} -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.config.ConfigUtils import otoroshi.utils.syntax.implicits._ import otoroshi.utils.{SchedulerHelper, TypedMap} @@ -599,7 +599,7 @@ trait NanoApp extends RequestTransformer { 
override def pluginType: PluginType = PluginType.AppType - private val awaitingRequests = new LegitTrieMap[String, Promise[Source[ByteString, _]]]() + private val awaitingRequests = new UnboundedTrieMap[String, Promise[Source[ByteString, _]]]() override def beforeRequest( ctx: BeforeRequestContext @@ -739,10 +739,10 @@ class ScriptManager(env: Env) { private val logger = Logger("otoroshi-script-manager") private val updateRef = new AtomicReference[Cancellable]() private val firstScan = new AtomicBoolean(false) - private val compiling = new LegitTrieMap[String, Unit]() - private val cache = new LegitTrieMap[String, (String, PluginType, Any)]() - private val cpCache = new LegitTrieMap[String, (PluginType, Any)]() - private val cpTryCache = new LegitTrieMap[String, Unit]() + private val compiling = new UnboundedTrieMap[String, Unit]() + private val cache = new UnboundedTrieMap[String, (String, PluginType, Any)]() + private val cpCache = new UnboundedTrieMap[String, (PluginType, Any)]() + private val cpTryCache = new UnboundedTrieMap[String, Unit]() private val listeningCpScripts = new AtomicReference[Seq[InternalEventListener]](Seq.empty) @@ -1155,7 +1155,7 @@ class ScriptManager(env: Env) { def getAnyScript[A](ref: String)(implicit ec: ExecutionContext): Either[String, A] = { ref match { - case r if r.startsWith("cp:") => { + case r if r.startsWith("cp:") => cpTryCache.synchronized { if (!cpTryCache.contains(ref)) { Try(env.environment.classLoader.loadClass(r.replace("cp:", ""))) // .asSubclass(classOf[A])) .map(clazz => clazz.newInstance()) match { diff --git a/otoroshi/app/ssl/ssl.scala b/otoroshi/app/ssl/ssl.scala index 26a17547df..05e242f7bf 100644 --- a/otoroshi/app/ssl/ssl.scala +++ b/otoroshi/app/ssl/ssl.scala @@ -51,7 +51,7 @@ import play.server.api.SSLEngineProvider import redis.RedisClientMasterSlaves import otoroshi.security.IdGenerator import otoroshi.storage.{BasicStore, RedisLike, RedisLikeStore} -import otoroshi.utils.cache.types.LegitTrieMap +import 
otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.DN import otoroshi.utils.metrics.{FakeHasMetrics, HasMetrics} import otoroshi.utils.metrics.FakeHasMetrics @@ -1122,12 +1122,12 @@ object DynamicSSLEngineProvider { ) val autogenCerts = Scaffeine().expireAfterWrite(5.minutes).maximumSize(1000).build[String, Cert]() - val _ocspProjectionCertificates = new LegitTrieMap[java.math.BigInteger, OCSPCertProjection]() + val _ocspProjectionCertificates = new UnboundedTrieMap[java.math.BigInteger, OCSPCertProjection]() private def allUnrevokedCertMap: TrieMap[String, Cert] = { val datastoreCerts = getCurrentEnv().proxyState.allCertificatesMap().filter(_._2.notRevoked) val genCerts = autogenCerts.asMap() - new LegitTrieMap[String, Cert]().++=(datastoreCerts).++=(genCerts) + new UnboundedTrieMap[String, Cert]().++=(datastoreCerts).++=(genCerts) } private def allUnrevokedCertSeq: Seq[Cert] = allUnrevokedCertMap.values.toSeq @@ -1165,7 +1165,7 @@ object DynamicSSLEngineProvider { val certificates = allUnrevokedCertMap // _certificates.filter(_._2.notRevoked) val trustedCertificates: TrieMap[String, Cert] = if (trustedCerts.nonEmpty) { - new LegitTrieMap[String, Cert]() ++ trustedCerts + new UnboundedTrieMap[String, Cert]() ++ trustedCerts // .flatMap(k => _certificates.get(k)) .flatMap(k => allUnrevokedCertMap.get(k)) .filter(_.notRevoked) diff --git a/otoroshi/app/storage/drivers/cassandra/NewCassandraRedis.scala b/otoroshi/app/storage/drivers/cassandra/NewCassandraRedis.scala index 11ac595571..188b4b8088 100644 --- a/otoroshi/app/storage/drivers/cassandra/NewCassandraRedis.scala +++ b/otoroshi/app/storage/drivers/cassandra/NewCassandraRedis.scala @@ -17,7 +17,7 @@ import otoroshi.env.Env import play.api.{Configuration, Logger} import otoroshi.storage._ import otoroshi.utils.SchedulerHelper -import otoroshi.utils.cache.types.LegitConcurrentHashMap +import otoroshi.utils.cache.types.UnboundedConcurrentHashMap import scala.concurrent.duration._ import 
scala.concurrent.{ExecutionContext, Future} @@ -88,7 +88,7 @@ class NewCassandraRedis(actorSystem: ActorSystem, configuration: Configuration)( private val metrics = new MetricRegistry() - private val patterns = new LegitConcurrentHashMap[String, Pattern]() + private val patterns = new UnboundedConcurrentHashMap[String, Pattern]() private val cassandraDurableWrites: String = configuration.getOptionalWithFileSupport[Boolean]("app.cassandra.durableWrites").map(_.toString).getOrElse("true") diff --git a/otoroshi/app/storage/drivers/inmemory/SwappableInMemoryRedis.scala b/otoroshi/app/storage/drivers/inmemory/SwappableInMemoryRedis.scala index 8d54238a6b..9c83cc9c75 100644 --- a/otoroshi/app/storage/drivers/inmemory/SwappableInMemoryRedis.scala +++ b/otoroshi/app/storage/drivers/inmemory/SwappableInMemoryRedis.scala @@ -7,7 +7,7 @@ import otoroshi.cluster.Cluster import otoroshi.env.Env import otoroshi.storage._ import otoroshi.utils.SchedulerHelper -import otoroshi.utils.cache.types.{LegitConcurrentHashMap, LegitTrieMap} +import otoroshi.utils.cache.types.{UnboundedConcurrentHashMap, UnboundedTrieMap} import otoroshi.utils.syntax.implicits.BetterSyntax import play.api.Logger import play.api.libs.json.{JsValue, Json} @@ -29,14 +29,14 @@ object SwapStrategy { object ModernMemory { def apply( - store: TrieMap[String, Any] = new LegitTrieMap[String, Any](), - expirations: TrieMap[String, Long] = new LegitTrieMap[String, Long]() + store: TrieMap[String, Any] = new UnboundedTrieMap[String, Any](), + expirations: TrieMap[String, Long] = new UnboundedTrieMap[String, Long]() ): ModernMemory = new ModernMemory(store, expirations) } class ModernMemory( - store: TrieMap[String, Any] = new LegitTrieMap[String, Any](), - expirations: TrieMap[String, Long] = new LegitTrieMap[String, Long]() + store: TrieMap[String, Any] = new UnboundedTrieMap[String, Any](), + expirations: TrieMap[String, Long] = new UnboundedTrieMap[String, Long]() ) { def size: Int = store.size def get(key: 
String): Option[Any] = store.get(key) @@ -79,7 +79,7 @@ class ModernMemory( class Memory( val store: ConcurrentHashMap[String, Any], val expirations: ConcurrentHashMap[String, Long], - val newStore: TrieMap[String, Any] = new LegitTrieMap[String, Any]() + val newStore: TrieMap[String, Any] = new UnboundedTrieMap[String, Any]() ) object Memory { @@ -105,12 +105,12 @@ class SwappableInMemoryRedis(_optimized: Boolean, env: Env, actorSystem: ActorSy import collection.JavaConverters._ import scala.concurrent.duration._ - val patterns: ConcurrentHashMap[String, Pattern] = new LegitConcurrentHashMap[String, Pattern]() + val patterns: ConcurrentHashMap[String, Pattern] = new UnboundedConcurrentHashMap[String, Pattern]() private lazy val _storeHolder = new AtomicReference[Memory]( Memory( - store = new LegitConcurrentHashMap[String, Any], - expirations = new LegitConcurrentHashMap[String, Long] + store = new UnboundedConcurrentHashMap[String, Any], + expirations = new UnboundedConcurrentHashMap[String, Long] ) ) @@ -139,7 +139,7 @@ class SwappableInMemoryRedis(_optimized: Boolean, env: Env, actorSystem: ActorSy if (env.clusterConfig.mode.isWorker) { strategy match { case SwapStrategy.Replace => { - val newStore = new LegitConcurrentHashMap[String, Any]() + val newStore = new UnboundedConcurrentHashMap[String, Any]() _memory.store.keySet.asScala .filterNot(key => Cluster.filteredKey(key, env)) .map { k => @@ -274,7 +274,7 @@ class SwappableInMemoryRedis(_optimized: Boolean, env: Env, actorSystem: ActorSy override def hdel(key: String, fields: String*): Future[Long] = { val hash = if (!store.containsKey(key)) { - new LegitConcurrentHashMap[String, ByteString]() + new UnboundedConcurrentHashMap[String, ByteString]() } else { store.get(key).asInstanceOf[ConcurrentHashMap[String, ByteString]] } @@ -292,7 +292,7 @@ class SwappableInMemoryRedis(_optimized: Boolean, env: Env, actorSystem: ActorSy override def hgetall(key: String): Future[Map[String, ByteString]] = { val hash = if 
(!store.containsKey(key)) { - new LegitConcurrentHashMap[String, ByteString]() + new UnboundedConcurrentHashMap[String, ByteString]() } else { store.get(key).asInstanceOf[ConcurrentHashMap[String, ByteString]] } @@ -303,7 +303,7 @@ class SwappableInMemoryRedis(_optimized: Boolean, env: Env, actorSystem: ActorSy override def hsetBS(key: String, field: String, value: ByteString): Future[Boolean] = { val hash = if (!store.containsKey(key)) { - new LegitConcurrentHashMap[String, ByteString]() + new UnboundedConcurrentHashMap[String, ByteString]() } else { store.get(key).asInstanceOf[ConcurrentHashMap[String, ByteString]] } @@ -445,7 +445,7 @@ class ModernSwappableInMemoryRedis(_optimized: Boolean, env: Env, actorSystem: A lazy val logger = Logger("otoroshi-datastores") - val patterns: TrieMap[String, Pattern] = new LegitTrieMap[String, Pattern]() + val patterns: TrieMap[String, Pattern] = new UnboundedTrieMap[String, Pattern]() // private lazy val _storeHolder = new AtomicReference[ModernMemory](ModernMemory()) @@ -564,7 +564,7 @@ class ModernSwappableInMemoryRedis(_optimized: Boolean, env: Env, actorSystem: A ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// override def hdel(key: String, fields: String*): Future[Long] = { - val hash = memory.getTypedOrUpdate[TrieMap[String, ByteString]](key, new LegitTrieMap[String, ByteString]()) + val hash = memory.getTypedOrUpdate[TrieMap[String, ByteString]](key, new UnboundedTrieMap[String, ByteString]()) hash.keySet .filter(k => fields.contains(k)) .map(k => { @@ -576,14 +576,14 @@ class ModernSwappableInMemoryRedis(_optimized: Boolean, env: Env, actorSystem: A } override def hgetall(key: String): Future[Map[String, ByteString]] = { - val hash = memory.getTypedOrUpdate[TrieMap[String, ByteString]](key, new LegitTrieMap[String, ByteString]()) + val hash = memory.getTypedOrUpdate[TrieMap[String, ByteString]](key, new UnboundedTrieMap[String, ByteString]()) 
hash.toMap.future } override def hset(key: String, field: String, value: String): Future[Boolean] = hsetBS(key, field, ByteString(value)) override def hsetBS(key: String, field: String, value: ByteString): Future[Boolean] = { - val hash = memory.getTypedOrUpdate[TrieMap[String, ByteString]](key, new LegitTrieMap[String, ByteString]()) + val hash = memory.getTypedOrUpdate[TrieMap[String, ByteString]](key, new UnboundedTrieMap[String, ByteString]()) hash.put(field, value) memory.put(key, hash) true.future diff --git a/otoroshi/app/storage/drivers/inmemory/persistence.scala b/otoroshi/app/storage/drivers/inmemory/persistence.scala index 45ec7f3502..dd2e6a1946 100644 --- a/otoroshi/app/storage/drivers/inmemory/persistence.scala +++ b/otoroshi/app/storage/drivers/inmemory/persistence.scala @@ -6,7 +6,7 @@ import akka.http.scaladsl.model.ContentTypes import akka.http.scaladsl.util.FastFuture import akka.stream.alpakka.s3.headers.CannedAcl import akka.stream.alpakka.s3.scaladsl.S3 -import akka.stream.alpakka.s3.{MultipartUploadResult, _} +import akka.stream.alpakka.s3._ import akka.stream.scaladsl.{Framing, Keep, Sink, Source} import akka.stream.{Attributes, Materializer} import akka.util.ByteString @@ -14,7 +14,7 @@ import com.google.common.base.Charsets import otoroshi.env.Env import otoroshi.next.plugins.api.NgPluginConfig import otoroshi.utils.SchedulerHelper -import otoroshi.utils.cache.types.{LegitConcurrentHashMap, LegitTrieMap} +import otoroshi.utils.cache.types.{UnboundedConcurrentHashMap, UnboundedTrieMap} import otoroshi.utils.http.Implicits._ import otoroshi.utils.syntax.implicits._ import play.api.Logger @@ -97,8 +97,8 @@ class FilePersistence(ds: InMemoryDataStores, env: Env) extends Persistence { private def readStateFromDisk(source: Seq[String]): Unit = { if (logger.isDebugEnabled) logger.debug("Reading state from disk ...") - val store = new LegitConcurrentHashMap[String, Any]() - val expirations = new LegitConcurrentHashMap[String, Long]() + val store = 
new UnboundedConcurrentHashMap[String, Any]() + val expirations = new UnboundedConcurrentHashMap[String, Long]() source.filterNot(_.trim.isEmpty).foreach { raw => val item = Json.parse(raw) val key = (item \ "k").as[String] @@ -133,7 +133,7 @@ class FilePersistence(ds: InMemoryDataStores, env: Env) extends Persistence { Some(list) } case "hash" if modern => { - val map = new LegitTrieMap[String, ByteString]() + val map = new UnboundedTrieMap[String, ByteString]() map.++=(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String])))) Some(map) } @@ -148,7 +148,7 @@ class FilePersistence(ds: InMemoryDataStores, env: Env) extends Persistence { Some(list) } case "hash" => { - val map = new LegitConcurrentHashMap[String, ByteString] + val map = new UnboundedConcurrentHashMap[String, ByteString] map.putAll(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String]))).asJava) Some(map) } @@ -224,8 +224,8 @@ class HttpPersistence(ds: InMemoryDataStores, env: Env) extends Persistence { if (logger.isDebugEnabled) logger.debug("Reading state from http db ...") implicit val ec = ds.actorSystem.dispatcher implicit val mat = ds.materializer - val store = new LegitConcurrentHashMap[String, Any]() - val expirations = new LegitConcurrentHashMap[String, Long]() + val store = new UnboundedConcurrentHashMap[String, Any]() + val expirations = new UnboundedConcurrentHashMap[String, Long]() val headers = stateHeaders.toSeq ++ Seq( "Accept" -> "application/x-ndjson" ) @@ -278,7 +278,7 @@ class HttpPersistence(ds: InMemoryDataStores, env: Env) extends Persistence { Some(list) } case "hash" if modern => { - val map = new LegitTrieMap[String, ByteString]() + val map = new UnboundedTrieMap[String, ByteString]() map.++=(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String])))) Some(map) } @@ -293,7 +293,7 @@ class HttpPersistence(ds: InMemoryDataStores, env: Env) extends Persistence { Some(list) } case "hash" => { - val map = new LegitConcurrentHashMap[String, 
ByteString] + val map = new UnboundedConcurrentHashMap[String, ByteString] map.putAll(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String]))).asJava) Some(map) } @@ -482,8 +482,8 @@ class S3Persistence(ds: InMemoryDataStores, env: Env) extends Persistence { private def readStateFromS3(): Future[Unit] = { if (logger.isDebugEnabled) logger.debug(s"Reading state from $url") - val store = new LegitConcurrentHashMap[String, Any]() - val expirations = new LegitConcurrentHashMap[String, Long]() + val store = new UnboundedConcurrentHashMap[String, Any]() + val expirations = new UnboundedConcurrentHashMap[String, Long]() val none: Option[(Source[ByteString, NotUsed], ObjectMetadata)] = None S3.download(conf.bucket, conf.key).withAttributes(s3ClientSettingsAttrs).runFold(none)((_, opt) => opt).map { case None => @@ -533,7 +533,7 @@ class S3Persistence(ds: InMemoryDataStores, env: Env) extends Persistence { Some(list) } case "hash" if modern => { - val map = new LegitTrieMap[String, ByteString]() + val map = new UnboundedTrieMap[String, ByteString]() map.++=(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String])))) Some(map) } @@ -548,7 +548,7 @@ class S3Persistence(ds: InMemoryDataStores, env: Env) extends Persistence { Some(list) } case "hash" => { - val map = new LegitConcurrentHashMap[String, ByteString] + val map = new UnboundedConcurrentHashMap[String, ByteString] map.putAll(value.as[JsObject].value.map(t => (t._1, ByteString(t._2.as[String]))).asJava) Some(map) } diff --git a/otoroshi/app/storage/storage.scala b/otoroshi/app/storage/storage.scala index 63753e871a..4e4b802930 100644 --- a/otoroshi/app/storage/storage.scala +++ b/otoroshi/app/storage/storage.scala @@ -17,7 +17,7 @@ import otoroshi.ssl.{CertificateDataStore, ClientCertificateValidationDataStore} import otoroshi.storage.drivers.inmemory.{Memory, SwapStrategy, SwappableRedis} import otoroshi.storage.stores._ import otoroshi.tcp.TcpServiceDataStore -import 
otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits.BetterSyntax import play.api.inject.ApplicationLifecycle import play.api.libs.json._ @@ -892,7 +892,7 @@ case class IncrOptimizerItem( } class IncrOptimizer(ops: Int, time: Int) { - private val cache = new LegitTrieMap[String, IncrOptimizerItem]() + private val cache = new UnboundedTrieMap[String, IncrOptimizerItem]() def incrBy(key: String, increment: Long)(f: Long => Future[Long])(implicit ec: ExecutionContext): Future[Long] = { cache.get(key) match { case None => diff --git a/otoroshi/app/storage/stores/KvGlobalConfigDataStore.scala b/otoroshi/app/storage/stores/KvGlobalConfigDataStore.scala index 3b0edbcf63..778e7f33cb 100644 --- a/otoroshi/app/storage/stores/KvGlobalConfigDataStore.scala +++ b/otoroshi/app/storage/stores/KvGlobalConfigDataStore.scala @@ -10,7 +10,7 @@ import otoroshi.security.Auth0Config import otoroshi.ssl.{Cert, ClientCertificateValidator} import otoroshi.storage.{RedisLike, RedisLikeStore} import otoroshi.tcp.TcpService -import otoroshi.utils.cache.types.LegitConcurrentHashMap +import otoroshi.utils.cache.types.UnboundedConcurrentHashMap import play.api.Logger import play.api.libs.json._ @@ -36,9 +36,9 @@ class KvGlobalConfigDataStore(redisCli: RedisLike, _env: Env) def throttlingKey(): String = s"${_env.storageRoot}:throttling:global" private val callsForIpAddressCache = - new LegitConcurrentHashMap[String, java.util.concurrent.atomic.AtomicLong]() // TODO: check growth over time + new UnboundedConcurrentHashMap[String, java.util.concurrent.atomic.AtomicLong]() // TODO: check growth over time private val quotasForIpAddressCache = - new LegitConcurrentHashMap[String, java.util.concurrent.atomic.AtomicLong]() // TODO: check growth over time + new UnboundedConcurrentHashMap[String, java.util.concurrent.atomic.AtomicLong]() // TODO: check growth over time def incrementCallsForIpAddressWithTTL(ipAddress: String, 
ttl: Int = 10)(implicit ec: ExecutionContext diff --git a/otoroshi/app/utils/cache.scala b/otoroshi/app/utils/cache.scala index 579ce7a39f..858b0338cd 100644 --- a/otoroshi/app/utils/cache.scala +++ b/otoroshi/app/utils/cache.scala @@ -7,11 +7,14 @@ import scala.collection.concurrent.TrieMap import scala.concurrent.duration.FiniteDuration package object types { - type LegitTrieMap[A, B] = TrieMap[A, B] - type LegitConcurrentHashMap[A, B] = ConcurrentHashMap[A, B] + type UnboundedTrieMap[A, B] = TrieMap[A, B] + type UnboundedConcurrentHashMap[A, B] = ConcurrentHashMap[A, B] } object Caches { + def unbounded[A, B](): Cache[A, B] = { + Scaffeine().build[A, B]() + } def bounded[A, B](maxItems: Int): Cache[A, B] = { Scaffeine().maximumSize(maxItems).build[A, B]() } diff --git a/otoroshi/app/utils/httpclient.scala b/otoroshi/app/utils/httpclient.scala index bb291b30cb..3a45c40a3c 100644 --- a/otoroshi/app/utils/httpclient.scala +++ b/otoroshi/app/utils/httpclient.scala @@ -30,7 +30,7 @@ import play.api.mvc.MultipartFormData import play.shaded.ahc.org.asynchttpclient.util.Assertions import otoroshi.security.IdGenerator import otoroshi.ssl.{Cert, DynamicSSLEngineProvider} -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.syntax.implicits._ import reactor.netty.http.client.HttpClient @@ -792,7 +792,7 @@ class AkkWsClient(config: WSClientConfig, env: Env)(implicit system: ActorSystem } } - private val queueCache = new LegitTrieMap[String, SourceQueueWithComplete[(HttpRequest, Promise[HttpResponse])]]() + private val queueCache = new UnboundedTrieMap[String, SourceQueueWithComplete[(HttpRequest, Promise[HttpResponse])]]() private def getQueue( request: HttpRequest, diff --git a/otoroshi/app/utils/jsonpath.scala b/otoroshi/app/utils/jsonpath.scala index 6c2a679ed1..5db2e6106b 100644 --- a/otoroshi/app/utils/jsonpath.scala +++ b/otoroshi/app/utils/jsonpath.scala @@ -86,8 +86,8 @@ object JsonPathUtils { } 
def getAtPolyF(payload: String, path: String): Either[JsonPathReadError, JsValue] = { - val env = OtoroshiEnvHolder.get() - env.metrics.withTimer("JsonPathUtils.getAtPolyF") { + //val env = OtoroshiEnvHolder.get() + //env.metrics.withTimer("JsonPathUtils.getAtPolyF") { Try { val docCtx = JsonPath.parse(payload, config) val read = docCtx.read[JsonNode](path) @@ -101,7 +101,7 @@ object JsonPathUtils { case Failure(e) => Left(JsonPathReadError("error while trying to read", path, payload, e.some)) case Success(s) => Right(s) } - } + //} } def getAtPoly(payload: String, path: String): Option[JsValue] = { diff --git a/otoroshi/app/utils/jwk.scala b/otoroshi/app/utils/jwk.scala index f24f67e0d8..16256cdf4f 100644 --- a/otoroshi/app/utils/jwk.scala +++ b/otoroshi/app/utils/jwk.scala @@ -7,7 +7,7 @@ import com.auth0.jwk.{GuavaCachedJwkProvider, Jwk, JwkProvider, UrlJwkProvider} import com.auth0.jwt.algorithms.Algorithm import com.google.common.collect.Maps import org.apache.commons.codec.binary.{Base64 => ApacheBase64} -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import play.api.libs.json.{JsArray, JsObject, Json} import java.util.Collections @@ -55,7 +55,7 @@ class StringJwkProvider(jwkRaw: String) extends JwkProvider { object JwtVerifierHelper { - val cache = new LegitTrieMap[String, JwkProvider]() + val cache = new UnboundedTrieMap[String, JwkProvider]() def fromBase64(key: String): Array[Byte] = { ApacheBase64.decodeBase64(key) diff --git a/otoroshi/app/utils/metrics.scala b/otoroshi/app/utils/metrics.scala index 06b9e83b0f..611fe49a19 100644 --- a/otoroshi/app/utils/metrics.scala +++ b/otoroshi/app/utils/metrics.scala @@ -19,7 +19,7 @@ import otoroshi.cluster.{ClusterMode, StatsView} import otoroshi.env.Env import otoroshi.events.StatsDReporter import otoroshi.utils.RegexPool -import otoroshi.utils.cache.types.LegitConcurrentHashMap +import otoroshi.utils.cache.types.UnboundedConcurrentHashMap import 
otoroshi.utils.prometheus.CustomCollector import play.api.Logger import play.api.inject.ApplicationLifecycle @@ -81,7 +81,7 @@ class Metrics(env: Env, applicationLifecycle: ApplicationLifecycle) extends Time private val lastdataInRate = new AtomicLong(0L) private val lastdataOutRate = new AtomicLong(0L) private val lastconcurrentHandledRequests = new AtomicLong(0L) - private val lastData = new LegitConcurrentHashMap[String, AtomicReference[Any]]() // TODO: analyze growth over time + private val lastData = new UnboundedConcurrentHashMap[String, AtomicReference[Any]]() // TODO: analyze growth over time // metricRegistry.register("jvm.buffer", new BufferPoolMetricSet(ManagementFactory.getPlatformMBeanServer())) // metricRegistry.register("jvm.classloading", new ClassLoadingGaugeSet()) diff --git a/otoroshi/app/utils/regex.scala b/otoroshi/app/utils/regex.scala index 4939372ba3..d35b8d64c6 100644 --- a/otoroshi/app/utils/regex.scala +++ b/otoroshi/app/utils/regex.scala @@ -1,6 +1,6 @@ package otoroshi.utils -import otoroshi.utils.cache.types.LegitConcurrentHashMap +import otoroshi.utils.cache.types.UnboundedConcurrentHashMap import otoroshi.utils.syntax.implicits.BetterSyntax import play.api.Logger @@ -17,7 +17,7 @@ object RegexPool { lazy val logger = Logger("otoroshi-regex-pool") - private val pool = new LegitConcurrentHashMap[String, Regex]() // TODO: check growth over time + private val pool = new UnboundedConcurrentHashMap[String, Regex]() // TODO: check growth over time def apply(originalPattern: String): Regex = { if (!pool.containsKey(originalPattern)) { diff --git a/otoroshi/app/utils/syntax.scala b/otoroshi/app/utils/syntax.scala index 25271de463..419e403842 100644 --- a/otoroshi/app/utils/syntax.scala +++ b/otoroshi/app/utils/syntax.scala @@ -22,6 +22,7 @@ import reactor.core.publisher.Mono import java.io.{ByteArrayInputStream, File} import java.nio.charset.StandardCharsets import java.nio.file.Files +import java.security.MessageDigest import 
java.security.cert.{CertificateFactory, X509Certificate} import java.util.concurrent.TimeUnit import java.util.concurrent.atomic.AtomicReference @@ -246,11 +247,6 @@ object implicits { implicit class RegexOps(sc: StringContext) { def rr = new scala.util.matching.Regex(sc.parts.mkString) } - object BetterString { - import java.security.MessageDigest - val digest256 = MessageDigest.getInstance("SHA-256") - val digest512 = MessageDigest.getInstance("SHA-512") - } implicit class BetterString(private val obj: String) extends AnyVal { import otoroshi.utils.string.Implicits._ def slugify: String = obj.slug @@ -266,8 +262,8 @@ object implicits { def base64UrlSafe: String = Base64.encodeBase64URLSafeString(obj.getBytes(StandardCharsets.UTF_8)) def fromBase64: String = new String(Base64.decodeBase64(obj), StandardCharsets.UTF_8) def decodeBase64: String = new String(Base64.decodeBase64(obj), StandardCharsets.UTF_8) - def sha256: String = Hex.encodeHexString(BetterString.digest256.digest(obj.getBytes(StandardCharsets.UTF_8))) - def sha512: String = Hex.encodeHexString(BetterString.digest512.digest(obj.getBytes(StandardCharsets.UTF_8))) + def sha256: String = Hex.encodeHexString(MessageDigest.getInstance("SHA-256").digest(obj.getBytes(StandardCharsets.UTF_8))) + def sha512: String = Hex.encodeHexString(MessageDigest.getInstance("SHA-512").digest(obj.getBytes(StandardCharsets.UTF_8))) def chunks(size: Int): Source[String, NotUsed] = Source(obj.grouped(size).toList) def camelToSnake: String = { obj.replaceAll("([a-z])([A-Z]+)", "$1_$2").toLowerCase @@ -282,8 +278,8 @@ object implicits { } implicit class BetterByteString(private val obj: ByteString) extends AnyVal { def chunks(size: Int): Source[ByteString, NotUsed] = Source(obj.grouped(size).toList) - def sha256: String = Hex.encodeHexString(BetterString.digest256.digest(obj.toArray)) - def sha512: String = Hex.encodeHexString(BetterString.digest512.digest(obj.toArray)) + def sha256: String = 
Hex.encodeHexString(MessageDigest.getInstance("SHA-256").digest(obj.toArray)) + def sha512: String = Hex.encodeHexString(MessageDigest.getInstance("SHA-512").digest(obj.toArray)) } implicit class BetterBoolean(private val obj: Boolean) extends AnyVal { def json: JsValue = JsBoolean(obj) diff --git a/otoroshi/app/utils/typedmap.scala b/otoroshi/app/utils/typedmap.scala index 8e6d70088e..4ffa024efd 100644 --- a/otoroshi/app/utils/typedmap.scala +++ b/otoroshi/app/utils/typedmap.scala @@ -4,7 +4,7 @@ import akka.http.scaladsl.model.DateTime import otoroshi.gateway.GwError import otoroshi.models._ import otoroshi.next.models.{NgBackend, NgRoute, NgTarget} -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.json._ import play.api.libs.json._ import play.api.libs.typedmap.{TypedEntry, TypedKey} @@ -28,7 +28,7 @@ trait TypedMap { } object TypedMap { - def empty: TypedMap = new ConcurrentMutableTypedMap(new LegitTrieMap[TypedKey[_], Any]) + def empty: TypedMap = new ConcurrentMutableTypedMap(new UnboundedTrieMap[TypedKey[_], Any]) def apply(entries: TypedEntry[_]*): TypedMap = { TypedMap.empty.put(entries: _*) } diff --git a/otoroshi/app/utils/workflow.scala b/otoroshi/app/utils/workflow.scala index dc3393f30a..44785f5ddd 100644 --- a/otoroshi/app/utils/workflow.scala +++ b/otoroshi/app/utils/workflow.scala @@ -9,7 +9,7 @@ import akka.util.ByteString import otoroshi.env.Env import otoroshi.utils.JsonPathUtils import otoroshi.utils.ReplaceAllWith -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.MtlsConfig import otoroshi.utils.syntax.implicits._ import play.api.Logger @@ -253,8 +253,8 @@ class WorkFlow(spec: WorkFlowSpec) { )(implicit ec: ExecutionContext, mat: Materializer, env: Env): Future[WorkFlowResponse] = { val ctx = WorkFlowTaskContext( input.input, - new LegitTrieMap[String, JsValue](), - new LegitTrieMap[String, 
JsValue](), + new UnboundedTrieMap[String, JsValue](), + new UnboundedTrieMap[String, JsValue](), new AtomicReference[JsValue]( Json.obj( "status" -> 200, diff --git a/otoroshi/app/wasm/host.scala b/otoroshi/app/wasm/host.scala index f437d2afbf..703c82ee94 100644 --- a/otoroshi/app/wasm/host.scala +++ b/otoroshi/app/wasm/host.scala @@ -3,15 +3,14 @@ package otoroshi.wasm import akka.http.scaladsl.model.Uri import akka.stream.Materializer import akka.util.ByteString -import org.extism.sdk._ +import org.extism.sdk.wasmotoroshi._ import org.joda.time.DateTime import otoroshi.cluster.ClusterConfig import otoroshi.env.Env import otoroshi.events.WasmLogEvent import otoroshi.models._ import otoroshi.next.models.NgTarget -import otoroshi.next.plugins.api.NgCachedConfigContext -import otoroshi.utils.cache.types.LegitTrieMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.json.JsonOperationsHelper import otoroshi.utils.syntax.implicits._ import otoroshi.utils.{ConcurrentMutableTypedMap, RegexPool, TypedMap} @@ -25,8 +24,8 @@ import scala.concurrent.duration.{Duration, DurationInt, FiniteDuration} import scala.concurrent.{Await, ExecutionContext, Future} object Utils { - def rawBytePtrToString(plugin: ExtismCurrentPlugin, offset: Long, arrSize: Long): String = { - val memoryLength = LibExtism.INSTANCE.extism_current_plugin_memory_length(plugin.pointer, arrSize) + def rawBytePtrToString(plugin: WasmOtoroshiInternal, offset: Long, arrSize: Long): String = { + val memoryLength = plugin.memoryLength(arrSize) val arr = plugin .memory() .share(offset, memoryLength) @@ -34,11 +33,11 @@ object Utils { new String(arr, StandardCharsets.UTF_8) } - def contextParamsToString(plugin: ExtismCurrentPlugin, params: LibExtism.ExtismVal*) = { + def contextParamsToString(plugin: WasmOtoroshiInternal, params: WasmBridge.ExtismVal*) = { rawBytePtrToString(plugin, params(0).v.i64, params(1).v.i32) } - def contextParamsToJson(plugin: ExtismCurrentPlugin, params: 
LibExtism.ExtismVal*) = { + def contextParamsToJson(plugin: WasmOtoroshiInternal, params: WasmBridge.ExtismVal*) = { Json.parse(rawBytePtrToString(plugin, params(0).v.i64, params(1).v.i32)) } } @@ -48,15 +47,15 @@ case class EnvUserData( executionContext: ExecutionContext, mat: Materializer, config: WasmConfig -) extends HostUserData +) extends WasmOtoroshiHostUserData case class StateUserData( env: Env, executionContext: ExecutionContext, mat: Materializer, - cache: LegitTrieMap[String, LegitTrieMap[String, ByteString]] -) extends HostUserData -case class EmptyUserData() extends HostUserData + cache: UnboundedTrieMap[String, UnboundedTrieMap[String, ByteString]] +) extends WasmOtoroshiHostUserData +case class EmptyUserData() extends WasmOtoroshiHostUserData object LogLevel extends Enumeration { type LogLevel = Value @@ -72,7 +71,7 @@ object Status extends Enumeration { } case class HostFunctionWithAuthorization( - function: HostFunction[_ <: HostUserData], + function: WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData], authorized: WasmAuthorizations => Boolean ) @@ -84,20 +83,20 @@ trait AwaitCapable { object HFunction { - def defineEmptyFunction(fname: String, returnType: LibExtism.ExtismValType, params: LibExtism.ExtismValType*)( - f: (ExtismCurrentPlugin, Array[LibExtism.ExtismVal], Array[LibExtism.ExtismVal]) => Unit - ): org.extism.sdk.HostFunction[EmptyUserData] = { + def defineEmptyFunction(fname: String, returnType: WasmBridge.ExtismValType, params: WasmBridge.ExtismValType*)( + f: (WasmOtoroshiInternal, Array[WasmBridge.ExtismVal], Array[WasmBridge.ExtismVal]) => Unit + ): WasmOtoroshiHostFunction[EmptyUserData] = { defineFunction[EmptyUserData](fname, None, returnType, params: _*)((p1, p2, p3, _) => f(p1, p2, p3)) } def defineClassicFunction( fname: String, config: WasmConfig, - returnType: LibExtism.ExtismValType, - params: LibExtism.ExtismValType* + returnType: WasmBridge.ExtismValType, + params: WasmBridge.ExtismValType* )( - f: 
(ExtismCurrentPlugin, Array[LibExtism.ExtismVal], Array[LibExtism.ExtismVal], EnvUserData) => Unit - )(implicit env: Env, ec: ExecutionContext, mat: Materializer): org.extism.sdk.HostFunction[EnvUserData] = { + f: (WasmOtoroshiInternal, Array[WasmBridge.ExtismVal], Array[WasmBridge.ExtismVal], EnvUserData) => Unit + )(implicit env: Env, ec: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { val ev = EnvUserData(env, ec, mat, config) defineFunction[EnvUserData](fname, ev.some, returnType, params: _*)((p1, p2, p3, _) => f(p1, p2, p3, ev)) } @@ -106,35 +105,35 @@ object HFunction { fname: String, config: WasmConfig )( - f: (ExtismCurrentPlugin, Array[LibExtism.ExtismVal], Array[LibExtism.ExtismVal], EnvUserData) => Unit - )(implicit env: Env, ec: ExecutionContext, mat: Materializer): org.extism.sdk.HostFunction[EnvUserData] = { + f: (WasmOtoroshiInternal, Array[WasmBridge.ExtismVal], Array[WasmBridge.ExtismVal], EnvUserData) => Unit + )(implicit env: Env, ec: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { val ev = EnvUserData(env, ec, mat, config) defineFunction[EnvUserData]( fname, ev.some, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 )((p1, p2, p3, _) => f(p1, p2, p3, ev)) } - def defineFunction[A <: HostUserData]( + def defineFunction[A <: WasmOtoroshiHostUserData]( fname: String, data: Option[A], - returnType: LibExtism.ExtismValType, - params: LibExtism.ExtismValType* + returnType: WasmBridge.ExtismValType, + params: WasmBridge.ExtismValType* )( - f: (ExtismCurrentPlugin, Array[LibExtism.ExtismVal], Array[LibExtism.ExtismVal], Option[A]) => Unit - ): org.extism.sdk.HostFunction[A] = { - new org.extism.sdk.HostFunction[A]( + f: (WasmOtoroshiInternal, Array[WasmBridge.ExtismVal], Array[WasmBridge.ExtismVal], Option[A]) => Unit + ): WasmOtoroshiHostFunction[A] = { + 
new WasmOtoroshiHostFunction[A]( fname, Array(params: _*), Array(returnType), - new ExtismFunction[A] { + new WasmOtoroshiExtismFunction[A] { override def invoke( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[A] ): Unit = { f(plugin, params, returns, if (data.isEmpty) None else Some(data.get())) @@ -154,10 +153,10 @@ object Logging extends AwaitCapable { def proxyLog() = HFunction.defineEmptyFunction( "proxy_log", - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, params, returns) => val logLevel = LogLevel(params(0).v.i32) @@ -178,13 +177,13 @@ object Logging extends AwaitCapable { env: Env, executionContext: ExecutionContext, mat: Materializer - ): org.extism.sdk.HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineClassicFunction( "proxy_log_event", config, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, params, returns, ud) => val data = Utils.contextParamsToJson(plugin, params: _*) val route = data.select("route_id").asOpt[String].flatMap(env.proxyState.route) @@ -220,9 +219,9 @@ object Http extends AwaitCapable { def proxyHttpCall(config: WasmConfig)(implicit env: Env, executionContext: ExecutionContext, mat: Materializer) = { HFunction.defineContextualFunction("proxy_http_call", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: 
Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -295,12 +294,12 @@ object Http extends AwaitCapable { env: Env, ec: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineClassicFunction( "proxy_get_attrs", config, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, _, returns, hostData) => attrs match { case None => plugin.returnBytes(returns(0), Array.empty[Byte]) @@ -313,12 +312,12 @@ object Http extends AwaitCapable { env: Env, ec: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineContextualFunction("proxy_get_attr", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -339,12 +338,12 @@ object Http extends AwaitCapable { env: Env, ec: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineClassicFunction( "proxy_clear_attrs", config, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, _, returns, hostData) => attrs match { case None => plugin.returnInt(returns(0), 0) @@ -393,12 +392,12 @@ object Http extends AwaitCapable { env: Env, ec: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineContextualFunction("proxy_set_attr", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], 
hostData: EnvUserData ) => { @@ -426,12 +425,12 @@ object Http extends AwaitCapable { env: Env, ec: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineContextualFunction("proxy_del_attr", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -469,13 +468,13 @@ object DataStore extends AwaitCapable { pluginRestricted: Boolean = false, prefix: Option[String] = None, config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_all_matching", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -494,13 +493,13 @@ object DataStore extends AwaitCapable { env: Env, executionContext: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_keys", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -519,13 +518,13 @@ object DataStore extends AwaitCapable { env: Env, executionContext: 
ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_get", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -543,13 +542,13 @@ object DataStore extends AwaitCapable { pluginRestricted: Boolean = false, prefix: Option[String] = None, config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_exists", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -566,13 +565,13 @@ object DataStore extends AwaitCapable { env: Env, executionContext: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_pttl", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -588,13 +587,13 @@ object DataStore extends AwaitCapable { env: Env, executionContext: ExecutionContext, mat: Materializer 
- ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_setnx", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -618,13 +617,13 @@ object DataStore extends AwaitCapable { env: Env, executionContext: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_set", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -648,13 +647,13 @@ object DataStore extends AwaitCapable { env: Env, executionContext: ExecutionContext, mat: Materializer - ): HostFunction[EnvUserData] = { + ): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_del", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -674,13 +673,13 @@ object DataStore extends AwaitCapable { pluginRestricted: Boolean = false, prefix: Option[String] = None, config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: 
Materializer): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_incrby", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -699,13 +698,13 @@ object DataStore extends AwaitCapable { pluginRestricted: Boolean = false, prefix: Option[String] = None, config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { val prefixName = if (pluginRestricted) "plugin_" else "" HFunction.defineContextualFunction(s"proxy_${prefixName}datastore_pexpire", config) { ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], hostData: EnvUserData ) => { @@ -781,19 +780,19 @@ object DataStore extends AwaitCapable { object State { - private val cache: LegitTrieMap[String, LegitTrieMap[String, ByteString]] = - new LegitTrieMap[String, LegitTrieMap[String, ByteString]]() + private val cache: UnboundedTrieMap[String, UnboundedTrieMap[String, ByteString]] = + new UnboundedTrieMap[String, UnboundedTrieMap[String, ByteString]]() def getClusterState(cc: ClusterConfig): JsValue = cc.json def getProxyState( config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineClassicFunction( "proxy_state", config, - 
LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, _, returns, hostData) => { val proxyState = hostData.env.proxyState @@ -828,7 +827,7 @@ object State { } def proxyStateGetValue( config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineContextualFunction("proxy_state_value", config) { (plugin, params, returns, userData) => { val context = Utils.contextParamsToJson(plugin, params: _*) @@ -890,12 +889,12 @@ object State { def getProxyConfig( config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineClassicFunction( "proxy_config", config, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, _, returns, hostData) => { val cc = hostData.env.configurationJson.stringify @@ -910,8 +909,8 @@ object State { HFunction.defineClassicFunction( "proxy_global_config", config, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, _, returns, hostData) => { val cc = hostData.env.datastores.globalConfigDataStore.latest().json.stringify @@ -922,12 +921,12 @@ object State { def getClusterState( config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineClassicFunction( "proxy_cluster_state", config, - 
LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, _, returns, hostData) => { val cc = hostData.env.clusterConfig @@ -938,7 +937,7 @@ object State { def proxyClusteStateGetValue( config: WasmConfig - )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): HostFunction[EnvUserData] = { + )(implicit env: Env, executionContext: ExecutionContext, mat: Materializer): WasmOtoroshiHostFunction[EnvUserData] = { HFunction.defineContextualFunction("proxy_cluster_state_value", config) { (plugin, params, returns, userData) => { val path = Utils.contextParamsToString(plugin, params: _*) @@ -953,13 +952,13 @@ object State { env: Env, executionContext: ExecutionContext, mat: Materializer - ): HostFunction[StateUserData] = { + ): WasmOtoroshiHostFunction[StateUserData] = { HFunction.defineFunction[StateUserData]( if (pluginRestricted) "proxy_plugin_map_set" else "proxy_global_map_set", StateUserData(env, executionContext, mat, cache).some, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, params, returns, userData: Option[StateUserData]) => { userData.map(hostData => { @@ -974,7 +973,7 @@ object State { state.put(key, ByteString(value)) hostData.cache.put(id, state) case None => - val state = new LegitTrieMap[String, ByteString]() + val state = new UnboundedTrieMap[String, ByteString]() state.put(key, ByteString(value)) hostData.cache.put(id, state) } @@ -989,13 +988,13 @@ object State { env: Env, executionContext: ExecutionContext, mat: Materializer - ): HostFunction[StateUserData] = { + ): WasmOtoroshiHostFunction[StateUserData] = { HFunction.defineFunction[StateUserData]( if (pluginRestricted) "proxy_plugin_map_del" else "proxy_global_map_del", StateUserData(env, executionContext, mat, cache).some, - 
LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, params, returns, userData: Option[StateUserData]) => { userData.map(hostData => { @@ -1006,7 +1005,7 @@ object State { state.remove(key) hostData.cache.put(id, state) case None => - val state = new LegitTrieMap[String, ByteString]() + val state = new UnboundedTrieMap[String, ByteString]() state.remove(key) hostData.cache.put(id, state) } @@ -1020,13 +1019,13 @@ object State { env: Env, executionContext: ExecutionContext, mat: Materializer - ): HostFunction[StateUserData] = { + ): WasmOtoroshiHostFunction[StateUserData] = { HFunction.defineFunction[StateUserData]( if (pluginRestricted) "proxy_plugin_map_get" else "proxy_global_map_get", StateUserData(env, executionContext, mat, cache).some, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, params, returns, userData: Option[StateUserData]) => { userData.map(hostData => { @@ -1050,12 +1049,12 @@ object State { env: Env, executionContext: ExecutionContext, mat: Materializer - ): HostFunction[StateUserData] = { + ): WasmOtoroshiHostFunction[StateUserData] = { HFunction.defineFunction[StateUserData]( if (pluginRestricted) "proxy_plugin_map" else "proxy_global_map", StateUserData(env, executionContext, mat, cache).some, - LibExtism.ExtismValType.I64, - LibExtism.ExtismValType.I64 + WasmBridge.ExtismValType.I64, + WasmBridge.ExtismValType.I64 ) { (plugin, _, returns, userData: Option[StateUserData]) => { userData.map(hostData => { @@ -1105,7 +1104,7 @@ object HostFunctions { def getFunctions(config: WasmConfig, pluginId: String, attrs: Option[TypedMap])(implicit env: Env, executionContext: ExecutionContext - ): Array[HostFunction[_ <: HostUserData]] = { + ): 
Array[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] = { implicit val mat = env.otoroshiMaterializer diff --git a/otoroshi/app/wasm/opa.scala b/otoroshi/app/wasm/opa.scala index 59c5b24dcd..0fccf4d744 100644 --- a/otoroshi/app/wasm/opa.scala +++ b/otoroshi/app/wasm/opa.scala @@ -1,158 +1,156 @@ package otoroshi.wasm; -import akka.stream.Materializer -import org.extism.sdk.parameters.{IntegerParameter, Parameters} -import org.extism.sdk._ -import otoroshi.env.Env -import otoroshi.next.plugins.api.NgCachedConfigContext +import org.extism.sdk.wasmotoroshi._ +import otoroshi.utils.syntax.implicits.{BetterJsValue, BetterSyntax} +import play.api.libs.json.{JsString, JsValue, Json} +import java.nio.ByteBuffer import java.nio.charset.StandardCharsets import java.util.Optional -import java.util.concurrent.atomic.AtomicReference -import scala.concurrent.ExecutionContext; +import java.util.concurrent.atomic.AtomicReference; object OPA extends AwaitCapable { - def opaAbortFunction: ExtismFunction[EmptyUserData] = + def opaAbortFunction: WasmOtoroshiExtismFunction[EmptyUserData] = ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EmptyUserData] ) => { System.out.println("opaAbortFunction"); } - def opaPrintlnFunction: ExtismFunction[EmptyUserData] = + def opaPrintlnFunction: WasmOtoroshiExtismFunction[EmptyUserData] = ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EmptyUserData] ) => { System.out.println("opaPrintlnFunction"); } - def opaBuiltin0Function: ExtismFunction[EmptyUserData] = + def opaBuiltin0Function: WasmOtoroshiExtismFunction[EmptyUserData] = ( - plugin: ExtismCurrentPlugin, - params: 
Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EmptyUserData] ) => { System.out.println("opaBuiltin0Function"); } - def opaBuiltin1Function: ExtismFunction[EmptyUserData] = + def opaBuiltin1Function: WasmOtoroshiExtismFunction[EmptyUserData] = ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EmptyUserData] ) => { System.out.println("opaBuiltin1Function"); } - def opaBuiltin2Function: ExtismFunction[EmptyUserData] = + def opaBuiltin2Function: WasmOtoroshiExtismFunction[EmptyUserData] = ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EmptyUserData] ) => { System.out.println("opaBuiltin2Function"); } - def opaBuiltin3Function: ExtismFunction[EmptyUserData] = + def opaBuiltin3Function: WasmOtoroshiExtismFunction[EmptyUserData] = ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EmptyUserData] ) => { System.out.println("opaBuiltin3Function"); }; - def opaBuiltin4Function: ExtismFunction[EmptyUserData] = + def opaBuiltin4Function: WasmOtoroshiExtismFunction[EmptyUserData] = ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EmptyUserData] ) => { System.out.println("opaBuiltin4Function"); } - def 
opaAbort() = new org.extism.sdk.HostFunction[EmptyUserData]( + def opaAbort() = new WasmOtoroshiHostFunction[EmptyUserData]( "opa_abort", - Array(LibExtism.ExtismValType.I32), + Array(WasmBridge.ExtismValType.I32), Array(), opaAbortFunction, Optional.empty() ) - def opaPrintln() = new org.extism.sdk.HostFunction[EmptyUserData]( + def opaPrintln() = new WasmOtoroshiHostFunction[EmptyUserData]( "opa_println", - Array(LibExtism.ExtismValType.I64), - Array(LibExtism.ExtismValType.I64), + Array(WasmBridge.ExtismValType.I64), + Array(WasmBridge.ExtismValType.I64), opaPrintlnFunction, Optional.empty() ) - def opaBuiltin0() = new org.extism.sdk.HostFunction[EmptyUserData]( + def opaBuiltin0() = new WasmOtoroshiHostFunction[EmptyUserData]( "opa_builtin0", - Array(LibExtism.ExtismValType.I32, LibExtism.ExtismValType.I32), - Array(LibExtism.ExtismValType.I32), + Array(WasmBridge.ExtismValType.I32, WasmBridge.ExtismValType.I32), + Array(WasmBridge.ExtismValType.I32), opaBuiltin0Function, Optional.empty() ) - def opaBuiltin1() = new org.extism.sdk.HostFunction[EmptyUserData]( + def opaBuiltin1() = new WasmOtoroshiHostFunction[EmptyUserData]( "opa_builtin1", - Array(LibExtism.ExtismValType.I32, LibExtism.ExtismValType.I32, LibExtism.ExtismValType.I32), - Array(LibExtism.ExtismValType.I32), + Array(WasmBridge.ExtismValType.I32, WasmBridge.ExtismValType.I32, WasmBridge.ExtismValType.I32), + Array(WasmBridge.ExtismValType.I32), opaBuiltin1Function, Optional.empty() ) - def opaBuiltin2() = new org.extism.sdk.HostFunction[EmptyUserData]( + def opaBuiltin2() = new WasmOtoroshiHostFunction[EmptyUserData]( "opa_builtin2", Array( - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32 + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32 ), - Array(LibExtism.ExtismValType.I32), + Array(WasmBridge.ExtismValType.I32), opaBuiltin2Function, 
Optional.empty() ) - def opaBuiltin3() = new org.extism.sdk.HostFunction[EmptyUserData]( + def opaBuiltin3() = new WasmOtoroshiHostFunction[EmptyUserData]( "opa_builtin3", Array( - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32 + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32 ), - Array(LibExtism.ExtismValType.I32), + Array(WasmBridge.ExtismValType.I32), opaBuiltin3Function, Optional.empty() ) - def opaBuiltin4() = new org.extism.sdk.HostFunction[EmptyUserData]( + def opaBuiltin4() = new WasmOtoroshiHostFunction[EmptyUserData]( "opa_builtin4", Array( - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32, - LibExtism.ExtismValType.I32 + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32, + WasmBridge.ExtismValType.I32 ), - Array(LibExtism.ExtismValType.I32), + Array(WasmBridge.ExtismValType.I32), opaBuiltin4Function, Optional.empty() ) @@ -169,98 +167,93 @@ object OPA extends AwaitCapable { ) } - def getLinearMemories(): Seq[LinearMemory] = { + def getLinearMemories(): Seq[WasmOtoroshiLinearMemory] = { Seq( - new LinearMemory("memory", "env", new LinearMemoryOptions(5, Optional.empty())) + new WasmOtoroshiLinearMemory("memory", "env", new WasmOtoroshiLinearMemoryOptions(5, Optional.empty())) ) } - def loadJSON(plugin: Plugin, value: Array[Byte]): Int = { + def loadJSON(plugin: WasmOtoroshiInstance, value: Array[Byte]): Either[JsValue, Int] = { if (value.length == 0) { - return 0 + 0.right } else { val value_buf_len = value.length - var parameters = new Parameters(1) - val parameter = new IntegerParameter() - parameter.add(parameters, value_buf_len, 0) + 
var parameters = new WasmOtoroshiParameters(1) + .pushInt(value_buf_len) - val raw_addr = plugin.call("opa_malloc", parameters, 1, "".getBytes()) + val raw_addr = plugin.call("opa_malloc", parameters, 1) if ( - LibExtism.INSTANCE.extism_memory_write_bytes( - plugin.getPointer(), - plugin.getIndex(), + plugin.writeBytes( value, value_buf_len, raw_addr.getValue(0).v.i32 ) == -1 ) { - throw new ExtismException("Cant' write in memory") - } + JsString("Can't write in memory").left + } else { + parameters = new WasmOtoroshiParameters(2) + .pushInts(raw_addr.getValue(0).v.i32, value_buf_len) + val parsed_addr = plugin.call( + "opa_json_parse", + parameters, + 1 + ) - parameters = new Parameters(2) - parameter.add(parameters, raw_addr.getValue(0).v.i32, 0) - parameter.add(parameters, value_buf_len, 1) - val parsed_addr = plugin.call( - "opa_json_parse", - parameters, - 1 - ) - - if (parsed_addr.getValue(0).v.i32 == 0) { - throw new ExtismException("failed to parse json value") + if (parsed_addr.getValue(0).v.i32 == 0) { + JsString("failed to parse json value").left + } else { + parsed_addr.getValue(0).v.i32.right + } } - - parsed_addr.getValue(0).v.i32 } } - def evaluate(plugin: Plugin, input: String): String = { - val entrypoint = 0 - - // TODO - read and load builtins functions by calling dumpJSON - - val data_addr = loadJSON(plugin, "{}".getBytes(StandardCharsets.UTF_8)) - - val parameter = new IntegerParameter() + def initialize(plugin: WasmOtoroshiInstance): Either[JsValue, (String, ResultsWrapper)] = { + loadJSON(plugin, "{}".getBytes(StandardCharsets.UTF_8)) + .flatMap(dataAddr => { + val base_heap_ptr = plugin.call( + "opa_heap_ptr_get", + new WasmOtoroshiParameters(0), + 1 + ) - val base_heap_ptr = plugin.call( - "opa_heap_ptr_get", - new Parameters(0), - 1 - ) + val data_heap_ptr = base_heap_ptr.getValue(0).v.i32 + ( + Json.obj("dataAddr" -> dataAddr, "baseHeapPtr" -> data_heap_ptr).stringify, + ResultsWrapper(new WasmOtoroshiResults(0)) + ).right + }) + } - val
data_heap_ptr = base_heap_ptr.getValue(0).v.i32 + def evaluate(plugin: WasmOtoroshiInstance, dataAddr: Int, baseHeapPtr: Int, input: String): Either[JsValue, (String, ResultsWrapper)] = { + val entrypoint = 0 + // TODO - read and load builtins functions by calling dumpJSON val input_len = input.getBytes(StandardCharsets.UTF_8).length - LibExtism.INSTANCE.extism_memory_write_bytes( - plugin.getPointer(), - plugin.getIndex(), + plugin.writeBytes( input.getBytes(StandardCharsets.UTF_8), input_len, - data_heap_ptr + baseHeapPtr ) - val heap_ptr = data_heap_ptr + input_len - val input_addr = data_heap_ptr + val heap_ptr = baseHeapPtr + input_len + val input_addr = baseHeapPtr - val ptr = new Parameters(7) - parameter.add(ptr, 0, 0) - parameter.add(ptr, entrypoint, 1) - parameter.add(ptr, data_addr, 2) - parameter.add(ptr, input_addr, 3) - parameter.add(ptr, input_len, 4) - parameter.add(ptr, heap_ptr, 5) - parameter.add(ptr, 0, 6) + val ptr = new WasmOtoroshiParameters(7) + .pushInts(0, entrypoint, dataAddr, input_addr, input_len, heap_ptr, 0) val ret = plugin.call("opa_eval", ptr, 1) - val memory = LibExtism.INSTANCE.extism_get_memory(plugin.getPointer(), plugin.getIndex(), "memory") + val memory = plugin.getMemory("memory") + + val offset: Int = ret.getValue(0).v.i32 + val arraySize: Int = 65356 - val mem: Array[Byte] = memory.getByteArray(ret.getValue(0).v.i32, 65356) + val mem: Array[Byte] = memory.getByteArray(offset, arraySize) val size: Int = lastValidByte(mem) - new String(java.util.Arrays.copyOf(mem, size), StandardCharsets.UTF_8) + (new String(java.util.Arrays.copyOf(mem, size), StandardCharsets.UTF_8), ResultsWrapper(new WasmOtoroshiResults(0))).right } def lastValidByte(arr: Array[Byte]): Int = { @@ -275,10 +268,10 @@ object OPA extends AwaitCapable { object LinearMemories { - private val memories: AtomicReference[Seq[LinearMemory]] = - new AtomicReference[Seq[LinearMemory]](Seq.empty[LinearMemory]) + private val memories:
AtomicReference[Seq[WasmOtoroshiLinearMemory]] = + new AtomicReference[Seq[WasmOtoroshiLinearMemory]](Seq.empty[WasmOtoroshiLinearMemory]) - def getMemories(config: WasmConfig): Array[LinearMemory] = { + def getMemories(config: WasmConfig): Array[WasmOtoroshiLinearMemory] = { if (config.opa) { if (memories.get.isEmpty) { memories.set( @@ -294,15 +287,15 @@ object LinearMemories { /* String dumpJSON() { - Results addr = plugin.call("builtins", new Parameters(0), 1); + Results addr = plugin.call("builtins", new WasmOtoroshiParameters(0), 1); - Parameters parameters = new Parameters(1); + Parameters parameters = new WasmOtoroshiParameters(1); IntegerParameter builder = new IntegerParameter(); builder.add(parameters, addr.getValue(0).v.i32, 0); Results rawAddr = plugin.call("opa_json_dump", parameters, 1); - Pointer memory = LibExtism.INSTANCE.extism_get_memory(plugin.getPointer(), plugin.getIndex(), "memory"); + Pointer memory = WasmBridge.INSTANCE.extism_get_memory(plugin.getPointer(), plugin.getIndex(), "memory"); byte[] mem = memory.getByteArray(rawAddr.getValue(0).v.i32, 65356); int size = lastValidByte(mem); diff --git a/otoroshi/app/wasm/proxywasm/api.scala b/otoroshi/app/wasm/proxywasm/api.scala index fafeb43089..5e9fb50fe3 100644 --- a/otoroshi/app/wasm/proxywasm/api.scala +++ b/otoroshi/app/wasm/proxywasm/api.scala @@ -2,12 +2,13 @@ package otoroshi.wasm.proxywasm import akka.util.ByteString import com.sun.jna.Pointer -import org.extism.sdk._ +import org.extism.sdk.wasmotoroshi._ import otoroshi.env.Env import otoroshi.next.plugins.api.NgPluginHttpResponse import otoroshi.utils.TypedMap import otoroshi.utils.http.RequestImplicits._ import otoroshi.utils.syntax.implicits._ +import otoroshi.wasm.WasmVm import play.api.libs.json.JsValue import play.api.mvc import play.api.mvc.RequestHeader @@ -15,13 +16,13 @@ import play.api.mvc.RequestHeader import java.util.concurrent.atomic.AtomicReference object VmData { - def empty(): VmData = VmData( - "", - Map.empty, - 
-1, - new AtomicReference[mvc.Result](null), - new AtomicReference[ByteString](null), - new AtomicReference[ByteString](null) + def empty(): VmData = VmData( + configuration = "", + properties = Map.empty, + tickPeriod = -1, + respRef = new AtomicReference[mvc.Result](null), + bodyInRef = new AtomicReference[ByteString](null), + bodyOutRef = new AtomicReference[ByteString](null) ) def withRules(rules: JsValue): VmData = VmData.empty().copy(configuration = rules.stringify) def from(request: RequestHeader, attrs: TypedMap)(implicit env: Env): VmData = { @@ -86,8 +87,8 @@ case class VmData( tickPeriod: Int = -1, respRef: AtomicReference[play.api.mvc.Result], bodyInRef: AtomicReference[ByteString], - bodyOutRef: AtomicReference[ByteString] -) extends HostUserData { + bodyOutRef: AtomicReference[ByteString], +) extends WasmOtoroshiHostUserData { def withRequest(request: RequestHeader, attrs: TypedMap)(implicit env: Env): VmData = { VmData .from(request, attrs) @@ -103,8 +104,8 @@ case class VmData( val newProps: Map[String, Array[Byte]] = properties ++ Map( "response.code" -> response.status.bytes, "response.code_details" -> "".bytes, - "response.flags" -> -1.bytes, - "response.grpc_status" -> -1.bytes, + "response.flags" -> (-1).bytes, + "response.grpc_status" -> (-1).bytes, ":status" -> response.status.toString.bytes //"response.size" -> , //"response.total_size" -> , @@ -129,14 +130,14 @@ case class VmData( trait Api { - def proxyLog(plugin: ExtismCurrentPlugin, logLevel: Int, messageData: Int, messageSize: Int): Result + def proxyLog(plugin: WasmOtoroshiInternal, logLevel: Int, messageData: Int, messageSize: Int): Result - def proxyResumeStream(plugin: ExtismCurrentPlugin, streamType: StreamType): Result + def proxyResumeStream(plugin: WasmOtoroshiInternal, streamType: StreamType): Result - def proxyCloseStream(plugin: ExtismCurrentPlugin, streamType: StreamType): Result + def proxyCloseStream(plugin: WasmOtoroshiInternal, streamType: StreamType): Result def 
proxySendHttpResponse( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, responseCode: Int, responseCodeDetailsData: Int, responseCodeDetailsSize: Int, @@ -144,17 +145,18 @@ trait Api { responseBodySize: Int, additionalHeadersMapData: Int, additionalHeadersSize: Int, - grpcStatus: Int + grpcStatus: Int, + vmData: VmData, ): Result - def proxyResumeHttpStream(plugin: ExtismCurrentPlugin, streamType: StreamType): Result + def proxyResumeHttpStream(plugin: WasmOtoroshiInternal, streamType: StreamType): Result - def proxyCloseHttpStream(plugin: ExtismCurrentPlugin, streamType: StreamType): Result + def proxyCloseHttpStream(plugin: WasmOtoroshiInternal, streamType: StreamType): Result - def getBuffer(plugin: ExtismCurrentPlugin, data: VmData, bufferType: BufferType): IoBuffer + def getBuffer(plugin: WasmOtoroshiInternal, data: VmData, bufferType: BufferType): IoBuffer def proxyGetBuffer( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, bufferType: Int, offset: Int, @@ -164,7 +166,7 @@ trait Api { ): Result def proxySetBuffer( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, bufferType: Int, offset: Int, @@ -173,17 +175,17 @@ trait Api { bufferSize: Int ): Result - def getMap(plugin: ExtismCurrentPlugin, data: VmData, mapType: MapType): Map[String, ByteString] + def getMap(plugin: WasmOtoroshiInternal, data: VmData, mapType: MapType): Map[String, ByteString] def copyMapIntoInstance( m: Map[String, String], - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, returnMapData: Int, returnMapSize: Int ): Unit def proxyGetHeaderMapPairs( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, mapType: Int, returnDataPtr: Int, @@ -191,7 +193,7 @@ trait Api { ): Int def proxyGetHeaderMapValue( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, mapType: Int, keyData: Int, @@ -201,7 +203,7 @@ trait Api { ): Result def proxyReplaceHeaderMapValue( - plugin: 
ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       data: VmData,
       mapType: Int,
       keyData: Int,
@@ -211,7 +213,7 @@ trait Api {
   ): Result

   def proxyOpenSharedKvstore(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       kvstoreNameData: Int,
       kvstoreNameSiz: Int,
       createIfNotExist: Int,
@@ -219,7 +221,7 @@ trait Api {
   ): Result

   def proxyGetSharedKvstoreKeyValues(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       kvstoreID: Int,
       keyData: Int,
       keySize: Int,
@@ -229,7 +231,7 @@ trait Api {
   ): Result

   def proxySetSharedKvstoreKeyValues(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       kvstoreID: Int,
       keyData: Int,
       keySize: Int,
@@ -239,7 +241,7 @@ trait Api {
   ): Result

   def proxyAddSharedKvstoreKeyValues(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       kvstoreID: Int,
       keyData: Int,
       keySize: Int,
@@ -249,17 +251,17 @@ trait Api {
   ): Result

   def proxyRemoveSharedKvstoreKey(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       kvstoreID: Int,
       keyData: Int,
       keySize: Int,
       cas: Int
   ): Result

-  def proxyDeleteSharedKvstore(plugin: ExtismCurrentPlugin, kvstoreID: Int): Result
+  def proxyDeleteSharedKvstore(plugin: WasmOtoroshiInternal, kvstoreID: Int): Result

   def proxyOpenSharedQueue(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       queueNameData: Int,
       queueNameSize: Int,
       createIfNotExist: Int,
@@ -267,38 +269,38 @@ trait Api {
   ): Result

   def proxyDequeueSharedQueueItem(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       queueID: Int,
       returnPayloadData: Int,
       returnPayloadSize: Int
   ): Result

-  def proxyEnqueueSharedQueueItem(plugin: ExtismCurrentPlugin, queueID: Int, payloadData: Int, payloadSize: Int): Result
+  def proxyEnqueueSharedQueueItem(plugin: WasmOtoroshiInternal, queueID: Int, payloadData: Int, payloadSize: Int): Result

-  def proxyDeleteSharedQueue(plugin: ExtismCurrentPlugin, queueID: Int): Result
+  def proxyDeleteSharedQueue(plugin: WasmOtoroshiInternal, queueID: Int): Result

-  def proxyCreateTimer(plugin: ExtismCurrentPlugin, period: Int, oneTime: Int, returnTimerID: Int): Result
+  def proxyCreateTimer(plugin: WasmOtoroshiInternal, period: Int, oneTime: Int, returnTimerID: Int): Result

-  def proxyDeleteTimer(plugin: ExtismCurrentPlugin, timerID: Int): Result
+  def proxyDeleteTimer(plugin: WasmOtoroshiInternal, timerID: Int): Result

   def proxyCreateMetric(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       metricType: MetricType,
       metricNameData: Int,
       metricNameSize: Int,
       returnMetricID: Int
   ): MetricType

-  def proxyGetMetricValue(plugin: ExtismCurrentPlugin, metricID: Int, returnValue: Int): Result
+  def proxyGetMetricValue(plugin: WasmOtoroshiInternal, metricID: Int, returnValue: Int): Result

-  def proxySetMetricValue(plugin: ExtismCurrentPlugin, metricID: Int, value: Int): Result
+  def proxySetMetricValue(plugin: WasmOtoroshiInternal, metricID: Int, value: Int): Result

-  def proxyIncrementMetricValue(plugin: ExtismCurrentPlugin, data: VmData, metricID: Int, offset: Long): Result
+  def proxyIncrementMetricValue(plugin: WasmOtoroshiInternal, data: VmData, metricID: Int, offset: Long): Result

-  def proxyDeleteMetric(plugin: ExtismCurrentPlugin, metricID: Int): Result
+  def proxyDeleteMetric(plugin: WasmOtoroshiInternal, metricID: Int): Result

   def proxyDefineMetric(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       metricType: Int,
       namePtr: Int,
       nameSize: Int,
@@ -306,7 +308,7 @@ trait Api {
   ): Result

   def proxyDispatchHttpCall(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       upstreamNameData: Int,
       upstreamNameSize: Int,
       headersMapData: Int,
@@ -320,7 +322,7 @@ trait Api {
   ): Result

   def proxyDispatchGrpcCall(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       upstreamNameData: Int,
       upstreamNameSize: Int,
       serviceNameData: Int,
@@ -336,7 +338,7 @@ trait Api {
   ): Result

   def proxyOpenGrpcStream(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       upstreamNameData: Int,
       upstreamNameSize: Int,
       serviceNameData: Int,
@@ -349,18 +351,18 @@ trait Api {
   ): Result

   def proxySendGrpcStreamMessage(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       calloutID: Int,
       grpcMessageData: Int,
       grpcMessageSize: Int
   ): Result

-  def proxyCancelGrpcCall(plugin: ExtismCurrentPlugin, calloutID: Int): Result
+  def proxyCancelGrpcCall(plugin: WasmOtoroshiInternal, calloutID: Int): Result

-  def proxyCloseGrpcCall(plugin: ExtismCurrentPlugin, calloutID: Int): Result
+  def proxyCloseGrpcCall(plugin: WasmOtoroshiInternal, calloutID: Int): Result

   def proxyCallCustomFunction(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       customFunctionID: Int,
       parametersData: Int,
       parametersSize: Int,
@@ -368,10 +370,10 @@ trait Api {
       returnResultsSize: Int
   ): Result

-  def copyIntoInstance(plugin: ExtismCurrentPlugin, memory: Pointer, value: IoBuffer, retPtr: Int, retSize: Int): Result
+  def copyIntoInstance(plugin: WasmOtoroshiInternal, memory: Pointer, value: IoBuffer, retPtr: Int, retSize: Int): Result

   def proxyGetProperty(
-      plugin: ExtismCurrentPlugin,
+      plugin: WasmOtoroshiInternal,
       data: VmData,
       keyPtr: Int,
       keySize: Int,
@@ -397,45 +399,45 @@ trait Api {

   def proxySetTickPeriodMilliseconds(data: VmData, period: Int): Status

-  def proxySetEffectiveContext(plugin: ExtismCurrentPlugin, contextID: Int): Status
+  def proxySetEffectiveContext(plugin: WasmOtoroshiInternal, contextID: Int): Status

-  def getPluginConfig(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer
+  def getPluginConfig(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer

-  def getHttpRequestBody(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer
+  def getHttpRequestBody(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer

-  def getHttpResponseBody(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer
+  def getHttpResponseBody(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer

-  def getDownStreamData(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer
+  def getDownStreamData(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer

-  def getUpstreamData(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer
+  def getUpstreamData(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer

-  def getHttpCalloutResponseBody(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer
+  def getHttpCalloutResponseBody(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer

-  def getVmConfig(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer
+  def getVmConfig(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer

   def getCustomBuffer(bufferType: BufferType): IoBuffer

-  def getHttpRequestHeader(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpRequestHeader(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getHttpRequestTrailer(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpRequestTrailer(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getHttpRequestMetadata(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpRequestMetadata(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getHttpResponseHeader(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpResponseHeader(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getHttpResponseTrailer(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpResponseTrailer(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getHttpResponseMetadata(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpResponseMetadata(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getHttpCallResponseHeaders(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpCallResponseHeaders(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getHttpCallResponseTrailer(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpCallResponseTrailer(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getHttpCallResponseMetadata(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString]
+  def getHttpCallResponseMetadata(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString]

-  def getCustomMap(plugin: ExtismCurrentPlugin, data: VmData, mapType: MapType): Map[String, ByteString]
+  def getCustomMap(plugin: WasmOtoroshiInternal, data: VmData, mapType: MapType): Map[String, ByteString]

-  def getMemory(plugin: ExtismCurrentPlugin, addr: Int, size: Int): Either[Error, (Pointer, ByteString)]
+  def getMemory(plugin: WasmOtoroshiInternal, addr: Int, size: Int): Either[Error, (Pointer, ByteString)]

-  def getMemory(plugin: ExtismCurrentPlugin): Either[Error, Pointer]
+  def getMemory(plugin: WasmOtoroshiInternal): Either[Error, Pointer]
 }
diff --git a/otoroshi/app/wasm/proxywasm/coraza.scala b/otoroshi/app/wasm/proxywasm/coraza.scala
index 7875a9a69e..d3d1e5328c 100644
--- a/otoroshi/app/wasm/proxywasm/coraza.scala
+++ b/otoroshi/app/wasm/proxywasm/coraza.scala
@@ -3,7 +3,7 @@ package otoroshi.wasm.proxywasm
 import akka.stream.Materializer
 import akka.util.ByteString
 import com.sksamuel.exts.concurrent.Futures.RichFuture
-import org.extism.sdk.parameters._
+import org.extism.sdk.wasmotoroshi._
 import org.joda.time.DateTime
 import otoroshi.api.{GenericResourceAccessApiWithState, Resource, ResourceVersion}
 import otoroshi.env.Env
@@ -14,12 +14,13 @@ import otoroshi.next.plugins.api._
 import otoroshi.security.IdGenerator
 import otoroshi.storage.{BasicStore, RedisLike, RedisLikeStore}
 import otoroshi.utils.{ReplaceAllWith, TypedMap}
-import otoroshi.utils.cache.types.LegitTrieMap
+import otoroshi.utils.cache.types.UnboundedTrieMap
 import otoroshi.utils.http.RequestImplicits.EnhancedRequestHeader
 import otoroshi.utils.syntax.implicits._
 import otoroshi.wasm._
 import play.api.libs.json._
 import play.api._
+import play.api.libs.typedmap.TypedKey
 import play.api.mvc.RequestHeader

 import java.util.concurrent.atomic._
@@ -55,8 +56,15 @@ object CorazaPlugin {
     |}""".stripMargin.parseJson
 }

+object CorazaPluginKeys {
+  val CorazaContextIdKey = TypedKey[Int]("otoroshi.next.plugins.CorazaContextId")
+  val CorazaWasmVmKey    = TypedKey[WasmVm]("otoroshi.next.plugins.CorazaWasmVm")
+}
+
 class CorazaPlugin(wasm: WasmConfig, val config: CorazaWafConfig, key: String, env: Env) {

+  WasmVmPool.logger.debug("new CorazaPlugin")
+
   private implicit val ev = env
   private implicit val ec = env.otoroshiExecutionContext
   private implicit val ma = env.otoroshiMaterializer
@@ -70,7 +78,7 @@ class CorazaPlugin(wasm: WasmConfig, val config: CorazaWafConfig, key: String, e
   private lazy val contextId = new AtomicInteger(0)
   private lazy val state =
     new ProxyWasmState(CorazaPlugin.rootContextIds.incrementAndGet(), contextId, Some((l, m) => logCallback(l, m)), env)
-  private lazy val functions = ProxyWasmFunctions.build(state)
+  private lazy val pool: WasmVmPool = new WasmVmPool(key, wasm.some, env)

   def logCallback(level: org.slf4j.event.Level, msg: String): Unit = {
     CorazaTrailEvent(level, msg).toAnalytics()
@@ -78,120 +86,145 @@ class CorazaPlugin(wasm: WasmConfig, val config: CorazaWafConfig, key: String, e

   def isStarted(): Boolean = started.get()

-  def callPluginWithoutResults(function: String, params: Parameters, data: VmData, attrs: TypedMap): Unit = {
-    otoroshi.wasm.WasmUtils
-      .rawExecute(
-        _config = wasm,
-        defaultFunctionName = function,
-        input = None,
-        parameters = params.some,
-        resultSize = None,
-        attrs = attrs.some,
-        ctx = Some(data),
-        addHostFunctions = functions
-      )(env)
-      .await(timeout)
-      .map(_._2.free())
+  def createFunctions(ref: AtomicReference[VmData]): Seq[WasmOtoroshiHostFunction[EnvUserData]] = {
+    ProxyWasmFunctions.build(state, ref)
+  }
+
+  def callPluginWithoutResults(
+      function: String,
+      params: WasmOtoroshiParameters,
+      data: VmData,
+      attrs: TypedMap,
+      shouldBeCallOnce: Boolean = false): Future[Either[JsValue, ResultsWrapper]] = {
+    attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey) match {
+      case None     =>
+        logger.error("no vm found in attrs")
+        Left(Json.obj("error" -> "no vm found in attrs")).vfuture
+      case Some(vm) => {
+        WasmUtils.traceHostVm(function + s" - vm: ${vm.index}")
+        vm.call(WasmFunctionParameters.NoResult(function, params), Some(data)).map { opt =>
+          opt.map { res =>
+            res._2.free()
+            res._2
+          }
+        }
+        .andThen {
+          case _ => vm.release()
+        }
+      }
+    }
   }

   def callPluginWithResults(
-      function: String,
-      params: Parameters,
-      results: Int,
-      data: VmData,
-      attrs: TypedMap
+      function: String,
+      params: WasmOtoroshiParameters,
+      results: Int,
+      data: VmData,
+      attrs: TypedMap,
+      shouldBeCallOnce: Boolean = false
   ): Future[ResultsWrapper] = {
-    otoroshi.wasm.WasmUtils
-      .rawExecute(
-        _config = wasm,
-        defaultFunctionName = function,
-        input = None,
-        parameters = params.some,
-        resultSize = results.some,
-        attrs = attrs.some,
-        ctx = Some(data),
-        addHostFunctions = functions
-      )
-      .flatMap {
-        case Left(err)           =>
-          logger.error(s"error while calling plugin: ${err}")
-          Future.failed(new RuntimeException(s"callPluginWithResults: ${err.stringify}"))
-        case Right((_, results)) => results.vfuture
+    attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey) match {
+      case None     =>
+        logger.error("no vm found in attrs")
+        Future.failed(new RuntimeException("no vm found in attrs"))
+      case Some(vm) => {
+        WasmUtils.traceHostVm(function + s" - vm: ${vm.index}")
+        vm.call(WasmFunctionParameters.BothParamsResults(function, params, results), Some(data))
+          .flatMap {
+            case Left(err)           =>
+              logger.error(s"error while calling plugin: ${err}")
+              Future.failed(new RuntimeException(s"callPluginWithResults: ${err.stringify}"))
+            case Right((_, results)) => results.vfuture
+          }
+          .andThen {
+            case _ => vm.release()
+          }
       }
+    }
   }

-  def proxyOnContexCreate(contextId: Int, rootContextId: Int, attrs: TypedMap, rootData: VmData): Unit = {
-    val prs = new Parameters(2)
-    new IntegerParameter().addAll(prs, contextId, rootContextId)
-    callPluginWithoutResults("proxy_on_context_create", prs, rootData, attrs)
+  def proxyOnContexCreate(contextId: Int, rootContextId: Int, attrs: TypedMap, rootData: VmData): Future[Unit] = {
+    val prs = new WasmOtoroshiParameters(2)
+      .pushInts(contextId, rootContextId)
+    callPluginWithoutResults("proxy_on_context_create", prs, rootData, attrs).map(_ => ())
+    // TODO - just try to reset context for each request without call proxyOnConfigure
   }

-  def proxyOnVmStart(attrs: TypedMap, rootData: VmData): Boolean = {
-    val prs = new Parameters(2)
-    new IntegerParameter().addAll(prs, 0, vmConfigurationSize)
-    val proxyOnVmStartAction = callPluginWithResults("proxy_on_vm_start", prs, 1, rootData, attrs).await(timeout)
-    val res = proxyOnVmStartAction.results.getValues()(0).v.i32 != 0
-    proxyOnVmStartAction.free()
-    res
+  def proxyOnVmStart(attrs: TypedMap, rootData: VmData): Future[Boolean] = {
+    val prs = new WasmOtoroshiParameters(2)
+      .pushInts(0, vmConfigurationSize)
+    callPluginWithResults("proxy_on_vm_start", prs, 1, rootData, attrs, shouldBeCallOnce = true).map { proxyOnVmStartAction =>
+      val res = proxyOnVmStartAction.results.getValues()(0).v.i32 != 0
+      proxyOnVmStartAction.free()
+      res
+    }
   }

-  def proxyOnConfigure(rootContextId: Int, attrs: TypedMap, rootData: VmData): Boolean = {
-    val prs = new Parameters(2)
-    new IntegerParameter().addAll(prs, rootContextId, pluginConfigurationSize)
-    val proxyOnConfigureAction = callPluginWithResults("proxy_on_configure", prs, 1, rootData, attrs).await(timeout)
-    val res = proxyOnConfigureAction.results.getValues()(0).v.i32 != 0
-    proxyOnConfigureAction.free()
-    res
+  def proxyOnConfigure(rootContextId: Int, attrs: TypedMap, rootData: VmData): Future[Boolean] = {
+    val prs = new WasmOtoroshiParameters(2)
+      .pushInts(rootContextId, pluginConfigurationSize)
+    callPluginWithResults("proxy_on_configure", prs, 1, rootData, attrs, shouldBeCallOnce = true).map { proxyOnConfigureAction =>
+      val res = proxyOnConfigureAction.results.getValues()(0).v.i32 != 0
+      proxyOnConfigureAction.free()
+      res
+    }
   }

-  def proxyOnDone(rootContextId: Int, attrs: TypedMap): Boolean = {
-    val prs = new Parameters(1)
-    new IntegerParameter().addAll(prs, rootContextId)
+  def proxyOnDone(rootContextId: Int, attrs: TypedMap): Future[Boolean] = {
+    val prs = new WasmOtoroshiParameters(1).pushInt(rootContextId)
     val rootData = VmData.empty()
-    val proxyOnConfigureAction = callPluginWithResults("proxy_on_done", prs, 1, rootData, attrs).await(timeout)
-    val res = proxyOnConfigureAction.results.getValues()(0).v.i32 != 0
-    proxyOnConfigureAction.free()
-    res
+    callPluginWithResults("proxy_on_done", prs, 1, rootData, attrs).map { proxyOnConfigureAction =>
+      val res = proxyOnConfigureAction.results.getValues()(0).v.i32 != 0
+      proxyOnConfigureAction.free()
+      res
+    }
   }

-  def proxyOnDelete(rootContextId: Int, attrs: TypedMap): Unit = {
-    val prs = new Parameters(1)
-    new IntegerParameter().addAll(prs, rootContextId)
+  def proxyOnDelete(rootContextId: Int, attrs: TypedMap): Future[Unit] = {
+    val prs = new WasmOtoroshiParameters(1).pushInt(rootContextId)
     val rootData = VmData.empty()
-    callPluginWithoutResults("proxy_on_done", prs, rootData, attrs)
+    callPluginWithoutResults("proxy_on_delete", prs, rootData, attrs).map(_ => ())
+  }
+
+  def proxyStart(attrs: TypedMap, rootData: VmData): Future[ResultsWrapper] = {
+    callPluginWithoutResults("_start", new WasmOtoroshiParameters(0), rootData, attrs, shouldBeCallOnce = true).map { res =>
+      res.right.get
+    }
   }

-  def proxyStart(attrs: TypedMap, rootData: VmData): Unit = {
-    callPluginWithoutResults("_start", new Parameters(0), rootData, attrs)
+  def proxyCheckABIVersion(attrs: TypedMap, rootData: VmData): Future[Unit] = {
+    callPluginWithoutResults("proxy_abi_version_0_2_0", new WasmOtoroshiParameters(0), rootData, attrs, shouldBeCallOnce = true).map(_ => ())
   }

-  def proxyCheckABIVersion(attrs: TypedMap, rootData: VmData): Unit = {
-    callPluginWithoutResults("proxy_abi_version_0_2_0", new Parameters(0), rootData, attrs)
+  def reportError(result: Result, vm: WasmVm, from: String): Unit = {
+    logger.error(s"[${vm.index}] from: $from - error: ${result.value} - ${vm.calls} / ${vm.current}")
   }

   def proxyOnRequestHeaders(
       contextId: Int,
       request: RequestHeader,
       attrs: TypedMap
-  ): Either[play.api.mvc.Result, Unit] = {
+  ): Future[Either[play.api.mvc.Result, Unit]] = {
+    val vm = attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey).get
     val data = VmData.empty().withRequest(request, attrs)(env)
     val endOfStream = 1
     val sizeHeaders = 0
-    val prs = new Parameters(3)
-    new IntegerParameter().addAll(prs, contextId, sizeHeaders, endOfStream)
-    val requestHeadersAction = callPluginWithResults("proxy_on_request_headers", prs, 1, data, attrs).await(timeout)
-    val result = Result.valueToType(requestHeadersAction.results.getValues()(0).v.i32)
-    requestHeadersAction.free()
-    if (result != Result.ResultOk || data.httpResponse.isDefined) {
-      data.httpResponse match {
-        case None           =>
-          Left(
-            play.api.mvc.Results.InternalServerError(Json.obj("error" -> "no http response in context"))
-          ) // TODO: not sure if okay
-        case Some(response) => Left(response)
+    val prs = new WasmOtoroshiParameters(3).pushInts(contextId, sizeHeaders, endOfStream)
+    callPluginWithResults("proxy_on_request_headers", prs, 1, data, attrs).map { requestHeadersAction =>
+      val result = Result.valueToType(requestHeadersAction.results.getValues()(0).v.i32)
+      requestHeadersAction.free()
+      if (result != Result.ResultOk || data.httpResponse.isDefined) {
+        data.httpResponse match {
+          case None           =>
+            reportError(result, vm, "proxyOnRequestHeaders")
+            Left(
+              play.api.mvc.Results.InternalServerError(Json.obj("error" -> s"no http response in context 1: ${result.value}"))
+            ) // TODO: not sure if okay
+          case Some(response) => Left(response)
+        }
+      } else {
+        Right(())
       }
-    } else {
-      Right(())
     }
   }

@@ -201,26 +234,28 @@ class CorazaPlugin(wasm: WasmConfig, val config: CorazaWafConfig, key: String, e
       req: NgPluginHttpRequest,
       body_bytes: ByteString,
       attrs: TypedMap
-  ): Either[play.api.mvc.Result, Unit] = {
+  ): Future[Either[play.api.mvc.Result, Unit]] = {
+    val vm = attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey).get
     val data = VmData.empty().withRequest(request, attrs)(env)
     data.bodyInRef.set(body_bytes)
     val endOfStream = 1
     val sizeBody = body_bytes.size.bytes.length
-    val prs = new Parameters(3)
-    new IntegerParameter().addAll(prs, contextId, sizeBody, endOfStream)
-    val requestHeadersAction = callPluginWithResults("proxy_on_request_body", prs, 1, data, attrs).await(timeout)
-    val result = Result.valueToType(requestHeadersAction.results.getValues()(0).v.i32)
-    requestHeadersAction.free()
-    if (result != Result.ResultOk || data.httpResponse.isDefined) {
-      data.httpResponse match {
-        case None           =>
-          Left(
-            play.api.mvc.Results.InternalServerError(Json.obj("error" -> "no http response in context"))
-          ) // TODO: not sure if okay
-        case Some(response) => Left(response)
+    val prs = new WasmOtoroshiParameters(3).pushInts(contextId, sizeBody, endOfStream)
+    callPluginWithResults("proxy_on_request_body", prs, 1, data, attrs).map { requestHeadersAction =>
+      val result = Result.valueToType(requestHeadersAction.results.getValues()(0).v.i32)
+      requestHeadersAction.free()
+      if (result != Result.ResultOk || data.httpResponse.isDefined) {
+        data.httpResponse match {
+          case None           =>
+            reportError(result, vm, "proxyOnRequestBody")
+            Left(
+              play.api.mvc.Results.InternalServerError(Json.obj("error" -> s"no http response in context 2: ${result.value}"))
+            ) // TODO: not sure if okay
+          case Some(response) => Left(response)
+        }
+      } else {
+        Right(())
       }
-    } else {
-      Right(())
     }
   }

@@ -228,25 +263,27 @@ class CorazaPlugin(wasm: WasmConfig, val config: CorazaWafConfig, key: String, e
       contextId: Int,
       response: NgPluginHttpResponse,
       attrs: TypedMap
-  ): Either[play.api.mvc.Result, Unit] = {
+  ): Future[Either[play.api.mvc.Result, Unit]] = {
+    val vm = attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey).get
     val data = VmData.empty().withResponse(response, attrs)(env)
     val endOfStream = 1
     val sizeHeaders = 0
-    val prs = new Parameters(3)
-    new IntegerParameter().addAll(prs, contextId, sizeHeaders, endOfStream)
-    val requestHeadersAction = callPluginWithResults("proxy_on_response_headers", prs, 1, data, attrs).await(timeout)
-    val result = Result.valueToType(requestHeadersAction.results.getValues()(0).v.i32)
-    requestHeadersAction.free()
-    if (result != Result.ResultOk || data.httpResponse.isDefined) {
-      data.httpResponse match {
-        case None           =>
-          Left(
-            play.api.mvc.Results.InternalServerError(Json.obj("error" -> "no http response in context"))
-          ) // TODO: not sure if okay
-        case Some(response) => Left(response)
+    val prs = new WasmOtoroshiParameters(3).pushInts(contextId, sizeHeaders, endOfStream)
+    callPluginWithResults("proxy_on_response_headers", prs, 1, data, attrs).map { requestHeadersAction =>
+      val result = Result.valueToType(requestHeadersAction.results.getValues()(0).v.i32)
+      requestHeadersAction.free()
+      if (result != Result.ResultOk || data.httpResponse.isDefined) {
+        data.httpResponse match {
+          case None           =>
+            reportError(result, vm, "proxyOnResponseHeaders")
+            Left(
+              play.api.mvc.Results.InternalServerError(Json.obj("error" -> s"no http response in context 3: ${result.value}"))
+            ) // TODO: not sure if okay
+          case Some(response) => Left(response)
+        }
+      } else {
+        Right(())
       }
-    } else {
-      Right(())
     }
   }

@@ -255,67 +292,71 @@ class CorazaPlugin(wasm: WasmConfig, val config: CorazaWafConfig, key: String, e
       response: NgPluginHttpResponse,
       body_bytes: ByteString,
       attrs: TypedMap
-  ): Either[play.api.mvc.Result, Unit] = {
+  ): Future[Either[play.api.mvc.Result, Unit]] = {
+    val vm = attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey).get
     val data = VmData.empty().withResponse(response, attrs)(env)
     data.bodyInRef.set(body_bytes)
     val endOfStream = 1
     val sizeBody = body_bytes.size.bytes.length
-    val prs = new Parameters(3)
-    new IntegerParameter().addAll(prs, contextId, sizeBody, endOfStream)
-    val requestHeadersAction = callPluginWithResults("proxy_on_response_body", prs, 1, data, attrs).await(timeout)
-    val result = Result.valueToType(requestHeadersAction.results.getValues()(0).v.i32)
-    requestHeadersAction.free()
-    if (result != Result.ResultOk || data.httpResponse.isDefined) {
-      data.httpResponse match {
-        case None           =>
-          Left(
-            play.api.mvc.Results.InternalServerError(Json.obj("error" -> "no http response in context"))
-          ) // TODO: not sure if okay
-        case Some(response) => Left(response)
+    val prs = new WasmOtoroshiParameters(3).pushInts(contextId, sizeBody, endOfStream)
+    callPluginWithResults("proxy_on_response_body", prs, 1, data, attrs).map { requestHeadersAction =>
+      val result = Result.valueToType(requestHeadersAction.results.getValues()(0).v.i32)
+      requestHeadersAction.free()
+      if (result != Result.ResultOk || data.httpResponse.isDefined) {
+        data.httpResponse match {
+          case None           =>
+            reportError(result, vm, "proxyOnResponseBody")
+            Left(
+              play.api.mvc.Results.InternalServerError(Json.obj("error" -> s"no http response in context 4: ${result.value}"))
            ) // TODO: not sure if okay
+          case Some(response) => Left(response)
+        }
+      } else {
+        Right(())
       }
-    } else {
-      Right(())
     }
   }

-  def start(attrs: TypedMap): Unit = {
-    val data = VmData.withRules(rules)
-    proxyStart(attrs, data)
-    proxyCheckABIVersion(attrs, data)
-    // according to ABI, we should create a root context id before any operations
-    proxyOnContexCreate(state.rootContextId, 0, attrs, data)
-    if (proxyOnVmStart(attrs, data)) {
-      if (proxyOnConfigure(state.rootContextId, attrs, data)) {
-        started.set(true)
-        //proxyOnContexCreate(state.contextId.get(), state.rootContextId, attrs)
-      } else {
-        logger.error("failed to configure coraza")
+  def start(attrs: TypedMap): Future[Unit] = {
+    pool.getPooledVm(WasmVmInitOptions(false, true, createFunctions)).flatMap { vm =>
+      val data = VmData.withRules(rules)
+      attrs.put(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey -> vm)
+      vm.finitialize {
+        proxyStart(attrs, data).flatMap { _ =>
+          proxyCheckABIVersion(attrs, data).flatMap { _ =>
+            // according to ABI, we should create a root context id before any operations
+            proxyOnContexCreate(state.rootContextId, 0, attrs, data).flatMap { _ =>
+              proxyOnVmStart(attrs, data).flatMap {
+                case true => proxyOnConfigure(state.rootContextId, attrs, data).map {
+                  case true => started.set(true)
+                  case _    => logger.error("failed to configure coraza")
+                }
+                case _    => logger.error("failed to start coraza vm").vfuture
+              }
+            }
+          }
+        }
       }
-    } else {
-      logger.error("failed to start coraza vm")
     }
   }

-  def stop(attrs: TypedMap): Unit = {
-    otoroshi.wasm.WasmUtils.pluginCache.get(s"http://${key}-0").foreach { slot =>
-      slot.close(WasmVmLifetime.Forever)
-      otoroshi.wasm.WasmUtils.pluginCache.remove(s"http://${key}-0")
-    }
+  def stop(attrs: TypedMap): Future[Unit] = {
+    ().vfuture
   }
-
-  def runRequestPath(request: RequestHeader, attrs: TypedMap): NgAccess = {
-    contextId.incrementAndGet()
+  // TODO - need to save VmData in attrs to get it from the start function and reuse the same slotId
+  def runRequestPath(request: RequestHeader, attrs: TypedMap): Future[NgAccess] = {
+    val contId = contextId.incrementAndGet()
+    attrs.put(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaContextIdKey -> contId)
+    val instance = attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey).get
     val data = VmData.withRules(rules)
-    proxyOnContexCreate(state.contextId.get(), state.rootContextId, attrs, data)
-    val res = for {
-      _ <- proxyOnRequestHeaders(state.contextId.get(), request, attrs)
-    } yield ()
-    res match {
-      case Left(errRes) =>
-        proxyOnDone(state.contextId.get(), attrs)
-        proxyOnDelete(state.contextId.get(), attrs)
-        NgAccess.NgDenied(errRes)
-      case Right(_)     => NgAccess.NgAllowed
+    proxyOnContexCreate(contId, state.rootContextId, attrs, data).flatMap { _ =>
+      proxyOnRequestHeaders(contId, request, attrs).map {
+        case Left(errRes) =>
+          proxyOnDone(contId, attrs)
+          proxyOnDelete(contId, attrs)
+          NgAccess.NgDenied(errRes)
+        case Right(_)     => NgAccess.NgAllowed
+      }
     }
   }

@@ -324,19 +365,17 @@ class CorazaPlugin(wasm: WasmConfig, val config: CorazaWafConfig, key: String, e
       req: NgPluginHttpRequest,
       body_bytes: Option[ByteString],
       attrs: TypedMap
-  ): Either[mvc.Result, Unit] = {
-    val res = for {
-      _ <- if (body_bytes.isDefined) proxyOnRequestBody(state.contextId.get(), request, req, body_bytes.get, attrs)
-           else Right(())
-      // proxy_on_http_request_trailers
-      // proxy_on_http_request_metadata : H2 only
-    } yield ()
-    res match {
+  ): Future[Either[mvc.Result, Unit]] = {
+    val contId = attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaContextIdKey).get
+    val f = if (body_bytes.isDefined) proxyOnRequestBody(contId, request, req, body_bytes.get, attrs) else Right(()).vfuture
+    // proxy_on_http_request_trailers
+    // proxy_on_http_request_metadata : H2 only
+    f.map {
       case Left(errRes) =>
-        proxyOnDone(state.contextId.get(), attrs)
-        proxyOnDelete(state.contextId.get(), attrs)
+        proxyOnDone(contId, attrs)
+        proxyOnDelete(contId, attrs)
         Left(errRes)
-      case Right(_)     => Right(())
+      case Right(_)     => Right(())
     }
   }

@@ -344,17 +383,19 @@ class CorazaPlugin(wasm: WasmConfig, val config: CorazaWafConfig, key: String, e
       response: NgPluginHttpResponse,
       body_bytes: Option[ByteString],
       attrs: TypedMap
-  ): Either[mvc.Result, Unit] = {
-    val res = for {
-      _ <- proxyOnResponseHeaders(state.contextId.get(), response, attrs)
-      _ <- if (body_bytes.isDefined) proxyOnResponseBody(state.contextId.get(), response, body_bytes.get, attrs)
-           else Right(())
-      // proxy_on_http_response_trailers
-      // proxy_on_http_response_metadata : H2 only
-    } yield ()
-    proxyOnDone(state.contextId.get(), attrs)
-    proxyOnDelete(state.contextId.get(), attrs)
-    res
+  ): Future[Either[mvc.Result, Unit]] = {
+    val contId = attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaContextIdKey).get
+    proxyOnResponseHeaders(contId, response, attrs).flatMap {
+      case Left(e)  => Left(e).vfuture
+      case Right(_) => {
+        val res = if (body_bytes.isDefined) proxyOnResponseBody(contId, response, body_bytes.get, attrs) else Right(()).vfuture
+        // proxy_on_http_response_trailers
+        // proxy_on_http_response_metadata : H2 only
+        proxyOnDone(contId, attrs)
+        proxyOnDelete(contId, attrs)
+        res
+      }
+    }
   }
 }

@@ -378,9 +419,7 @@ object NgCorazaWAFConfig {

 class NgCorazaWAF extends NgAccessValidator with NgRequestTransformer {

-  // TODO: avoid blocking calls for wasm calls
-  // TODO: add job to preinstantiate plugin
-  // TODO: add coraza.wasm build in the release process
+  WasmVmPool.logger.debug("new NgCorazaWAF")

   override def steps: Seq[NgStep]                = Seq(NgStep.ValidateAccess, NgStep.TransformRequest, NgStep.TransformResponse)
   override def categories: Seq[NgPluginCategory] = Seq(NgPluginCategory.AccessControl)
@@ -399,17 +438,18 @@ class NgCorazaWAF extends NgAccessValidator with NgRequestTransformer {
   override def transformsResponse: Boolean = true
   override def transformsError: Boolean    = false

-  private val plugins = new LegitTrieMap[String, CorazaPlugin]()
+  private val plugins = new UnboundedTrieMap[String, CorazaPlugin]()

-  private def getPlugin(ref: String, attrs: TypedMap)(implicit env: Env): CorazaPlugin = {
+  private def getPlugin(ref: String, attrs: TypedMap)(implicit env: Env): CorazaPlugin = plugins.synchronized {
     val config     = env.adminExtensions.extension[CorazaWafAdminExtension].get.states.config(ref).get
     val configHash = config.json.stringify.sha512
     val key        = s"ref=${ref}&hash=${configHash}"
-    //println(s"get plugin: ${key}")
-    val plugin = plugins.getOrUpdate(key) {
-      //println(s"create plugin: ${key}")
+
+    val plugin = if (plugins.contains(key)) {
+      plugins(key)
+    } else {
       val url = s"http://127.0.0.1:${env.httpPort}/__otoroshi_assets/wasm/coraza-proxy-wasm-v0.1.0.wasm?$key"
-      new CorazaPlugin(
+      val p   = new CorazaPlugin(
         WasmConfig(
           source = WasmSource(
             kind = WasmSourceKind.Http,
@@ -417,12 +457,22 @@ class NgCorazaWAF extends NgAccessValidator with NgRequestTransformer {
           ),
           memoryPages = 1000,
           functionName = None,
-          wasi = true
+          wasi = true,
+          // lifetime = WasmVmLifetime.Forever,
+          instances = config.poolCapacity,
+          killOptions = WasmVmKillOptions(
+            maxCalls = 2000,
+            maxMemoryUsage = 0.9,
+            maxAvgCallDuration = 1.day,
+            maxUnusedDuration = 5.minutes,
+          )
         ),
         config,
         url,
         env
       )
+      plugins.put(key, p)
+      p
     }
     val oldVersionsKeys = plugins.keySet.filter(_.startsWith(s"ref=${ref}&hash=")).filterNot(_ == key)
     val oldVersions     = oldVersionsKeys.flatMap(plugins.get)
@@ -438,22 +488,20 @@ class NgCorazaWAF extends NgAccessValidator with NgRequestTransformer {
   )(implicit env: Env, ec: ExecutionContext, mat: Materializer): Future[Unit] = {
     val config = ctx.cachedConfig(internalName)(NgCorazaWAFConfig.format).getOrElse(NgCorazaWAFConfig("none"))
     val plugin = getPlugin(config.ref, ctx.attrs)
-    if (!plugin.isStarted()) {
-      plugin.start(ctx.attrs)
-    }
-    ().vfuture
+    plugin.start(ctx.attrs)
   }

   override def afterRequest(
       ctx: NgAfterRequestContext
   )(implicit env: Env, ec: ExecutionContext, mat: Materializer): Future[Unit] = {
+    ctx.attrs.get(otoroshi.wasm.proxywasm.CorazaPluginKeys.CorazaWasmVmKey).foreach(_.release())
     ().vfuture
   }

   override def access(ctx: NgAccessContext)(implicit env: Env, ec: ExecutionContext): Future[NgAccess] = {
     val config = ctx.cachedConfig(internalName)(NgCorazaWAFConfig.format).getOrElse(NgCorazaWAFConfig("none"))
     val plugin = getPlugin(config.ref, ctx.attrs)
-    plugin.runRequestPath(ctx.request, ctx.attrs).vfuture
+    plugin.runRequestPath(ctx.request, ctx.attrs)
   }

   override def transformRequest(
@@ -468,7 +516,7 @@ class NgCorazaWAF extends NgAccessValidator with NgRequestTransformer {
       else {
        ctx.otoroshiRequest.body.runFold(ByteString.empty)(_ ++ _).map(_.some)
      }
-    bytesf.map { bytes =>
+    bytesf.flatMap { bytes =>
       val req =
         if (plugin.config.inspectBody && hasBody) ctx.otoroshiRequest.copy(body = bytes.get.chunks(16 * 1024))
         else ctx.otoroshiRequest
@@ -479,7 +527,10 @@ class NgCorazaWAF extends NgAccessValidator with NgRequestTransformer {
           bytes,
           ctx.attrs
         )
-        .map(_ => req)
+        .map {
+          case Left(result) => Left(result)
+          case Right(_)     => Right(req)
+        }
     }
   }

@@ -491,7 +542,7 @@ class NgCorazaWAF extends NgAccessValidator with NgRequestTransformer {
     val bytesf: Future[Option[ByteString]] =
       if (!plugin.config.inspectBody) None.vfuture
       else ctx.otoroshiResponse.body.runFold(ByteString.empty)(_ ++ _).map(_.some)
-    bytesf.map { bytes =>
+    bytesf.flatMap { bytes =>
       val res =
         if (plugin.config.inspectBody) ctx.otoroshiResponse.copy(body = bytes.get.chunks(16 * 1024))
         else ctx.otoroshiResponse
@@ -501,7 +552,10 @@ class NgCorazaWAF extends NgAccessValidator with NgRequestTransformer {
           bytes,
           ctx.attrs
         )
-        .map(_ => res)
+        .map {
+          case Left(result) => Left(result)
+          case Right(_)     => Right(res)
+        }
     }
   }
 }

@@ -514,7 +568,8 @@ case class CorazaWafConfig(
     tags: Seq[String],
     metadata: Map[String, String],
     inspectBody: Boolean,
-    config: JsObject
+    config: JsObject,
+    poolCapacity: Int,
 ) extends EntityLocationSupport {
   override def internalId: String = id
   override def json: JsValue      = CorazaWafConfig.format.writes(this)
@@ -533,7 +588,8 @@ object CorazaWafConfig {
     metadata = Map.empty,
     tags = Seq.empty,
     config = CorazaPlugin.corazaDefaultRules.asObject,
-    inspectBody = true
+    inspectBody = true,
+    poolCapacity = 2,
   )
   val format = new Format[CorazaWafConfig] {
     override def writes(o: CorazaWafConfig): JsValue = o.location.jsonWithKey ++ Json.obj(
@@ -543,7 +599,8 @@
       "metadata"     -> o.metadata,
       "tags"         -> JsArray(o.tags.map(JsString.apply)),
       "config"       -> o.config,
-      "inspect_body" -> o.inspectBody
+      "inspect_body"  -> o.inspectBody,
+      "pool_capacity" -> o.poolCapacity,
     )
     override def reads(json: JsValue): JsResult[CorazaWafConfig] = Try {
       CorazaWafConfig(
@@ -554,7 +611,8 @@
         metadata = (json \ "metadata").asOpt[Map[String, String]].getOrElse(Map.empty),
         tags = (json \ "tags").asOpt[Seq[String]].getOrElse(Seq.empty[String]),
         config = (json \ "config").asOpt[JsObject].getOrElse(Json.obj()),
-        inspectBody = (json \ "inspect_body").asOpt[Boolean].getOrElse(true)
+        inspectBody = (json \ "inspect_body").asOpt[Boolean].getOrElse(true),
+        poolCapacity = (json \ "pool_capacity").asOpt[Int].getOrElse(2),
       )
     } match {
       case Failure(ex) => JsError(ex.getMessage)
@@ -581,7 +639,7 @@ class CorazaWafConfigAdminExtensionDatastores(env: Env, extensionId: AdminExtens

 class CorazaWafConfigAdminExtensionState(env: Env) {

-  private val configs = new LegitTrieMap[String, CorazaWafConfig]()
+  private val configs = new UnboundedTrieMap[String, CorazaWafConfig]()

   def config(id: String): Option[CorazaWafConfig] = configs.get(id)
   def allConfigs(): Seq[CorazaWafConfig]          = configs.values.toSeq
diff --git a/otoroshi/app/wasm/proxywasm/functions.scala b/otoroshi/app/wasm/proxywasm/functions.scala
index e0e716be35..ea586970d3 100644
--- a/otoroshi/app/wasm/proxywasm/functions.scala
+++ b/otoroshi/app/wasm/proxywasm/functions.scala
@@ -1,46 +1,60 @@
 package otoroshi.wasm.proxywasm

 import akka.stream.Materializer
-import org.extism.sdk._
+import org.extism.sdk.wasmotoroshi._
 import otoroshi.env.Env
 import otoroshi.wasm._

 import java.util.Optional
+import java.util.concurrent.atomic.AtomicReference
 import scala.concurrent.ExecutionContext

 object ProxyWasmFunctions {

-  private def getCurrentVmData(): VmData = {
-    WasmContextSlot.getCurrentContext() match {
-      case Some(data: VmData) => data
-      case _                  => throw new RuntimeException("missing vm data")
-    }
-  }
+  //private def getCurrentVmData(): VmData = {
+  //  WasmContextSlot.getCurrentContext() match {
+  //    case Some(data: VmData) => data
+  //    case _                  =>
+  //      println("missing vm data")
+  //      new RuntimeException("missing vm data").printStackTrace()
+  //      throw new RuntimeException("missing vm data")
+  //  }
+  //}

   def build(
-      state: ProxyWasmState
-  )(implicit ec: ExecutionContext, env: Env, mat: Materializer): Seq[HostFunction[EnvUserData]] = {
+      state: ProxyWasmState,
+      vmDataRef: AtomicReference[VmData],
+  )(implicit ec: ExecutionContext, env: Env, mat: Materializer): Seq[WasmOtoroshiHostFunction[EnvUserData]] = {
+    def getCurrentVmData(): VmData = {
+      Option(vmDataRef.get()) match {
+        case Some(data: VmData) => data
+        case _                  =>
+          println("missing vm data")
+          new RuntimeException("missing vm data").printStackTrace()
+          throw new RuntimeException("missing vm data")
+      }
+    }
     Seq(
-      new HostFunction[EnvUserData](
+      new WasmOtoroshiHostFunction[EnvUserData](
         "proxy_log",
         parameters(3),
         parameters(1),
         (
-            plugin: ExtismCurrentPlugin,
-            params: Array[LibExtism.ExtismVal],
-            returns: Array[LibExtism.ExtismVal],
+            plugin: WasmOtoroshiInternal,
+            params: Array[WasmBridge.ExtismVal],
+            returns: Array[WasmBridge.ExtismVal],
             data: Optional[EnvUserData]
         ) => state.proxyLog(plugin, params(0).v.i32, params(1).v.i32, params(2).v.i32),
         Optional.empty[EnvUserData]()
       ),
-      new HostFunction[EnvUserData](
+      new WasmOtoroshiHostFunction[EnvUserData](
         "proxy_get_buffer_bytes",
         parameters(5),
         parameters(1),
         (
-            plugin: ExtismCurrentPlugin,
-            params: Array[LibExtism.ExtismVal],
-            returns: Array[LibExtism.ExtismVal],
+            plugin: WasmOtoroshiInternal,
+            params: Array[WasmBridge.ExtismVal],
+            returns: Array[WasmBridge.ExtismVal],
             data: Optional[EnvUserData]
         ) =>
           state.proxyGetBuffer(
@@ -54,39 +68,39 @@ object ProxyWasmFunctions {
         ),
         Optional.empty[EnvUserData]()
       ),
-      new HostFunction[EnvUserData](
+      new WasmOtoroshiHostFunction[EnvUserData](
         "proxy_set_effective_context",
         parameters(1),
         parameters(1),
         (
-            plugin: ExtismCurrentPlugin,
-            params: Array[LibExtism.ExtismVal],
-            returns: Array[LibExtism.ExtismVal],
+            plugin: WasmOtoroshiInternal,
+            params: Array[WasmBridge.ExtismVal],
+            returns: Array[WasmBridge.ExtismVal],
             data: Optional[EnvUserData]
         ) => state.proxySetEffectiveContext(plugin, params(0).v.i32),
         Optional.empty[EnvUserData]()
       ),
-      new HostFunction[EnvUserData](
+      new WasmOtoroshiHostFunction[EnvUserData](
         "proxy_get_header_map_pairs",
         parameters(3),
         parameters(1),
         (
-            plugin: ExtismCurrentPlugin,
-            params: Array[LibExtism.ExtismVal],
-            returns: Array[LibExtism.ExtismVal],
+            plugin: WasmOtoroshiInternal,
+            params: Array[WasmBridge.ExtismVal],
+            returns: Array[WasmBridge.ExtismVal],
             data: Optional[EnvUserData]
         ) => state.proxyGetHeaderMapPairs(plugin, getCurrentVmData(), params(0).v.i32, params(1).v.i32, params(2).v.i32),
         Optional.empty[EnvUserData]()
       ),
-      new HostFunction[EnvUserData](
+      new WasmOtoroshiHostFunction[EnvUserData](
         "proxy_set_buffer_bytes",
         parameters(5),
         parameters(1),
         (
-            plugin: ExtismCurrentPlugin,
-            params: Array[LibExtism.ExtismVal],
-            returns: Array[LibExtism.ExtismVal],
+            plugin: WasmOtoroshiInternal,
+            params: Array[WasmBridge.ExtismVal],
+            returns: Array[WasmBridge.ExtismVal],
             data: Optional[EnvUserData]
         ) =>
           state.proxySetBuffer(
@@ -100,14 +114,14 @@ object ProxyWasmFunctions {
         ),
         Optional.empty[EnvUserData]()
       ),
-      new HostFunction[EnvUserData](
+      new WasmOtoroshiHostFunction[EnvUserData](
         "proxy_get_header_map_value",
         parameters(5),
         parameters(1),
         (
-            plugin: ExtismCurrentPlugin,
-            params: Array[LibExtism.ExtismVal],
-            returns: Array[LibExtism.ExtismVal],
+            plugin: WasmOtoroshiInternal,
+            params: Array[WasmBridge.ExtismVal],
+            returns: Array[WasmBridge.ExtismVal],
             data: Optional[EnvUserData]
         ) =>
          state.proxyGetHeaderMapValue(
@@ -121,14 +135,14 @@ object ProxyWasmFunctions {
        ),
        Optional.empty[EnvUserData]()
      ),
-      new HostFunction[EnvUserData](
+      new WasmOtoroshiHostFunction[EnvUserData](
        "proxy_get_property",
        parameters(4),
        parameters(1),
        (
-            plugin: ExtismCurrentPlugin,
-            params: Array[LibExtism.ExtismVal],
-            returns: Array[LibExtism.ExtismVal],
+            plugin:
WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EnvUserData] ) => state.proxyGetProperty( @@ -141,50 +155,50 @@ object ProxyWasmFunctions { ), Optional.empty[EnvUserData]() ), - new HostFunction[EnvUserData]( + new WasmOtoroshiHostFunction[EnvUserData]( "proxy_increment_metric", - Seq(LibExtism.ExtismValType.I32, LibExtism.ExtismValType.I64).toArray, + Seq(WasmBridge.ExtismValType.I32, WasmBridge.ExtismValType.I64).toArray, parameters(1), ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EnvUserData] ) => state.proxyIncrementMetricValue(plugin, getCurrentVmData(), params(0).v.i32, params(1).v.i64), Optional.empty[EnvUserData]() ), - new HostFunction[EnvUserData]( + new WasmOtoroshiHostFunction[EnvUserData]( "proxy_define_metric", parameters(4), parameters(1), ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EnvUserData] ) => state.proxyDefineMetric(plugin, params(0).v.i32, params(1).v.i32, params(2).v.i32, params(3).v.i32), Optional.empty[EnvUserData]() ), - new HostFunction[EnvUserData]( + new WasmOtoroshiHostFunction[EnvUserData]( "proxy_set_tick_period_milliseconds", parameters(1), parameters(1), ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EnvUserData] ) => state.proxySetTickPeriodMilliseconds(getCurrentVmData(), params(0).v.i32), Optional.empty[EnvUserData]() ), - new HostFunction[EnvUserData]( + new WasmOtoroshiHostFunction[EnvUserData]( 
"proxy_replace_header_map_value", parameters(5), parameters(1), ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EnvUserData] ) => state.proxyReplaceHeaderMapValue( @@ -198,14 +212,14 @@ object ProxyWasmFunctions { ), Optional.empty[EnvUserData]() ), - new HostFunction[EnvUserData]( + new WasmOtoroshiHostFunction[EnvUserData]( "proxy_send_local_response", parameters(8), parameters(1), ( - plugin: ExtismCurrentPlugin, - params: Array[LibExtism.ExtismVal], - returns: Array[LibExtism.ExtismVal], + plugin: WasmOtoroshiInternal, + params: Array[WasmBridge.ExtismVal], + returns: Array[WasmBridge.ExtismVal], data: Optional[EnvUserData] ) => state.proxySendHttpResponse( @@ -217,14 +231,15 @@ object ProxyWasmFunctions { params(4).v.i32, params(5).v.i32, params(6).v.i32, - params(7).v.i32 + params(7).v.i32, + getCurrentVmData(), ), Optional.empty[EnvUserData]() ) ) } - private def parameters(n: Int): Array[LibExtism.ExtismValType] = { - (0 until n).map(_ => LibExtism.ExtismValType.I32).toArray + private def parameters(n: Int): Array[WasmBridge.ExtismValType] = { + (0 until n).map(_ => WasmBridge.ExtismValType.I32).toArray } } diff --git a/otoroshi/app/wasm/proxywasm/state.scala b/otoroshi/app/wasm/proxywasm/state.scala index a982dde92a..0e3a9f6b2c 100644 --- a/otoroshi/app/wasm/proxywasm/state.scala +++ b/otoroshi/app/wasm/proxywasm/state.scala @@ -2,7 +2,7 @@ package otoroshi.wasm import akka.util.ByteString import com.sun.jna.Pointer -import org.extism.sdk.ExtismCurrentPlugin +import org.extism.sdk.wasmotoroshi._ import otoroshi.env.Env import otoroshi.utils.syntax.implicits._ import otoroshi.wasm.proxywasm.WasmUtils.traceVmHost @@ -15,7 +15,7 @@ import play.api.Logger import play.api.libs.json.Json import java.nio.charset.StandardCharsets -import java.util.concurrent.atomic.AtomicInteger 
+import java.util.concurrent.atomic.{AtomicInteger, AtomicReference} class ProxyWasmState( val rootContextId: Int, @@ -28,7 +28,12 @@ class ProxyWasmState( val u32Len = 4 - override def proxyLog(plugin: ExtismCurrentPlugin, logLevel: Int, messageData: Int, messageSize: Int): Result = { + def unimplementedFunction[A](name: String): A = { + logger.error(s"unimplemented state function: '${name}'") + throw new NotImplementedError(s"proxy state method '${name}' is not implemented") + } + + override def proxyLog(plugin: WasmOtoroshiInternal, logLevel: Int, messageData: Int, messageSize: Int): Result = { getMemory(plugin, messageData, messageSize) .fold( Error.toResult, @@ -56,18 +61,18 @@ class ProxyWasmState( ) } - override def proxyResumeStream(plugin: ExtismCurrentPlugin, streamType: StreamType): Result = { + override def proxyResumeStream(plugin: WasmOtoroshiInternal, streamType: StreamType): Result = { traceVmHost("proxy_resume_stream") null } - override def proxyCloseStream(plugin: ExtismCurrentPlugin, streamType: StreamType): Result = { + override def proxyCloseStream(plugin: WasmOtoroshiInternal, streamType: StreamType): Result = { traceVmHost("proxy_close_stream") null } override def proxySendHttpResponse( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, responseCode: Int, responseCodeDetailsData: Int, responseCodeDetailsSize: Int, @@ -75,7 +80,8 @@ class ProxyWasmState( responseBodySize: Int, additionalHeadersMapData: Int, additionalHeadersSize: Int, - grpcStatus: Int + grpcStatus: Int, + vmData: VmData, ): Result = { traceVmHost(s"proxy_send_http_response: ${responseCode} - ${grpcStatus}") for { @@ -83,7 +89,7 @@ class ProxyWasmState( body <- getMemory(plugin, responseBodyData, responseBodySize) addHeaders <- getMemory(plugin, additionalHeadersMapData, additionalHeadersSize) } yield { - WasmContextSlot.getCurrentContext().map(_.asInstanceOf[VmData]).foreach { vmdata => + //WasmContextSlot.getCurrentContext().map(_.asInstanceOf[VmData]).foreach { 
vmdata => // Json.obj( // "http_status" -> responseCode, // "grpc_code" -> grpcStatus, @@ -91,28 +97,28 @@ class ProxyWasmState( // "body" -> body._2.utf8String, // "headers" -> addHeaders._2.utf8String, // ).prettify.debugPrintln - vmdata.respRef.set( + vmData.respRef.set( play.api.mvc.Results .Status(responseCode)(body._2) .withHeaders() // TODO: read it .as("text/plain") // TODO: change it ) - } + //} } ResultOk } - override def proxyResumeHttpStream(plugin: ExtismCurrentPlugin, streamType: StreamType): Result = { + override def proxyResumeHttpStream(plugin: WasmOtoroshiInternal, streamType: StreamType): Result = { traceVmHost("proxy_resume_http_stream") null } - override def proxyCloseHttpStream(plugin: ExtismCurrentPlugin, streamType: StreamType): Result = { + override def proxyCloseHttpStream(plugin: WasmOtoroshiInternal, streamType: StreamType): Result = { traceVmHost("proxy_close_http_stream") null } - override def getBuffer(plugin: ExtismCurrentPlugin, data: VmData, bufferType: BufferType): IoBuffer = { + override def getBuffer(plugin: WasmOtoroshiInternal, data: VmData, bufferType: BufferType): IoBuffer = { bufferType match { case BufferTypeHttpRequestBody => getHttpRequestBody(plugin, data) @@ -136,7 +142,7 @@ class ProxyWasmState( } override def proxyGetBuffer( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, bufferType: Int, offset: Int, @@ -172,14 +178,14 @@ class ProxyWasmState( } override def proxySetBuffer( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, bufferType: Int, offset: Int, size: Int, bufferData: Int, bufferSize: Int - ): Result = { + ): Result = plugin.synchronized { traceVmHost("proxy_set_buffer") val buf = getBuffer(plugin, data, BufferType.valueToType(bufferType)) if (buf == null) { @@ -207,7 +213,7 @@ class ProxyWasmState( ResultOk } - override def getMap(plugin: ExtismCurrentPlugin, vmData: VmData, mapType: MapType): Map[String, ByteString] = { + override def getMap(plugin: 
WasmOtoroshiInternal, vmData: VmData, mapType: MapType): Map[String, ByteString] = { mapType match { case MapTypeHttpRequestHeaders => getHttpRequestHeader(plugin, vmData) case MapTypeHttpRequestTrailers => getHttpRequestTrailer(plugin, vmData) @@ -224,13 +230,13 @@ class ProxyWasmState( def copyMapIntoInstance( m: Map[String, String], - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, returnMapData: Int, returnMapSize: Int - ): Unit = ??? + ): Unit = unimplementedFunction("copyMapIntoInstance") override def proxyGetHeaderMapPairs( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, mapType: Int, returnDataPtr: Int, @@ -261,50 +267,52 @@ class ProxyWasmState( // return int32(v2.ResultInvalidMemoryAccess) // } - val memory: Pointer = plugin.getLinearMemory("memory") - memory.setInt(addr, header.size) -// if err != nil { -// return int32(v2.ResultInvalidMemoryAccess) -// } - - var lenPtr = addr + u32Len - var dataPtr = lenPtr + (u32Len + u32Len) * header.size - - header.foreach(entry => { - val k = entry._1 - val v = entry._2 - - memory.setInt(lenPtr, k.length()) - lenPtr += u32Len - memory.setInt(lenPtr, v.length) - lenPtr += u32Len - - memory.write(dataPtr, k.getBytes(StandardCharsets.UTF_8), 0, k.length()) - dataPtr += k.length() - memory.setByte(dataPtr, 0) - dataPtr += 1 - - memory.write(dataPtr, v.toArray, 0, v.length) - dataPtr += v.length - memory.setByte(dataPtr, 0) - dataPtr += 1 - }) - - memory.setInt(returnDataPtr, addr) -// if err != nil { -// return int32(v2.ResultInvalidMemoryAccess) -// } - - memory.setInt(returnDataSize, totalBytesLen) -// if err != nil { -// return int32(v2.ResultInvalidMemoryAccess) -// } + plugin.synchronized { + val memory: Pointer = plugin.getLinearMemory("memory") + memory.setInt(addr, header.size) + // if err != nil { + // return int32(v2.ResultInvalidMemoryAccess) + // } + + var lenPtr = addr + u32Len + var dataPtr = lenPtr + (u32Len + u32Len) * header.size + + header.foreach(entry => { + 
val k = entry._1 + val v = entry._2 + + memory.setInt(lenPtr, k.length()) + lenPtr += u32Len + memory.setInt(lenPtr, v.length) + lenPtr += u32Len + + memory.write(dataPtr, k.getBytes(StandardCharsets.UTF_8), 0, k.length()) + dataPtr += k.length() + memory.setByte(dataPtr, 0) + dataPtr += 1 + + memory.write(dataPtr, v.toArray, 0, v.length) + dataPtr += v.length + memory.setByte(dataPtr, 0) + dataPtr += 1 + }) + + memory.setInt(returnDataPtr, addr) + // if err != nil { + // return int32(v2.ResultInvalidMemoryAccess) + // } + + memory.setInt(returnDataSize, totalBytesLen) + // if err != nil { + // return int32(v2.ResultInvalidMemoryAccess) + // } + } ResultOk.value } override def proxyGetHeaderMapValue( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, mapType: Int, keyData: Int, @@ -338,7 +346,7 @@ class ProxyWasmState( } override def proxyReplaceHeaderMapValue( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, mapType: Int, keyData: Int, @@ -378,91 +386,91 @@ class ProxyWasmState( } override def proxyOpenSharedKvstore( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, kvstoreNameData: Int, kvstoreNameSiz: Int, createIfNotExist: Int, kvstoreID: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyOpenSharedKvstore") override def proxyGetSharedKvstoreKeyValues( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, kvstoreID: Int, keyData: Int, keySize: Int, returnValuesData: Int, returnValuesSize: Int, returnCas: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyGetSharedKvstoreKeyValues") override def proxySetSharedKvstoreKeyValues( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, kvstoreID: Int, keyData: Int, keySize: Int, valuesData: Int, valuesSize: Int, cas: Int - ): Result = ??? 
+ ): Result = unimplementedFunction("proxySetSharedKvstoreKeyValues") override def proxyAddSharedKvstoreKeyValues( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, kvstoreID: Int, keyData: Int, keySize: Int, valuesData: Int, valuesSize: Int, cas: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyAddSharedKvstoreKeyValues") override def proxyRemoveSharedKvstoreKey( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, kvstoreID: Int, keyData: Int, keySize: Int, cas: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyRemoveSharedKvstoreKey") - override def proxyDeleteSharedKvstore(plugin: ExtismCurrentPlugin, kvstoreID: Int): Result = ??? + override def proxyDeleteSharedKvstore(plugin: WasmOtoroshiInternal, kvstoreID: Int): Result = unimplementedFunction("proxyDeleteSharedKvstore") override def proxyOpenSharedQueue( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, queueNameData: Int, queueNameSize: Int, createIfNotExist: Int, returnQueueID: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyOpenSharedQueue") override def proxyDequeueSharedQueueItem( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, queueID: Int, returnPayloadData: Int, returnPayloadSize: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyDequeueSharedQueueItem") override def proxyEnqueueSharedQueueItem( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, queueID: Int, payloadData: Int, payloadSize: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyEnqueueSharedQueueItem") - override def proxyDeleteSharedQueue(plugin: ExtismCurrentPlugin, queueID: Int): Result = ??? + override def proxyDeleteSharedQueue(plugin: WasmOtoroshiInternal, queueID: Int): Result = unimplementedFunction("proxyDeleteSharedQueue") - override def proxyCreateTimer(plugin: ExtismCurrentPlugin, period: Int, oneTime: Int, returnTimerID: Int): Result = - ??? 
+ override def proxyCreateTimer(plugin: WasmOtoroshiInternal, period: Int, oneTime: Int, returnTimerID: Int): Result = + unimplementedFunction("proxyCreateTimer") - override def proxyDeleteTimer(plugin: ExtismCurrentPlugin, timerID: Int): Result = ??? + override def proxyDeleteTimer(plugin: WasmOtoroshiInternal, timerID: Int): Result = unimplementedFunction("proxyDeleteTimer") override def proxyCreateMetric( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, metricType: MetricType, metricNameData: Int, metricNameSize: Int, returnMetricID: Int - ): MetricType = ??? + ): MetricType = unimplementedFunction("proxyCreateMetric") - override def proxyGetMetricValue(plugin: ExtismCurrentPlugin, metricID: Int, returnValue: Int): Result = { + override def proxyGetMetricValue(plugin: WasmOtoroshiInternal, metricID: Int, returnValue: Int): Result = { // TODO - get metricID val value = 10 @@ -476,10 +484,10 @@ class ProxyWasmState( ) } - override def proxySetMetricValue(plugin: ExtismCurrentPlugin, metricID: Int, value: Int): Result = ??? + override def proxySetMetricValue(plugin: WasmOtoroshiInternal, metricID: Int, value: Int): Result = unimplementedFunction("proxySetMetricValue") override def proxyIncrementMetricValue( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, metricID: Int, offset: Long @@ -488,10 +496,10 @@ class ProxyWasmState( ResultOk } - override def proxyDeleteMetric(plugin: ExtismCurrentPlugin, metricID: Int): Result = ??? 
+ override def proxyDeleteMetric(plugin: WasmOtoroshiInternal, metricID: Int): Result = unimplementedFunction("proxyDeleteMetric") override def proxyDefineMetric( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, metricType: Int, namePtr: Int, nameSize: Int, @@ -517,7 +525,7 @@ class ProxyWasmState( } override def proxyDispatchHttpCall( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, upstreamNameData: Int, upstreamNameSize: Int, headersMapData: Int, @@ -528,10 +536,10 @@ class ProxyWasmState( trailersMapSize: Int, timeoutMilliseconds: Int, returnCalloutID: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyDispatchHttpCall") override def proxyDispatchGrpcCall( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, upstreamNameData: Int, upstreamNameSize: Int, serviceNameData: Int, @@ -544,10 +552,10 @@ class ProxyWasmState( grpcMessageSize: Int, timeoutMilliseconds: Int, returnCalloutID: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyDispatchGrpcCall") override def proxyOpenGrpcStream( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, upstreamNameData: Int, upstreamNameSize: Int, serviceNameData: Int, @@ -557,30 +565,30 @@ class ProxyWasmState( initialMetadataMapData: Int, initialMetadataMapSize: Int, returnCalloutID: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyOpenGrpcStream") override def proxySendGrpcStreamMessage( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, calloutID: Int, grpcMessageData: Int, grpcMessageSize: Int - ): Result = ??? + ): Result = unimplementedFunction("proxySendGrpcStreamMessage") - override def proxyCancelGrpcCall(plugin: ExtismCurrentPlugin, calloutID: Int): Result = ??? + override def proxyCancelGrpcCall(plugin: WasmOtoroshiInternal, calloutID: Int): Result = unimplementedFunction("proxyCancelGrpcCall") - override def proxyCloseGrpcCall(plugin: ExtismCurrentPlugin, calloutID: Int): Result = ??? 
+ override def proxyCloseGrpcCall(plugin: WasmOtoroshiInternal, calloutID: Int): Result = unimplementedFunction("proxyCloseGrpcCall") override def proxyCallCustomFunction( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, customFunctionID: Int, parametersData: Int, parametersSize: Int, returnResultsData: Int, returnResultsSize: Int - ): Result = ??? + ): Result = unimplementedFunction("proxyCallCustomFunction") override def copyIntoInstance( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, memory: Pointer, value: IoBuffer, retPtr: Int, @@ -597,7 +605,7 @@ class ProxyWasmState( } override def proxyGetProperty( - plugin: ExtismCurrentPlugin, + plugin: WasmOtoroshiInternal, data: VmData, pathPtr: Int, pathSize: Int, @@ -628,7 +636,7 @@ class ProxyWasmState( ) } - override def proxyRegisterSharedQueue(nameData: ByteString, nameSize: Int, returnID: Int): Status = ??? + override def proxyRegisterSharedQueue(nameData: ByteString, nameSize: Int, returnID: Int): Status = unimplementedFunction("proxyRegisterSharedQueue") override def proxyResolveSharedQueue( vmIDData: ByteString, @@ -636,11 +644,11 @@ class ProxyWasmState( nameData: ByteString, nameSize: Int, returnID: Int - ): Status = ??? + ): Status = unimplementedFunction("proxyResolveSharedQueue") - override def proxyEnqueueSharedQueue(queueID: Int, valueData: ByteString, valueSize: Int): Status = ??? + override def proxyEnqueueSharedQueue(queueID: Int, valueData: ByteString, valueSize: Int): Status = unimplementedFunction("proxyEnqueueSharedQueue") - override def proxyDequeueSharedQueue(queueID: Int, returnValueData: ByteString, returnValueSize: Int): Status = ??? 
+ override def proxyDequeueSharedQueue(queueID: Int, returnValueData: ByteString, returnValueSize: Int): Status = unimplementedFunction("proxyDequeueSharedQueue") override def proxyDone(): Status = { StatusOK @@ -652,73 +660,73 @@ class ProxyWasmState( StatusOK } - override def proxySetEffectiveContext(plugin: ExtismCurrentPlugin, contextID: Int): Status = { + override def proxySetEffectiveContext(plugin: WasmOtoroshiInternal, contextID: Int): Status = { // TODO - manage context id changes // this.contextId = contextID StatusOK } - override def getPluginConfig(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer = { + override def getPluginConfig(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer = { new IoBuffer(data.configuration) } - override def getHttpRequestBody(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer = { + override def getHttpRequestBody(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer = { data.bodyIn match { case None => new IoBuffer(ByteString.empty) case Some(body) => new IoBuffer(body) } } - override def getHttpResponseBody(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer = { + override def getHttpResponseBody(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer = { data.bodyOut match { case None => new IoBuffer(ByteString.empty) case Some(body) => new IoBuffer(body) } } - override def getDownStreamData(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer = ??? + override def getDownStreamData(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer = unimplementedFunction("getDownStreamData") - override def getUpstreamData(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer = ??? + override def getUpstreamData(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer = unimplementedFunction("getUpstreamData") - override def getHttpCalloutResponseBody(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer = ??? 
+ override def getHttpCalloutResponseBody(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer = unimplementedFunction("getHttpCalloutResponseBody") - override def getVmConfig(plugin: ExtismCurrentPlugin, data: VmData): IoBuffer = ??? + override def getVmConfig(plugin: WasmOtoroshiInternal, data: VmData): IoBuffer = unimplementedFunction("getVmConfig") - override def getCustomBuffer(bufferType: BufferType): IoBuffer = ??? + override def getCustomBuffer(bufferType: BufferType): IoBuffer = unimplementedFunction("getCustomBuffer") - override def getHttpRequestHeader(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = { + override def getHttpRequestHeader(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = { data.properties .filter(entry => entry._1.startsWith("request.headers.") || entry._1.startsWith(":")) .map(t => (t._1.replace("request.headers.", ""), ByteString(t._2))) } - override def getHttpRequestTrailer(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = { + override def getHttpRequestTrailer(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = { Map.empty } - override def getHttpRequestMetadata(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = { + override def getHttpRequestMetadata(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = { Map.empty } - override def getHttpResponseHeader(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = { + override def getHttpResponseHeader(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = { data.properties .filter(entry => entry._1.startsWith("response.headers.") || entry._1.startsWith(":")) .map(t => (t._1.replace("response.headers.", ""), ByteString(t._2))) } - override def getHttpResponseTrailer(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = ??? 
+ override def getHttpResponseTrailer(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = unimplementedFunction("getHttpResponseTrailer") - override def getHttpResponseMetadata(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = ??? + override def getHttpResponseMetadata(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = unimplementedFunction("getHttpResponseMetadata") - override def getHttpCallResponseHeaders(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = ??? + override def getHttpCallResponseHeaders(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = unimplementedFunction("getHttpCallResponseHeaders") - override def getHttpCallResponseTrailer(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = ??? + override def getHttpCallResponseTrailer(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = unimplementedFunction("getHttpCallResponseTrailer") - override def getHttpCallResponseMetadata(plugin: ExtismCurrentPlugin, data: VmData): Map[String, ByteString] = ??? + override def getHttpCallResponseMetadata(plugin: WasmOtoroshiInternal, data: VmData): Map[String, ByteString] = unimplementedFunction("getHttpCallResponseMetadata") - override def getCustomMap(plugin: ExtismCurrentPlugin, data: VmData, mapType: MapType): Map[String, ByteString] = ??? 
+ override def getCustomMap(plugin: WasmOtoroshiInternal, data: VmData, mapType: MapType): Map[String, ByteString] = unimplementedFunction("getCustomMap") - override def getMemory(plugin: ExtismCurrentPlugin, addr: Int, size: Int): Either[Error, (Pointer, ByteString)] = { + override def getMemory(plugin: WasmOtoroshiInternal, addr: Int, size: Int): Either[Error, (Pointer, ByteString)] = plugin.synchronized { val memory: Pointer = plugin.getLinearMemory("memory") if (memory == null) { return Error.ErrorExportsNotFound.left @@ -733,7 +741,8 @@ class ProxyWasmState( (memory -> ByteString(memory.share(addr).getByteArray(0, size))).right[Error] } - override def getMemory(plugin: ExtismCurrentPlugin): Either[Error, Pointer] = { + override def getMemory(plugin: WasmOtoroshiInternal): Either[Error, Pointer] = plugin.synchronized { + val memory: Pointer = plugin.getLinearMemory("memory") if (memory == null) { return Error.ErrorExportsNotFound.left diff --git a/otoroshi/app/wasm/proxywasm/utils.scala b/otoroshi/app/wasm/proxywasm/utils.scala index 345a3ee23b..5059a3af88 100644 --- a/otoroshi/app/wasm/proxywasm/utils.scala +++ b/otoroshi/app/wasm/proxywasm/utils.scala @@ -8,10 +8,12 @@ object WasmUtils { val logger = Logger("otoroshi-proxy-wasm-utils") def traceVmHost(message: String): Unit = { + // println("[vm->host]: " + message) if (logger.isTraceEnabled) logger.trace("[vm->host]: " + message) } def traceHostVm(message: String) { + // println("[host->vm]: " + message) if (logger.isTraceEnabled) logger.trace("[host->vm]: " + message) } diff --git a/otoroshi/app/wasm/runtimev1.scala b/otoroshi/app/wasm/runtimev1.scala new file mode 100644 index 0000000000..fff852f775 --- /dev/null +++ b/otoroshi/app/wasm/runtimev1.scala @@ -0,0 +1,534 @@ +package otoroshi.wasm + +import akka.stream.OverflowStrategy +import akka.stream.scaladsl.{Keep, Sink, Source, SourceQueueWithComplete} +import akka.util.ByteString +import org.extism.sdk.manifest.{Manifest, MemoryOptions} +import 
org.extism.sdk.wasmotoroshi._ +import org.extism.sdk.wasm.WasmSourceResolver +import org.joda.time.DateTime +import otoroshi.env.Env +import otoroshi.security.IdGenerator +import otoroshi.utils.TypedMap +import otoroshi.utils.cache.types.UnboundedTrieMap +import otoroshi.utils.syntax.implicits._ +import otoroshi.wasm.proxywasm.VmData +import play.api.Logger +import play.api.libs.json._ +import play.api.libs.ws.{DefaultWSCookie, WSCookie} +import play.api.mvc.Cookie + +import java.util.concurrent.Executors +import java.util.concurrent.atomic.{AtomicBoolean, AtomicInteger} +import scala.concurrent.duration.DurationInt +import scala.concurrent.{ExecutionContext, Future, Promise} +import scala.jdk.CollectionConverters._ + + +sealed trait WasmAction + +object WasmAction { + case class WasmOpaInvocation(call: () => Either[JsValue, String], promise: Promise[Either[JsValue, String]]) + extends WasmAction + case class WasmInvocation( + call: () => Either[JsValue, (String, ResultsWrapper)], + promise: Promise[Either[JsValue, (String, ResultsWrapper)]] + ) extends WasmAction + case class WasmUpdate(call: () => Unit) extends WasmAction +} + +object WasmContextSlot { + // private val _currentContext = new ThreadLocal[Any]() + // def getCurrentContext(): Option[Any] = Option(_currentContext.get()) + // private[wasm] def setCurrentContext(value: Any): Unit = _currentContext.set(value) + // private[wasm] def clearCurrentContext(): Unit = _currentContext.remove() +} + +class WasmContextSlot( + id: String, + instance: Int, + plugin: WasmOtoroshiInstance, + cfg: WasmConfig, + wsm: ByteString, + closed: AtomicBoolean, + updating: AtomicBoolean, + instanceId: String, + functions: Array[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] + ) { + + def callSync( + wasmFunctionParameters: WasmFunctionParameters, + context: Option[VmData] + )(implicit env: Env, ec: ExecutionContext): Either[JsValue, (String, ResultsWrapper)] = { + if (closed.get()) { + val plug = 
WasmUtils.pluginCache.apply(s"$id-$instance") + plug.callSync(wasmFunctionParameters, context) + } else { + try { + // context.foreach(ctx => WasmContextSlot.setCurrentContext(ctx)) + if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"calling instance $id-$instance") + WasmUtils.debugLog.debug(s"calling '${wasmFunctionParameters.functionName}' on instance '$id-$instance'") + val res: Either[JsValue, (String, ResultsWrapper)] = env.metrics.withTimer("otoroshi.wasm.core.call") { + wasmFunctionParameters.call(plugin) + } + env.metrics.withTimer("otoroshi.wasm.core.reset") { + plugin.reset() + } + env.metrics.withTimer("otoroshi.wasm.core.count-thunks") { + WasmUtils.logger.debug(s"thunks: ${functions.size}") + } + res + } catch { + case e: Throwable if e.getMessage.contains("wasm backtrace") => + WasmUtils.logger.error(s"error while invoking wasm function '${wasmFunctionParameters.functionName}'", e) + Json + .obj( + "error" -> "wasm_error", + "error_description" -> JsArray(e.getMessage.split("\\n").filter(_.trim.nonEmpty).map(JsString.apply)) + ) + .left + case e: Throwable => + WasmUtils.logger.error(s"error while invoking wasm function '${wasmFunctionParameters.functionName}'", e) + Json.obj("error" -> "wasm_error", "error_description" -> JsString(e.getMessage)).left + } finally { + // context.foreach(ctx => WasmContextSlot.clearCurrentContext()) + } + } + } + + def callOpaSync(input: String)(implicit env: Env, ec: ExecutionContext): Either[JsValue, String] = { + if (closed.get()) { + val plug = WasmUtils.pluginCache.apply(s"$id-$instance") + plug.callOpaSync(input) + } else { + try { + val res = env.metrics.withTimer("otoroshi.wasm.core.call-opa") { + val result = OPA.initialize(plugin).right + val str = result.get._1 + val parts = str.split("@") + OPA.evaluate(plugin, parts(0).toInt, parts(1).toInt, input) + .map(r => r._1) + } + res + } catch { + case e: Throwable if e.getMessage.contains("wasm backtrace") => + WasmUtils.logger.error(s"error while 
invoking wasm function 'opa'", e) + Json + .obj( + "error" -> "wasm_error", + "error_description" -> JsArray(e.getMessage.split("\\n").filter(_.trim.nonEmpty).map(JsString.apply)) + ) + .left + case e: Throwable => + WasmUtils.logger.error(s"error while invoking wasm function 'opa'", e) + Json.obj("error" -> "wasm_error", "error_description" -> JsString(e.getMessage)).left + } + } + } + + def call( + wasmFunctionParameters: WasmFunctionParameters, + context: Option[VmData] + )(implicit env: Env, ec: ExecutionContext): Future[Either[JsValue, (String, ResultsWrapper)]] = { + val promise = Promise.apply[Either[JsValue, (String, ResultsWrapper)]]() + WasmUtils + .getInvocationQueueFor(id, instance) + .offer(WasmAction.WasmInvocation(() => callSync(wasmFunctionParameters, context), promise)) + promise.future + } + + def callOpa(input: String)(implicit env: Env, ec: ExecutionContext): Future[Either[JsValue, String]] = { + val promise = Promise.apply[Either[JsValue, String]]() + WasmUtils.getInvocationQueueFor(id, instance).offer(WasmAction.WasmOpaInvocation(() => callOpaSync(input), promise)) + promise.future + } + + def close(lifetime: WasmVmLifetime): Unit = { + if (lifetime == WasmVmLifetime.Invocation) { + if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"calling close on WasmContextSlot of ${id}") + forceClose() + } + } + + def forceClose(): Unit = { + if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"calling forceClose on WasmContextSlot of ${id}") + if (closed.compareAndSet(false, true)) { + try { + plugin.close() + } catch { + case e: Throwable => e.printStackTrace() + } + } + } + + def needsUpdate(wasmConfig: WasmConfig, wasm: ByteString): Boolean = { + val configHasChanged = wasmConfig != cfg + val wasmHasChanged = wasm != wsm + if (WasmUtils.logger.isDebugEnabled && configHasChanged) + WasmUtils.logger.debug(s"plugin ${id} needs update because of config change") + if (WasmUtils.logger.isDebugEnabled && wasmHasChanged) + 
WasmUtils.logger.debug(s"plugin ${id} needs update because of wasm change") + configHasChanged || wasmHasChanged + } + + def updateIfNeeded( + pluginId: String, + config: WasmConfig, + wasm: ByteString, + attrsOpt: Option[TypedMap], + addHostFunctions: Seq[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] + )(implicit env: Env, ec: ExecutionContext): WasmContextSlot = { + if (needsUpdate(config, wasm) && updating.compareAndSet(false, true)) { + + if (config.instances < cfg.instances) { + env.otoroshiActorSystem.scheduler.scheduleOnce(20.seconds) { // TODO: config ? + if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"trying to kill unused instances of ${pluginId}") + // iterate over the instance indices being removed (use the loop variable, not the enclosing `instance`) + (config.instances until cfg.instances).map { idx => + WasmUtils.pluginCache.get(s"${pluginId}-${idx}").foreach(p => p.forceClose()) + WasmUtils.queues.remove(s"${pluginId}-${idx}") + WasmUtils.pluginCache.remove(s"$pluginId-$idx") + } + } + } + if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"scheduling update ${instanceId}") + WasmUtils + .getInvocationQueueFor(id, instance) + .offer(WasmAction.WasmUpdate(() => { + val plugin = WasmUtils.actuallyCreatePlugin( + instance, + wasm, + config, + pluginId, + attrsOpt, + addHostFunctions + ) + if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"updating ${instanceId}") + WasmUtils.pluginCache.put(s"$pluginId-$instance", plugin) + env.otoroshiActorSystem.scheduler.scheduleOnce(20.seconds) { // TODO: config ?
+ if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"delayed force close ${instanceId}") + if (!closed.get()) { + forceClose() + } + } + })) + } + this + } +} + + +object WasmUtils { + + private[wasm] val logger = Logger("otoroshi-wasm") + + val debugLog = Logger("otoroshi-wasm-debug") + + implicit val executor = ExecutionContext.fromExecutorService( + Executors.newWorkStealingPool(Math.max(32, (Runtime.getRuntime.availableProcessors * 4) + 1)) + ) + + // TODO: handle env.wasmCacheSize based on creation date ? + private[wasm] val _script_cache: UnboundedTrieMap[String, CacheableWasmScript] = new UnboundedTrieMap[String, CacheableWasmScript]() + private[wasm] val pluginCache = new UnboundedTrieMap[String, WasmContextSlot]() + private[wasm] val queues = new UnboundedTrieMap[String, (DateTime, SourceQueueWithComplete[WasmAction])]() + private[wasm] val instancesCounter = new AtomicInteger(0) + + def scriptCache(implicit env: Env): UnboundedTrieMap[String, CacheableWasmScript] = _script_cache + + def convertJsonCookies(wasmResponse: JsValue): Option[Seq[WSCookie]] = + wasmResponse + .select("cookies") + .asOpt[Seq[JsObject]] + .map { arr => + arr.map { c => + DefaultWSCookie( + name = c.select("name").asString, + value = c.select("value").asString, + maxAge = c.select("maxAge").asOpt[Long], + path = c.select("path").asOpt[String], + domain = c.select("domain").asOpt[String], + secure = c.select("secure").asOpt[Boolean].getOrElse(false), + httpOnly = c.select("httpOnly").asOpt[Boolean].getOrElse(false) + ) + } + } + + def convertJsonPlayCookies(wasmResponse: JsValue): Option[Seq[Cookie]] = + wasmResponse + .select("cookies") + .asOpt[Seq[JsObject]] + .map { arr => + arr.map { c => + Cookie( + name = c.select("name").asString, + value = c.select("value").asString, + maxAge = c.select("maxAge").asOpt[Int], + path = c.select("path").asOpt[String].getOrElse("/"), + domain = c.select("domain").asOpt[String], + secure = 
c.select("secure").asOpt[Boolean].getOrElse(false), + httpOnly = c.select("httpOnly").asOpt[Boolean].getOrElse(false), + sameSite = c.select("sameSite").asOpt[String].flatMap(Cookie.SameSite.parse) + ) + } + } + + private[wasm] def getInvocationQueueFor(id: String, instance: Int)(implicit + env: Env + ): SourceQueueWithComplete[WasmAction] = { + val key = s"$id-$instance" + queues.getOrUpdate(key) { + val stream = Source + .queue[WasmAction](env.wasmQueueBufferSize, OverflowStrategy.dropHead) + .mapAsync(1) { action => + Future.apply { + action match { + case WasmAction.WasmInvocation(invoke, promise) => + try { + val res = invoke() + promise.trySuccess(res) + } catch { + case e: Throwable => promise.tryFailure(e) + } + case WasmAction.WasmOpaInvocation(invoke, promise) => + try { + val res = invoke() + promise.trySuccess(res) + } catch { + case e: Throwable => promise.tryFailure(e) + } + case WasmAction.WasmUpdate(update) => + try { + update() + } catch { + case e: Throwable => e.printStackTrace() + } + } + }(executor) + } + (DateTime.now(), stream.toMat(Sink.ignore)(Keep.both).run()(env.otoroshiMaterializer)._1) + } + }._2 + + private[wasm] def internalCreateManifest(config: WasmConfig, wasm: ByteString, env: Env) = + env.metrics.withTimer("otoroshi.wasm.core.create-plugin.manifest") { + val resolver = new WasmSourceResolver() + val source = resolver.resolve("wasm", wasm.toByteBuffer.array()) + new Manifest( + Seq[org.extism.sdk.wasm.WasmSource](source).asJava, + new MemoryOptions(config.memoryPages), + config.config.asJava, + config.allowedHosts.asJava, + config.allowedPaths.asJava + ) + } + + private[wasm] def actuallyCreatePlugin( + instance: Int, + wasm: ByteString, + config: WasmConfig, + pluginId: String, + attrsOpt: Option[TypedMap], + addHostFunctions: Seq[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] + )(implicit env: Env, ec: ExecutionContext): WasmContextSlot = + env.metrics.withTimer("otoroshi.wasm.core.act-create-plugin") { + if
(WasmUtils.logger.isDebugEnabled) + WasmUtils.logger.debug(s"creating wasm plugin instance for ${pluginId}") + val engine = WasmVmPool.engine + val manifest = internalCreateManifest(config, wasm, env) + val hash = java.security.MessageDigest.getInstance("SHA-256") + .digest(wasm.toArray) + .map("%02x".format(_)).mkString + val template = new WasmOtoroshiTemplate(engine, hash, manifest) + // val context = env.metrics.withTimer("otoroshi.wasm.core.create-plugin.context")(new Context()) + val functions: Array[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] = + HostFunctions.getFunctions(config, pluginId, attrsOpt) ++ addHostFunctions + val plugin = env.metrics.withTimer("otoroshi.wasm.core.create-plugin.plugin") { + template.instantiate( + engine, + functions, + LinearMemories.getMemories(config), + config.wasi, + ) + } + new WasmContextSlot( + pluginId, + instance, + plugin, + config, + wasm, + functions = functions, + closed = new AtomicBoolean(false), + updating = new AtomicBoolean(false), + instanceId = IdGenerator.uuid + ) + } + + private def callWasm( + wasm: ByteString, + config: WasmConfig, + wasmFunctionParameters: WasmFunctionParameters, + pluginId: String, + attrsOpt: Option[TypedMap], + ctx: Option[VmData], + addHostFunctions: Seq[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] + )(implicit env: Env, ec: ExecutionContext): Future[Either[JsValue, (String, ResultsWrapper)]] = + env.metrics.withTimerAsync("otoroshi.wasm.core.call-wasm") { + + WasmUtils.debugLog.debug("callWasm") + + val functionName = config.functionName.filter(_.nonEmpty).getOrElse(wasmFunctionParameters.functionName) + val instance = instancesCounter.incrementAndGet() % config.instances + + def createPlugin(): WasmContextSlot = { + if (config.lifetime == WasmVmLifetime.Forever) { + pluginCache + .getOrUpdate(s"$pluginId-$instance") { + actuallyCreatePlugin(instance, wasm, config, pluginId, None, addHostFunctions) + } + .seffectOn(_.updateIfNeeded(pluginId, config, wasm, 
None, addHostFunctions)) + } else { + actuallyCreatePlugin(instance, wasm, config, pluginId, attrsOpt, addHostFunctions) + } + } + + attrsOpt match { + case None => { + val slot = createPlugin() + if (config.opa) { + slot.callOpa(wasmFunctionParameters.input.get).map { output => + slot.close(config.lifetime) + output.map(str => (str, ResultsWrapper(new WasmOtoroshiResults(0)))) + } + } else { + slot.call(wasmFunctionParameters, ctx).map { output => + slot.close(config.lifetime) + output + } + } + } + case Some(attrs) => { + val context = attrs.get(otoroshi.next.plugins.Keys.WasmContextKey) match { + case None => { + val context = new WasmContext() + attrs.put(otoroshi.next.plugins.Keys.WasmContextKey -> context) + context + } + case Some(context) => context + } + val slot = context.get(pluginId) match { + case None => { + val plugin = createPlugin() + if (config.lifetime == WasmVmLifetime.Invocation) context.put(pluginId, plugin) + plugin + } + case Some(plugin) => plugin + } + if (config.opa) { + slot.callOpa(wasmFunctionParameters.input.get).map { output => + slot.close(config.lifetime) + output.map(str => (str, ResultsWrapper(new WasmOtoroshiResults(0)))) + } + } else { + slot.call(wasmFunctionParameters, ctx).map { output => + slot.close(config.lifetime) + output + } + } + } + } + } + + @deprecated(message = "Use WasmVmPool and WasmVm apis instead", since = "v16.6.0") + def execute( + config: WasmConfig, + defaultFunctionName: String, + input: JsValue, + attrs: Option[TypedMap], + ctx: Option[VmData] + )(implicit env: Env): Future[Either[JsValue, String]] = { + rawExecute(config, WasmFunctionParameters.ExtismFuntionCall(config.functionName.getOrElse(defaultFunctionName), input.stringify), attrs, ctx, Seq.empty).map(r => r.map(_._1)) + } + + @deprecated(message = "Use WasmVmPool and WasmVm apis instead", since = "v16.6.0") + def rawExecute( + _config: WasmConfig, + wasmFunctionParameters: WasmFunctionParameters, + attrs: Option[TypedMap], + ctx: Option[VmData], 
+ addHostFunctions: Seq[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] + )(implicit env: Env): Future[Either[JsValue, (String, ResultsWrapper)]] = + env.metrics.withTimerAsync("otoroshi.wasm.core.raw-execute") { + val config = _config // if (_config.opa) _config.copy(lifetime = WasmVmLifetime.Invocation) else _config + WasmUtils.debugLog.debug("execute") + val pluginId = config.source.kind match { + case WasmSourceKind.Local => { + env.proxyState.wasmPlugin(config.source.path) match { + case None => config.source.cacheKey + case Some(plugin) => plugin.config.source.cacheKey + } + } + case _ => config.source.cacheKey + } + scriptCache.get(pluginId) match { + case Some(CacheableWasmScript.FetchingWasmScript(fu)) => + fu.flatMap { _ => + rawExecute(config, wasmFunctionParameters, attrs, ctx, addHostFunctions) + } + case Some(CacheableWasmScript.CachedWasmScript(script, _)) => { + env.metrics.withTimerAsync("otoroshi.wasm.core.get-config")(config.source.getConfig()).flatMap { + case None => + WasmUtils.callWasm( + script, + config, + wasmFunctionParameters, + pluginId, + attrs, + ctx, + addHostFunctions + ) + case Some(finalConfig) => + val functionName = config.functionName.filter(_.nonEmpty).orElse(finalConfig.functionName) + WasmUtils.callWasm( + script, + finalConfig.copy(functionName = functionName), + wasmFunctionParameters.withFunctionName(functionName.getOrElse(wasmFunctionParameters.functionName)), + pluginId, + attrs, + ctx, + addHostFunctions + ) + } + } + case None if config.source.kind == WasmSourceKind.Unknown => Left(Json.obj("error" -> "missing source")).future + case _ => + env.metrics.withTimerAsync("otoroshi.wasm.core.get-wasm")(config.source.getWasm()).flatMap { + case Left(err) => err.left.vfuture + case Right(wasm) => { + env.metrics.withTimerAsync("otoroshi.wasm.core.get-config")(config.source.getConfig()).flatMap { + case None => + WasmUtils.callWasm( + wasm, + config, + wasmFunctionParameters, + pluginId, + attrs, + ctx, + 
addHostFunctions + ) + case Some(finalConfig) => + val functionName = config.functionName.filter(_.nonEmpty).orElse(finalConfig.functionName) + WasmUtils.callWasm( + wasm, + finalConfig.copy(functionName = functionName), + wasmFunctionParameters.withFunctionName(functionName.getOrElse(wasmFunctionParameters.functionName)), + pluginId, + attrs, + ctx, + addHostFunctions + ) + } + } + } + } + } +} \ No newline at end of file diff --git a/otoroshi/app/wasm/runtimev2.scala b/otoroshi/app/wasm/runtimev2.scala new file mode 100644 index 0000000000..cf717f446f --- /dev/null +++ b/otoroshi/app/wasm/runtimev2.scala @@ -0,0 +1,593 @@ +package otoroshi.wasm + +import akka.stream.OverflowStrategy +import akka.stream.scaladsl.{Keep, Sink, Source} +import com.codahale.metrics.UniformReservoir +import org.extism.sdk.manifest.{Manifest, MemoryOptions} +import org.extism.sdk.wasm.WasmSourceResolver +import org.extism.sdk.wasmotoroshi._ +import otoroshi.env.Env +import otoroshi.models.WasmPlugin +import otoroshi.next.plugins.api.{NgPluginVisibility, NgStep} +import otoroshi.script._ +import otoroshi.utils.cache.types.UnboundedTrieMap +import otoroshi.utils.syntax.implicits._ +import otoroshi.wasm.CacheableWasmScript.CachedWasmScript +import otoroshi.wasm.WasmVm.logger +import otoroshi.wasm.proxywasm.VmData +import play.api.Logger +import play.api.libs.json._ + +import java.util.concurrent.ConcurrentLinkedQueue +import java.util.concurrent.atomic.{AtomicBoolean, AtomicInteger, AtomicLong, AtomicReference} +import scala.concurrent.duration.{DurationInt, DurationLong, FiniteDuration} +import scala.concurrent.{Await, ExecutionContext, Future, Promise} +import scala.jdk.CollectionConverters._ +import scala.util.{Failure, Success, Try} + +sealed trait WasmVmAction + +object WasmVmAction { + case object WasmVmKillAction extends WasmVmAction + case class WasmVmCallAction( + parameters: WasmFunctionParameters, + context: Option[VmData], + promise: Promise[Either[JsValue, (String, 
ResultsWrapper)]] + ) extends WasmVmAction +} + +object WasmVm { + val logger = Logger("otoroshi-wasm-vm") + def fromConfig(config: WasmConfig)(implicit env: Env, ec: ExecutionContext): Future[Option[(WasmVm, WasmConfig)]] = { + if (config.source.kind == WasmSourceKind.Local) { + env.proxyState.wasmPlugin(config.source.path) match { + case None => None.vfuture + case Some(localPlugin) => { + val localConfig = localPlugin.config + localPlugin.pool().getPooledVm().map(vm => Some((vm, localConfig))) + } + } + } else { + config.pool().getPooledVm().map(vm => Some((vm, config))) + } + } +} + +case class OPAWasmVm(opaDataAddr: Int, opaBaseHeapPtr: Int) + +case class WasmVm(index: Int, + maxCalls: Int, + maxMemory: Long, + resetMemory: Boolean, + instance: WasmOtoroshiInstance, + vmDataRef: AtomicReference[VmData], + memories: Array[WasmOtoroshiLinearMemory], + functions: Array[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]], + pool: WasmVmPool, + var opaPointers: Option[OPAWasmVm] = None) { + + private val callDurationReservoirNs = new UniformReservoir() + private val lastUsage: AtomicLong = new AtomicLong(System.currentTimeMillis()) + private val initializedRef: AtomicBoolean = new AtomicBoolean(false) + private val killAtRelease: AtomicBoolean = new AtomicBoolean(false) + private val inFlight = new AtomicInteger(0) + private val callCounter = new AtomicInteger(0) + private val queue = { + val env = pool.env + Source.queue[WasmVmAction](env.wasmQueueBufferSize, OverflowStrategy.dropTail) + .mapAsync(1)(handle) + .toMat(Sink.ignore)(Keep.both) + .run()(env.otoroshiMaterializer)._1 + } + + def calls: Int = callCounter.get() + def current: Int = inFlight.get() + + private def handle(act: WasmVmAction): Future[Unit] = { + Future.apply { + lastUsage.set(System.currentTimeMillis()) + act match { + case WasmVmAction.WasmVmKillAction => destroy() + case action: WasmVmAction.WasmVmCallAction => { + try { + inFlight.decrementAndGet() + // action.context.foreach(ctx => 
WasmContextSlot.setCurrentContext(ctx)) + action.context.foreach(ctx => vmDataRef.set(ctx)) + if (WasmVm.logger.isDebugEnabled) WasmVm.logger.debug(s"call vm ${index} with method ${action.parameters.functionName} on thread ${Thread.currentThread().getName} on path ${action.context.flatMap(_.properties.get("request.path")).map(v => new String(v))}") + val start = System.nanoTime() + val res = action.parameters.call(instance) + callDurationReservoirNs.update(System.nanoTime() - start) + if (res.isRight && res.right.get._2.results.getValues() != null) { + val ret = res.right.get._2.results.getValues()(0).v.i32 + if (ret > 7 || ret < 0) { // weird multi thread issues + ignore() + killAtRelease.set(true) + } + } + action.promise.trySuccess(res) + } catch { + case t: Throwable => action.promise.tryFailure(t) + } finally { + if (resetMemory) { + instance.reset() + } + WasmVm.logger.debug(s"functions: ${functions.size}") + WasmVm.logger.debug(s"memories: ${memories.size}") + // WasmContextSlot.clearCurrentContext() + // vmDataRef.set(null) + val count = callCounter.incrementAndGet() + if (count >= maxCalls) { + callCounter.set(0) + if (WasmVm.logger.isDebugEnabled) WasmVm.logger.debug(s"killing vm ${index} with remaining ${inFlight.get()} calls (${count})") + destroyAtRelease() + } + } + } + } + () + }(WasmUtils.executor) + } + + def reset(): Unit = instance.reset() + + def destroy(): Unit = { + if (WasmVm.logger.isDebugEnabled) WasmVm.logger.debug(s"destroy vm: ${index}") + pool.clear(this) + instance.close() + } + + def isBusy(): Boolean = { + inFlight.get() > 0 + } + + def destroyAtRelease(): Unit = { + ignore() + killAtRelease.set(true) + } + + def release(): Unit = { + if (killAtRelease.get()) { + queue.offer(WasmVmAction.WasmVmKillAction) + } else { + pool.release(this) + } + } + + def lastUsedAt(): Long = lastUsage.get() + + def hasNotBeenUsedInTheLast(duration: FiniteDuration): Boolean = !hasBeenUsedInTheLast(duration) + def
consumesMoreThanMemoryPercent(percent: Double): Boolean = { + val consumed: Double = (instance.getMemorySize.toDouble / maxMemory.toDouble) + val res = consumed > percent + if (logger.isDebugEnabled) logger.debug(s"consumesMoreThanMemoryPercent($percent) = (${instance.getMemorySize} / $maxMemory) > $percent : $res : (${consumed * 100.0}%)") + res + } + def tooSlow(max: Long): Boolean = callDurationReservoirNs.getSnapshot.getMean.toLong > max + + def hasBeenUsedInTheLast(duration: FiniteDuration): Boolean = { + val now = System.currentTimeMillis() + val limit = lastUsage.get() + duration.toMillis + now < limit + } + + def ignore(): Unit = pool.ignore(this) + + def initialized(): Boolean = initializedRef.get() + + def initialize(f: => Any): Unit = { + if (initializedRef.compareAndSet(false, true)) { + f + } + } + + def finitialize[A](f: => Future[A]): Future[Unit] = { + if (initializedRef.compareAndSet(false, true)) { + f.map(_ => ())(pool.env.otoroshiExecutionContext) + } else { + ().vfuture + } + } + + def call( + parameters: WasmFunctionParameters, + context: Option[VmData], + )(implicit env: Env, ec: ExecutionContext): Future[Either[JsValue, (String, ResultsWrapper)]] = { + val promise = Promise[Either[JsValue, (String, ResultsWrapper)]]() + inFlight.incrementAndGet() + lastUsage.set(System.currentTimeMillis()) + queue.offer(WasmVmAction.WasmVmCallAction(parameters, context, promise)) + promise.future + } +} + +case class WasmVmPoolAction(promise: Promise[WasmVm], options: WasmVmInitOptions) { + private[wasm] def provideVm(vm: WasmVm): Unit = promise.trySuccess(vm) + private[wasm] def fail(e: Throwable): Unit = promise.tryFailure(e) +} + +object WasmVmPool { + + private[wasm] val logger = Logger("otoroshi-wasm-vm-pool") + private[wasm] val engine = new WasmOtoroshiEngine() + private val instances = new UnboundedTrieMap[String, WasmVmPool]() + + def allInstances(): Map[String, WasmVmPool] = instances.synchronized { + instances.toMap + } + + def forPlugin(plugin: 
WasmPlugin)(implicit env: Env): WasmVmPool = instances.synchronized { + val key = plugin.id // s"plugin://${plugin.id}?cfg=${plugin.config.json.stringify.sha512}" + instances.getOrUpdate(key) { + new WasmVmPool(key, None, env) + } + } + + def forConfig(config: => WasmConfig)(implicit env: Env): WasmVmPool = instances.synchronized { + val key = s"${config.source.cacheKey}?cfg=${config.json.stringify.sha512}" + instances.getOrUpdate(key) { + new WasmVmPool(key, config.some, env) + } + } + + private[wasm] def removePlugin(id: String): Unit = instances.synchronized { + instances.remove(id) + } +} + +class WasmVmPool(stableId: => String, optConfig: => Option[WasmConfig], val env: Env) { + + WasmVmPool.logger.debug("new WasmVmPool") + + private val engine = new WasmOtoroshiEngine() + private val counter = new AtomicInteger(-1) + private val templateRef = new AtomicReference[WasmOtoroshiTemplate](null) + private[wasm] val availableVms = new ConcurrentLinkedQueue[WasmVm]() + private[wasm] val inUseVms = new ConcurrentLinkedQueue[WasmVm]() + private val creatingRef = new AtomicBoolean(false) + private val lastPluginVersion = new AtomicReference[String](null) + private val requestsSource = Source.queue[WasmVmPoolAction](env.wasmQueueBufferSize, OverflowStrategy.dropTail) + private val prioritySource = Source.queue[WasmVmPoolAction](env.wasmQueueBufferSize, OverflowStrategy.dropTail) + private val (priorityQueue, requestsQueue) = { + prioritySource + .mergePrioritizedMat(requestsSource, 99, 1, false)(Keep.both) + .map(handleAction) + .toMat(Sink.ignore)(Keep.both) + .run()(env.otoroshiMaterializer)._1 + } + + // unqueue actions from the action queue + private def handleAction(action: WasmVmPoolAction): Unit = try { + wasmConfig() match { + case None => + // if we cannot find the current wasm config, something is wrong, we destroy the pool + destroyCurrentVms() + WasmVmPool.removePlugin(stableId) + action.fail(new RuntimeException(s"No more plugin ${stableId}")) + case 
Some(wcfg) => { + // first we ensure the wasm source has been fetched + if (!wcfg.source.isCached()(env)) { + wcfg.source.getWasm()(env, env.otoroshiExecutionContext).andThen { + case _ => priorityQueue.offer(action) + }(env.otoroshiExecutionContext) + } else { + val changed = hasChanged(wcfg) + val available = hasAvailableVm(wcfg) + val creating = isVmCreating() + val atMax = atMaxPoolCapacity(wcfg) + // then we check if the underlying wasmcode + config has not changed since last time + if (changed) { + // if so, we destroy all current vms and recreate a new one + WasmVmPool.logger.warn("plugin has changed, destroying old instances") + destroyCurrentVms() + createVm(wcfg, action.options) + } + // check if a vm is available + if (!available) { + // if not, but a new one is creating, just wait a little bit more + if (creating) { + priorityQueue.offer(action) + } else { + // check if we hit the max possible instances + if (atMax) { + // if so, just wait + priorityQueue.offer(action) + } else { + // if not, create a new instance because we need one + createVm(wcfg, action.options) + priorityQueue.offer(action) + } + } + } else { + // if so, acquire one + val vm = acquireVm() + action.provideVm(vm) + } + } + } + } + } catch { + case t: Throwable => + t.printStackTrace() + action.fail(t) + } + + // create a new vm for the pool + // we try to create vm one by one and to not create more than needed + private def createVm(config: WasmConfig, options: WasmVmInitOptions): Unit = synchronized { + if (creatingRef.compareAndSet(false, true)) { + val index = counter.incrementAndGet() + WasmVmPool.logger.debug(s"creating vm: ${index}") + if (templateRef.get() == null) { + if (!config.source.isCached()(env)) { + // this part should never happen anymore, but just in case + WasmVmPool.logger.warn("fetching missing source") + Await.result(config.source.getWasm()(env, env.otoroshiExecutionContext), 30.seconds) + } + lastPluginVersion.set(computeHash(config, config.source.cacheKey, 
WasmUtils.scriptCache(env))) + val cache = WasmUtils.scriptCache(env) + val key = config.source.cacheKey + val wasm = cache(key).asInstanceOf[CachedWasmScript].script + val hash = wasm.sha256 + val resolver = new WasmSourceResolver() + val source = resolver.resolve("wasm", wasm.toByteBuffer.array()) + templateRef.set(new WasmOtoroshiTemplate(engine, hash, new Manifest( + Seq[org.extism.sdk.wasm.WasmSource](source).asJava, + new MemoryOptions(config.memoryPages), + config.config.asJava, + config.allowedHosts.asJava, + config.allowedPaths.asJava + ))) + } + val template = templateRef.get() + val vmDataRef = new AtomicReference[VmData](null) + val addedFunctions = options.addHostFunctions(vmDataRef) + val functions: Array[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] = if (options.importDefaultHostFunctions) { + HostFunctions.getFunctions(config, stableId, None)(env, env.otoroshiExecutionContext) ++ addedFunctions + } else { + addedFunctions.toArray[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] + } + val memories = LinearMemories.getMemories(config) + val instance = template.instantiate(engine, functions, memories, config.wasi) + val vm = WasmVm(index, config.killOptions.maxCalls, config.memoryPages * (64L * 1024L), options.resetMemory, instance, vmDataRef, memories, functions, this) + availableVms.offer(vm) + creatingRef.compareAndSet(true, false) + } + } + + // acquire an available vm for work + private def acquireVm(): WasmVm = synchronized { + if (availableVms.size() > 0) { + availableVms.synchronized { + val vm = availableVms.poll() + availableVms.remove(vm) + inUseVms.offer(vm) + vm + } + } else { + throw new RuntimeException("no instances available") + } + } + + // release the vm to be available for other tasks + private[wasm] def release(vm: WasmVm): Unit = synchronized { + availableVms.synchronized { + availableVms.offer(vm) + inUseVms.remove(vm) + } + } + + // do not consider the vm anymore for more work (the vm is being dropped for 
some reason) + private[wasm] def ignore(vm: WasmVm): Unit = synchronized { + availableVms.synchronized { + inUseVms.remove(vm) + } + } + + // do not consider the vm anymore for more work (the vm is being dropped for some reason) + private[wasm] def clear(vm: WasmVm): Unit = synchronized { + availableVms.synchronized { + availableVms.remove(vm) + } + } + + private[wasm] def wasmConfig(): Option[WasmConfig] = { + optConfig.orElse(env.proxyState.wasmPlugin(stableId).map(_.config)) + } + + private def hasAvailableVm(plugin: WasmConfig): Boolean = availableVms.size() > 0 && (inUseVms.size < plugin.instances) + + private def isVmCreating(): Boolean = creatingRef.get() + + private def atMaxPoolCapacity(plugin: WasmConfig): Boolean = (availableVms.size + inUseVms.size) >= plugin.instances + + // close the current pool + private[wasm] def close(): Unit = availableVms.synchronized { + engine.close() + } + + // destroy all vms and clear everything in order to destroy the current pool + private[wasm] def destroyCurrentVms(): Unit = availableVms.synchronized { + WasmVmPool.logger.info("destroying all vms") + availableVms.asScala.foreach(_.destroy()) + availableVms.clear() + inUseVms.clear() + //counter.set(0) + templateRef.set(null) + creatingRef.set(false) + lastPluginVersion.set(null) + } + + // compute the current hash for a tuple (wasmcode + config) + private def computeHash(config: WasmConfig, key: String, cache: UnboundedTrieMap[String, CacheableWasmScript]): String = { + config.json.stringify.sha512 + "#" + cache.get(key).map { + case CacheableWasmScript.CachedWasmScript(wasm, _) => wasm.sha512 + case _ => "fetching" + }.getOrElse("null") + } + + // compute if the source (wasm code + config) is the same than current + private def hasChanged(config: WasmConfig): Boolean = availableVms.synchronized { + val key = config.source.cacheKey + val cache = WasmUtils.scriptCache(env) + var oldHash = lastPluginVersion.get() + if (oldHash == null) { + oldHash = computeHash(config, 
key, cache) + lastPluginVersion.set(oldHash) + } + cache.get(key) match { + case Some(CacheableWasmScript.CachedWasmScript(_, _)) => { + val currentHash = computeHash(config, key, cache) + oldHash != currentHash + } + case _ => false + } + } + + // get a pooled vm when one available. + // Do not forget to release it after usage + def getPooledVm(options: WasmVmInitOptions = WasmVmInitOptions.empty()): Future[WasmVm] = { + val p = Promise[WasmVm]() + requestsQueue.offer(WasmVmPoolAction(p, options)) + p.future + } + + // borrow a vm for sync operations + def withPooledVm[A](options: WasmVmInitOptions = WasmVmInitOptions.empty())(f: WasmVm => A): Future[A] = { + implicit val ev = env + implicit val ec = env.otoroshiExecutionContext + getPooledVm(options).flatMap { vm => + val p = Promise[A]() + try { + val ret = f(vm) + p.trySuccess(ret) + } catch { + case e: Throwable => + p.tryFailure(e) + } finally { + vm.release() + } + p.future + } + } + + // borrow a vm for async operations + def withPooledVmF[A](options: WasmVmInitOptions = WasmVmInitOptions.empty())(f: WasmVm => Future[A]): Future[A] = { + implicit val ev = env + implicit val ec = env.otoroshiExecutionContext + getPooledVm(options).flatMap { vm => + f(vm).andThen { + case _ => vm.release() + } + } + } +} + +case class WasmVmInitOptions( + importDefaultHostFunctions: Boolean = true, + resetMemory: Boolean = true, + addHostFunctions: (AtomicReference[VmData]) => Seq[WasmOtoroshiHostFunction[_ <: WasmOtoroshiHostUserData]] = _ => Seq.empty +) + +object WasmVmInitOptions { + def empty(): WasmVmInitOptions = WasmVmInitOptions( + importDefaultHostFunctions = true, + resetMemory = true, + addHostFunctions = _ => Seq.empty + ) +} + +// this job tries to kill unused wasm vms and unused pools to save memory +class WasmVmPoolCleaner extends Job { + + private val logger = Logger("otoroshi-wasm-vm-pool-cleaner") + + override def uniqueId: JobId = JobId("otoroshi.wasm.WasmVmPoolCleaner") + + override def visibility: 
NgPluginVisibility = NgPluginVisibility.NgInternal + + override def steps: Seq[NgStep] = Seq(NgStep.Job) + + override def kind: JobKind = JobKind.ScheduledEvery + + override def starting: JobStarting = JobStarting.Automatically + + override def instantiation(ctx: JobContext, env: Env): JobInstantiation = JobInstantiation.OneInstancePerOtoroshiInstance + + override def initialDelay(ctx: JobContext, env: Env): Option[FiniteDuration] = 10.seconds.some + + override def interval(ctx: JobContext, env: Env): Option[FiniteDuration] = 60.seconds.some + + override def jobRun(ctx: JobContext)(implicit env: Env, ec: ExecutionContext): Future[Unit] = { + val config = env.datastores.globalConfigDataStore.latest().plugins.config.select("wasm-vm-pool-cleaner-config").asOpt[JsObject].getOrElse(Json.obj()) + val globalNotUsedDuration = config.select("not-used-duration").asOpt[Long].map(v => v.millis).getOrElse(5.minutes) + WasmVmPool.allInstances().foreach { + case (key, pool) => + if (pool.inUseVms.isEmpty && pool.availableVms.isEmpty) { + logger.warn(s"will destroy 1 wasm vm pool") + pool.destroyCurrentVms() + pool.close() + WasmVmPool.removePlugin(key) + } else { + val options = pool.wasmConfig().map(_.killOptions) + if (!options.exists(_.immortal)) { + val maxDur = options.map(_.maxUnusedDuration).getOrElse(globalNotUsedDuration) + val unusedVms = pool.availableVms.asScala.filter(_.hasNotBeenUsedInTheLast(maxDur)) + val tooMuchMemoryVms = (pool.availableVms.asScala ++ pool.inUseVms.asScala).filter(_.consumesMoreThanMemoryPercent(options.map(_.maxMemoryUsage).getOrElse(0.9))) + val tooSlowVms = (pool.availableVms.asScala ++ pool.inUseVms.asScala).filter(_.tooSlow(options.map(_.maxAvgCallDuration.toNanos).getOrElse(1.day.toNanos))) + val allVms = unusedVms ++ tooMuchMemoryVms ++ tooSlowVms + if (allVms.nonEmpty) { + logger.warn(s"will destroy ${allVms.size} wasm vms") + if (unusedVms.nonEmpty) logger.warn(s" - ${unusedVms.size} because unused for more than ${maxDur.toHours} hours") +
if (tooMuchMemoryVms.nonEmpty) logger.warn(s" - ${tooMuchMemoryVms.size} because of too much memory used") + if (tooSlowVms.nonEmpty) logger.warn(s" - ${tooSlowVms.size} because of avg call duration too long") + } + allVms.foreach { vm => + if (vm.isBusy()) { + vm.destroyAtRelease() + } else { + vm.ignore() + vm.destroy() + } + } + } + } + } + ().vfuture + } +} + +case class WasmVmKillOptions( + immortal: Boolean = false, + maxCalls: Int = Int.MaxValue, + maxMemoryUsage: Double = 0.9, + maxAvgCallDuration: FiniteDuration = 1.day, + maxUnusedDuration: FiniteDuration = 5.minutes, +) { + def json: JsValue = WasmVmKillOptions.format.writes(this) +} + +object WasmVmKillOptions { + val default = WasmVmKillOptions() + val format = new Format[WasmVmKillOptions] { + override def writes(o: WasmVmKillOptions): JsValue = Json.obj( + "immortal" -> o.immortal, + "max_calls" -> o.maxCalls, + "max_memory_usage" -> o.maxMemoryUsage, + "max_avg_call_duration" -> o.maxAvgCallDuration.toMillis, + "max_unused_duration" -> o.maxUnusedDuration.toMillis, + ) + override def reads(json: JsValue): JsResult[WasmVmKillOptions] = Try { + WasmVmKillOptions( + immortal = json.select("immortal").asOpt[Boolean].getOrElse(false), + maxCalls = json.select("max_calls").asOpt[Int].getOrElse(Int.MaxValue), + maxMemoryUsage = json.select("max_memory_usage").asOpt[Double].getOrElse(0.9), + maxAvgCallDuration = json.select("max_avg_call_duration").asOpt[Long].map(_.millis).getOrElse(1.day), + maxUnusedDuration = json.select("max_unused_duration").asOpt[Long].map(_.millis).getOrElse(5.minutes), + ) + } match { + case Failure(e) => JsError(e.getMessage) + case Success(e) => JsSuccess(e) + } + } +} \ No newline at end of file diff --git a/otoroshi/app/wasm/types.scala b/otoroshi/app/wasm/types.scala new file mode 100644 index 0000000000..837f76cd13 --- /dev/null +++ b/otoroshi/app/wasm/types.scala @@ -0,0 +1,114 @@ +package otoroshi.wasm + +import org.extism.sdk.Results +import org.extism.sdk.wasmotoroshi._ 
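The cleaner above marks a vm for destruction as soon as any `WasmVmKillOptions` threshold is exceeded, unless the pool is flagged immortal. A minimal self-contained sketch of that decision, using hypothetical `VmStats`/`KillOptions` stand-ins (not the Otoroshi API) with the same defaults as the diff:

```scala
import scala.concurrent.duration._

// Hypothetical snapshot of a vm's health, standing in for what the cleaner
// reads off a pooled WasmVm instance.
final case class VmStats(
  unusedFor: FiniteDuration,         // time since last call
  memoryUsage: Double,               // fraction of max memory, 0.0 to 1.0
  avgCallDuration: FiniteDuration    // average duration of a call
)

// Mirrors the WasmVmKillOptions defaults from this diff.
final case class KillOptions(
  immortal: Boolean = false,
  maxMemoryUsage: Double = 0.9,
  maxAvgCallDuration: FiniteDuration = 1.day,
  maxUnusedDuration: FiniteDuration = 5.minutes
)

// Any exceeded threshold marks the vm for destruction, unless immortal.
def shouldKill(stats: VmStats, opts: KillOptions): Boolean =
  !opts.immortal && (
    stats.unusedFor > opts.maxUnusedDuration ||
    stats.memoryUsage > opts.maxMemoryUsage ||
    stats.avgCallDuration > opts.maxAvgCallDuration
  )
```

The real job additionally distinguishes busy vms (`destroyAtRelease()`) from idle ones (`destroy()`), as shown in the patch.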
+import play.api.libs.json._ +import otoroshi.utils.syntax.implicits._ + +import java.nio.charset.StandardCharsets + +sealed abstract class WasmFunctionParameters { + def functionName: String + def input: Option[String] + def parameters: Option[WasmOtoroshiParameters] + def resultSize: Option[Int] + def call(plugin: WasmOtoroshiInstance): Either[JsValue, (String, ResultsWrapper)] + def withInput(input: Option[String]): WasmFunctionParameters + def withFunctionName(functionName: String): WasmFunctionParameters +} + +object WasmFunctionParameters { + def from(functionName: String, input: Option[String], parameters: Option[WasmOtoroshiParameters], resultSize: Option[Int]) = { + (input, parameters, resultSize) match { + case (_, Some(p), Some(s)) => BothParamsResults(functionName, p, s) + case (_, Some(p), None) => NoResult(functionName, p) + case (_, None, Some(s)) => NoParams(functionName, s) + case (Some(in), None, None) => ExtismFuntionCall(functionName, in) + case _ => UnknownCombination() + } + } + + case class UnknownCombination(functionName: String = "unknown", + input: Option[String] = None, + parameters: Option[WasmOtoroshiParameters] = None, + resultSize: Option[Int] = None) + extends WasmFunctionParameters { + override def call(plugin: WasmOtoroshiInstance): Either[JsValue, (String, ResultsWrapper)] = { + Left(Json.obj("error" -> "bad call combination")) + } + def withInput(input: Option[String]): WasmFunctionParameters = this.copy(input = input) + def withFunctionName(functionName: String): WasmFunctionParameters = this.copy(functionName = functionName) + } + + case class NoResult(functionName: String, params: WasmOtoroshiParameters, + input: Option[String] = None, + resultSize: Option[Int] = None) extends WasmFunctionParameters { + override def parameters: Option[WasmOtoroshiParameters] = Some(params) + override def call(plugin: WasmOtoroshiInstance): Either[JsValue, (String, ResultsWrapper)] = { + plugin.callWithoutResults(functionName, parameters.get) + 
Right[JsValue, (String, ResultsWrapper)](("", ResultsWrapper(new WasmOtoroshiResults(0), plugin))) + } + override def withInput(input: Option[String]): WasmFunctionParameters = this.copy(input = input) + override def withFunctionName(functionName: String): WasmFunctionParameters = this.copy(functionName = functionName) + } + + case class NoParams(functionName: String, result: Int, + input: Option[String] = None, + parameters: Option[WasmOtoroshiParameters] = None) extends WasmFunctionParameters { + override def resultSize: Option[Int] = Some(result) + override def call(plugin: WasmOtoroshiInstance): Either[JsValue, (String, ResultsWrapper)] = { + plugin.callWithoutParams(functionName, resultSize.get) + .right + .map(_ => ("", ResultsWrapper(new WasmOtoroshiResults(0), plugin))) + } + override def withInput(input: Option[String]): WasmFunctionParameters = this.copy(input = input) + override def withFunctionName(functionName: String): WasmFunctionParameters = this.copy(functionName = functionName) + } + + case class BothParamsResults(functionName: String, params: WasmOtoroshiParameters, result: Int, + input: Option[String] = None) extends WasmFunctionParameters { + override def parameters: Option[WasmOtoroshiParameters] = Some(params) + override def resultSize: Option[Int] = Some(result) + override def call(plugin: WasmOtoroshiInstance): Either[JsValue, (String, ResultsWrapper)] = { + plugin.call(functionName, parameters.get, resultSize.get) + .right + .map(res => ("", ResultsWrapper(res, plugin))) + } + override def withInput(input: Option[String]): WasmFunctionParameters = this.copy(input = input) + override def withFunctionName(functionName: String): WasmFunctionParameters = this.copy(functionName = functionName) + } + + case class ExtismFuntionCall(functionName: String, + in: String, + parameters: Option[WasmOtoroshiParameters] = None, + resultSize: Option[Int] = None) extends WasmFunctionParameters { + override def input: Option[String] = Some(in) + override def 
call(plugin: WasmOtoroshiInstance): Either[JsValue, (String, ResultsWrapper)] = { + plugin.extismCall(functionName, input.get.getBytes(StandardCharsets.UTF_8)) + .right + .map { str => + (str, ResultsWrapper(new WasmOtoroshiResults(0), plugin)) + } + } + + override def withInput(input: Option[String]): WasmFunctionParameters = this.copy(in = input.get) + override def withFunctionName(functionName: String): WasmFunctionParameters = this.copy(functionName = functionName) + } + + case class OPACall(functionName: String, pointers: Option[OPAWasmVm] = None, in: String) extends WasmFunctionParameters { + override def input: Option[String] = Some(in) + + override def call(plugin: WasmOtoroshiInstance): Either[JsValue, (String, ResultsWrapper)] = { + if (functionName == "initialize") + OPA.initialize(plugin) + else + OPA.evaluate(plugin, pointers.get.opaDataAddr, pointers.get.opaBaseHeapPtr, in) + } + + override def withInput(input: Option[String]): WasmFunctionParameters = this.copy(in = input.get) + + override def withFunctionName(functionName: String): WasmFunctionParameters = this + override def parameters: Option[WasmOtoroshiParameters] = None + override def resultSize: Option[Int] = None + } +} \ No newline at end of file diff --git a/otoroshi/app/wasm/wasm.scala b/otoroshi/app/wasm/wasm.scala index 0ad2a0d255..113a9379d7 100644 --- a/otoroshi/app/wasm/wasm.scala +++ b/otoroshi/app/wasm/wasm.scala @@ -1,36 +1,19 @@ package otoroshi.wasm -import akka.stream.OverflowStrategy -import akka.stream.scaladsl.{Keep, Sink, Source, SourceQueueWithComplete} import akka.util.ByteString -import org.extism.sdk.manifest.{Manifest, MemoryOptions} -import org.extism.sdk.parameters.{Parameters, Results} -import org.extism.sdk.wasm.WasmSourceResolver -import org.extism.sdk.{Context, HostFunction, HostUserData, Plugin} -import org.joda.time.DateTime +import org.extism.sdk.wasmotoroshi._ import otoroshi.env.Env import otoroshi.models.{WSProxyServerJson, WasmManagerSettings} import 
otoroshi.next.models.NgTlsConfig import otoroshi.next.plugins.api._ -import otoroshi.security.IdGenerator -import otoroshi.utils.TypedMap +import otoroshi.utils.cache.types.UnboundedTrieMap import otoroshi.utils.http.MtlsConfig import otoroshi.utils.syntax.implicits._ -import otoroshi.wasm.proxywasm.Result -import otoroshi.wasm.proxywasm.VmData -import play.api.Logger import play.api.libs.json._ -import play.api.libs.ws.{DefaultWSCookie, WSCookie} -import play.api.mvc.Cookie -import java.nio.charset.StandardCharsets import java.nio.file.{Files, Paths} -import java.util.concurrent.Executors -import java.util.concurrent.atomic.{AtomicBoolean, AtomicInteger} -import scala.collection.concurrent.TrieMap -import scala.concurrent.duration.{Duration, DurationLong, FiniteDuration, MILLISECONDS} -import scala.concurrent.{Await, ExecutionContext, Future, Promise} -import scala.jdk.CollectionConverters._ +import scala.concurrent.duration.{DurationLong, FiniteDuration, MILLISECONDS} +import scala.concurrent.{ExecutionContext, Future, Promise} import scala.util.{Failure, Success, Try} case class WasmDataRights(read: Boolean = false, write: Boolean = false) @@ -65,6 +48,7 @@ sealed trait WasmSourceKind { def getConfig(path: String, opts: JsValue)(implicit env: Env, ec: ExecutionContext): Future[Option[WasmConfig]] = None.vfuture } + object WasmSourceKind { case object Unknown extends WasmSourceKind { def name: String = "Unknown" @@ -204,6 +188,13 @@ case class WasmSource(kind: WasmSourceKind, path: String, opts: JsValue = Json.o def json: JsValue = WasmSource.format.writes(this) def cacheKey = s"${kind.name.toLowerCase}://${path}" def getConfig()(implicit env: Env, ec: ExecutionContext): Future[Option[WasmConfig]] = kind.getConfig(path, opts) + def isCached()(implicit env: Env): Boolean = { + val cache = WasmUtils.scriptCache(env) + cache.get(cacheKey) match { + case Some(CacheableWasmScript.CachedWasmScript(_, _)) => true + case _ => false + } + } def getWasm()(implicit env: 
Env, ec: ExecutionContext): Future[Either[JsValue, ByteString]] = { val cache = WasmUtils.scriptCache(env) def fetchAndAddToCache(): Future[Either[JsValue, ByteString]] = { @@ -231,6 +222,7 @@ case class WasmSource(kind: WasmSourceKind, path: String, opts: JsValue = Json.o } } } + object WasmSource { val format = new Format[WasmSource] { override def writes(o: WasmSource): JsValue = Json.obj( @@ -306,6 +298,7 @@ sealed trait WasmVmLifetime { def name: String def json: JsValue = JsString(name) } + object WasmVmLifetime { case object Invocation extends WasmVmLifetime { def name: String = "Invocation" } @@ -322,18 +315,22 @@ object WasmVmLifetime { case class WasmConfig( source: WasmSource = WasmSource(WasmSourceKind.Unknown, "", Json.obj()), - memoryPages: Int = 4, + memoryPages: Int = 20, functionName: Option[String] = None, config: Map[String, String] = Map.empty, allowedHosts: Seq[String] = Seq.empty, allowedPaths: Map[String, String] = Map.empty, //// - lifetime: WasmVmLifetime = WasmVmLifetime.Forever, + // lifetime: WasmVmLifetime = WasmVmLifetime.Forever, wasi: Boolean = false, opa: Boolean = false, instances: Int = 1, + killOptions: WasmVmKillOptions = WasmVmKillOptions.default, authorizations: WasmAuthorizations = WasmAuthorizations() ) extends NgPluginConfig { + // still here for compat reason + def lifetime: WasmVmLifetime = WasmVmLifetime.Forever + def pool()(implicit env: Env): WasmVmPool = WasmVmPool.forConfig(this) def json: JsValue = Json.obj( "source" -> source.json, "memoryPages" -> memoryPages, @@ -343,9 +340,10 @@ case class WasmConfig( "allowedPaths" -> allowedPaths, "wasi" -> wasi, "opa" -> opa, - "lifetime" -> lifetime.json, + // "lifetime" -> lifetime.json, "authorizations" -> authorizations.json, - "instances" -> instances + "instances" -> instances, + "killOptions" -> killOptions.json, ) } @@ -379,31 +377,32 @@ object WasmConfig { } WasmConfig( source = source, - memoryPages = (json \ "memoryPages").asOpt[Int].getOrElse(4), + memoryPages = 
(json \ "memoryPages").asOpt[Int].getOrElse(20), functionName = (json \ "functionName").asOpt[String].filter(_.nonEmpty), config = (json \ "config").asOpt[Map[String, String]].getOrElse(Map.empty), allowedHosts = (json \ "allowedHosts").asOpt[Seq[String]].getOrElse(Seq.empty), allowedPaths = (json \ "allowedPaths").asOpt[Map[String, String]].getOrElse(Map.empty), wasi = (json \ "wasi").asOpt[Boolean].getOrElse(false), opa = (json \ "opa").asOpt[Boolean].getOrElse(false), - lifetime = json - .select("lifetime") - .asOpt[String] - .flatMap(WasmVmLifetime.parse) - .orElse( - (json \ "preserve").asOpt[Boolean].map { - case true => WasmVmLifetime.Request - case false => WasmVmLifetime.Forever - } - ) - .getOrElse(WasmVmLifetime.Forever), + // lifetime = json + // .select("lifetime") + // .asOpt[String] + // .flatMap(WasmVmLifetime.parse) + // .orElse( + // (json \ "preserve").asOpt[Boolean].map { + // case true => WasmVmLifetime.Request + // case false => WasmVmLifetime.Forever + // } + // ) + // .getOrElse(WasmVmLifetime.Forever), authorizations = (json \ "authorizations") .asOpt[WasmAuthorizations](WasmAuthorizations.format.reads) .orElse((json \ "accesses").asOpt[WasmAuthorizations](WasmAuthorizations.format.reads)) .getOrElse { WasmAuthorizations() }, - instances = json.select("instances").asOpt[Int].getOrElse(1) + instances = json.select("instances").asOpt[Int].getOrElse(1), + killOptions = json.select("killOptions").asOpt[JsValue].flatMap(v => WasmVmKillOptions.format.reads(v).asOpt).getOrElse(WasmVmKillOptions.default) ) } match { case Failure(ex) => JsError(ex.getMessage) @@ -413,20 +412,15 @@ object WasmConfig { } } -object WasmContextSlot { - private val _currentContext = new ThreadLocal[Any]() - def getCurrentContext(): Option[Any] = Option(_currentContext.get()) - private def setCurrentContext(value: Any): Unit = _currentContext.set(value) - private def clearCurrentContext(): Unit = _currentContext.remove() -} object ResultsWrapper { - def apply(results: 
Results): ResultsWrapper = new ResultsWrapper(results, None) - def apply(results: Results, plugin: Plugin): ResultsWrapper = new ResultsWrapper(results, Some(plugin)) + def apply(results: WasmOtoroshiResults): ResultsWrapper = new ResultsWrapper(results, None) + def apply(results: WasmOtoroshiResults, plugin: WasmOtoroshiInstance): ResultsWrapper = new ResultsWrapper(results, Some(plugin)) } -case class ResultsWrapper(results: Results, pluginOpt: Option[Plugin]) { + +case class ResultsWrapper(results: WasmOtoroshiResults, pluginOpt: Option[WasmOtoroshiInstance]) { def free(): Unit = try { if (results.getLength > 0) { - pluginOpt.foreach(_.freeResults(results)) + results.close() } } catch { case t: Throwable => @@ -435,202 +429,7 @@ case class ResultsWrapper(results: Results, pluginOpt: Option[Plugin]) { } } -class WasmContextSlot( - id: String, - instance: Int, - context: Context, - plugin: Plugin, - cfg: WasmConfig, - wsm: ByteString, - closed: AtomicBoolean, - updating: AtomicBoolean, - instanceId: String, - functions: Array[HostFunction[_ <: HostUserData]] -) { - - def callSync( - functionName: String, - input: Option[String], - parameters: Option[Parameters], - resultSize: Option[Int], - context: Option[VmData] - )(implicit env: Env, ec: ExecutionContext): Either[JsValue, (String, ResultsWrapper)] = { - if (closed.get()) { - val plug = WasmUtils.pluginCache.apply(s"$id-$instance") - plug.callSync(functionName, input, parameters, resultSize, context) - } else { - try { - context.foreach(ctx => WasmContextSlot.setCurrentContext(ctx)) - if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"calling instance $id-$instance") - WasmUtils.debugLog.debug(s"calling '${functionName}' on instance '$id-$instance'") - val res: Either[JsValue, (String, ResultsWrapper)] = env.metrics.withTimer("otoroshi.wasm.core.call") { - // TODO: need to split this !! 
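The inline `(input, parameters, resultSize)` match that the removed `callSync` carried (flagged by the `TODO: need to split this` comment) is exactly what this diff factors into the `WasmFunctionParameters` hierarchy in `types.scala`. A runnable miniature of that selection logic, with `Option[Int]` standing in for the real `WasmOtoroshiParameters` type:

```scala
// Miniature of the WasmFunctionParameters.from dispatch: the
// (input, parameters, resultSize) triple selects how a wasm export is invoked.
sealed trait CallShape
case object BothParamsResults extends CallShape // typed params in, sized results out
case object NoResult          extends CallShape // typed params in, nothing out
case object NoParams          extends CallShape // no params, sized results out
case object ExtismCall        extends CallShape // raw string in, raw string out
case object BadCombination    extends CallShape // rejected as "bad call combination"

def shapeOf(input: Option[String], parameters: Option[Int], resultSize: Option[Int]): CallShape =
  (input, parameters, resultSize) match {
    case (_, Some(_), Some(_)) => BothParamsResults
    case (_, Some(_), None)    => NoResult
    case (_, None, Some(_))    => NoParams
    case (Some(_), None, None) => ExtismCall
    case _                     => BadCombination
  }
```

Each case object corresponds to one case class in the new hierarchy, which keeps the call-site logic in `call(plugin)` next to the data it needs instead of in one large match.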
- (input, parameters, resultSize) match { - case (Some(in), Some(p), Some(s)) => - plugin - .call(functionName, p, s, in.getBytes(StandardCharsets.UTF_8)) - .right - .map(res => ("", ResultsWrapper(res, plugin))) - case (_, Some(p), None) => - plugin.callWithoutResults(functionName, p) - Right[JsValue, (String, ResultsWrapper)](("", ResultsWrapper(new Results(0), plugin))) - case (_, Some(p), Some(s)) => - plugin.call(functionName, p, s).right.map(res => ("", ResultsWrapper(res, plugin))) - case (_, None, Some(s)) => - plugin.callWithoutParams(functionName, s).right.map(_ => ("", ResultsWrapper(new Results(0), plugin))) - case (Some(in), None, None) => - plugin.call(functionName, in).right.map(str => (str, ResultsWrapper(new Results(0), plugin))) - case _ => Left(Json.obj("error" -> "bad call combination")) - } - } - env.metrics.withTimer("otoroshi.wasm.core.reset") { - plugin.reset() - } - env.metrics.withTimer("otoroshi.wasm.core.count-thunks") { - WasmUtils.logger.debug(s"thunks: ${functions.size}") - } - res - } catch { - case e: Throwable if e.getMessage.contains("wasm backtrace") => - WasmUtils.logger.error(s"error while invoking wasm function '${functionName}'", e) - Json - .obj( - "error" -> "wasm_error", - "error_description" -> JsArray(e.getMessage.split("\\n").filter(_.trim.nonEmpty).map(JsString.apply)) - ) - .left - case e: Throwable => - WasmUtils.logger.error(s"error while invoking wasm function '${functionName}'", e) - Json.obj("error" -> "wasm_error", "error_description" -> JsString(e.getMessage)).left - } finally { - context.foreach(ctx => WasmContextSlot.clearCurrentContext()) - } - } - } - - def callOpaSync(input: String)(implicit env: Env, ec: ExecutionContext): Either[JsValue, String] = { - if (closed.get()) { - val plug = WasmUtils.pluginCache.apply(s"$id-$instance") - plug.callOpaSync(input) - } else { - try { - val res = env.metrics.withTimer("otoroshi.wasm.core.call-opa") { - OPA.evaluate(plugin, input) - } - // 
env.metrics.withTimer("otoroshi.wasm.core.reset") { - // plugin.reset() - // } - res.right - } catch { - case e: Throwable if e.getMessage.contains("wasm backtrace") => - WasmUtils.logger.error(s"error while invoking wasm function 'opa'", e) - Json - .obj( - "error" -> "wasm_error", - "error_description" -> JsArray(e.getMessage.split("\\n").filter(_.trim.nonEmpty).map(JsString.apply)) - ) - .left - case e: Throwable => - WasmUtils.logger.error(s"error while invoking wasm function 'opa'", e) - Json.obj("error" -> "wasm_error", "error_description" -> JsString(e.getMessage)).left - } - } - } - - def call( - functionName: String, - input: Option[String], - parameters: Option[Parameters], - resultSize: Option[Int], - context: Option[VmData] - )(implicit env: Env, ec: ExecutionContext): Future[Either[JsValue, (String, ResultsWrapper)]] = { - val promise = Promise.apply[Either[JsValue, (String, ResultsWrapper)]]() - WasmUtils - .getInvocationQueueFor(id, instance) - .offer(WasmAction.WasmInvocation(() => callSync(functionName, input, parameters, resultSize, context), promise)) - promise.future - } - - def callOpa(input: String)(implicit env: Env, ec: ExecutionContext): Future[Either[JsValue, String]] = { - val promise = Promise.apply[Either[JsValue, String]]() - WasmUtils.getInvocationQueueFor(id, instance).offer(WasmAction.WasmOpaInvocation(() => callOpaSync(input), promise)) - promise.future - } - - def close(lifetime: WasmVmLifetime): Unit = { - if (lifetime == WasmVmLifetime.Invocation) { - if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"calling close on WasmContextSlot of ${id}") - forceClose() - } - } - - def forceClose(): Unit = { - if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"calling forceClose on WasmContextSlot of ${id}") - if (closed.compareAndSet(false, true)) { - try { - plugin.close() - context.free() - } catch { - case e: Throwable => e.printStackTrace() - } - } - } - - def needsUpdate(wasmConfig: WasmConfig, wasm: 
ByteString): Boolean = { - val configHasChanged = wasmConfig != cfg - val wasmHasChanged = wasm != wsm - if (WasmUtils.logger.isDebugEnabled && configHasChanged) - WasmUtils.logger.debug(s"plugin ${id} needs update because of config change") - if (WasmUtils.logger.isDebugEnabled && wasmHasChanged) - WasmUtils.logger.debug(s"plugin ${id} needs update because of wasm change") - configHasChanged || wasmHasChanged - } - - def updateIfNeeded( - pluginId: String, - config: WasmConfig, - wasm: ByteString, - attrsOpt: Option[TypedMap], - addHostFunctions: Seq[HostFunction[_ <: HostUserData]] - )(implicit env: Env, ec: ExecutionContext): WasmContextSlot = { - if (needsUpdate(config, wasm) && updating.compareAndSet(false, true)) { - - if (config.instances < cfg.instances) { - env.otoroshiActorSystem.scheduler.scheduleOnce(20.seconds) { // TODO: config ? - if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"trying to kill unused instances of ${pluginId}") - (config.instances to cfg.instances).map { idx => - WasmUtils.pluginCache.get(s"${pluginId}-${instance}").foreach(p => p.forceClose()) - WasmUtils.queues.remove(s"${pluginId}-${instance}") - WasmUtils.pluginCache.remove(s"$pluginId-$instance") - } - } - } - if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"scheduling update ${instanceId}") - WasmUtils - .getInvocationQueueFor(id, instance) - .offer(WasmAction.WasmUpdate(() => { - val plugin = WasmUtils.actuallyCreatePlugin( - instance, - wasm, - config, - pluginId, - attrsOpt, - addHostFunctions - ) - if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"updating ${instanceId}") - WasmUtils.pluginCache.put(s"$pluginId-$instance", plugin) - env.otoroshiActorSystem.scheduler.scheduleOnce(20.seconds) { // TODO: config ? 
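The per-slot lifecycle management removed here is superseded by the pool's `withPooledVm` (sync, release in `finally`) and `withPooledVmF` (async, release in `andThen`) helpers added earlier in this diff. The borrow/release discipline they enforce can be sketched in isolation with hypothetical `Vm`/`Pool` types (not the Otoroshi API):

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-ins for WasmVm / WasmVmPool, only to illustrate
// the acquire/use/release pattern.
final case class Vm(id: Int) { def call(in: String): String = s"vm-$id:$in" }

final class Pool(size: Int) {
  private val available = new ConcurrentLinkedQueue[Vm]()
  (1 to size).foreach(i => available.offer(Vm(i)))

  private def acquire(): Future[Vm] = Option(available.poll()) match {
    case Some(vm) => Future.successful(vm)
    case None     => Future.failed(new IllegalStateException("pool exhausted"))
  }
  private def release(vm: Vm): Unit = available.offer(vm)

  // sync borrow: always release, even if f throws (like withPooledVm)
  def withVm[A](f: Vm => A): Future[A] = acquire().map { vm =>
    try f(vm) finally release(vm)
  }

  // async borrow: release once the returned future settles (like withPooledVmF)
  def withVmF[A](f: Vm => Future[A]): Future[A] = acquire().flatMap { vm =>
    f(vm).andThen { case _ => release(vm) }
  }
}
```

The key point either way is that the vm goes back to the pool on every path, success or failure, which is what makes the explicit `forceClose`/`updateIfNeeded` bookkeeping above unnecessary.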
- if (WasmUtils.logger.isDebugEnabled) WasmUtils.logger.debug(s"delayed force close ${instanceId}") - if (!closed.get()) { - forceClose() - } - } - })) - } - this - } -} -class WasmContext(plugins: TrieMap[String, WasmContextSlot] = new TrieMap[String, WasmContextSlot]()) { +class WasmContext(plugins: UnboundedTrieMap[String, WasmContextSlot] = new UnboundedTrieMap[String, WasmContextSlot]()) { def put(id: String, slot: WasmContextSlot): Unit = plugins.put(id, slot) def get(id: String): Option[WasmContextSlot] = plugins.get(id) def close(): Unit = { @@ -641,347 +440,10 @@ class WasmContext(plugins: TrieMap[String, WasmContextSlot] = new TrieMap[String } } -sealed trait WasmAction -object WasmAction { - case class WasmOpaInvocation(call: () => Either[JsValue, String], promise: Promise[Either[JsValue, String]]) - extends WasmAction - case class WasmInvocation( - call: () => Either[JsValue, (String, ResultsWrapper)], - promise: Promise[Either[JsValue, (String, ResultsWrapper)]] - ) extends WasmAction - case class WasmUpdate(call: () => Unit) extends WasmAction -} - sealed trait CacheableWasmScript + object CacheableWasmScript { case class CachedWasmScript(script: ByteString, createAt: Long) extends CacheableWasmScript case class FetchingWasmScript(f: Future[Either[JsValue, ByteString]]) extends CacheableWasmScript } -object WasmUtils { - - private[wasm] val logger = Logger("otoroshi-wasm") - - val debugLog = Logger("otoroshi-wasm-debug") - - implicit val executor = ExecutionContext.fromExecutorService( - Executors.newWorkStealingPool(Math.max(32, (Runtime.getRuntime.availableProcessors * 4) + 1)) - ) - - // TODO: handle env.wasmCacheSize based on creation date ? 
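The `CacheableWasmScript` ADT kept by this diff encodes a single-flight cache: an entry is either the fetched bytes (`CachedWasmScript`) or the in-flight fetch (`FetchingWasmScript`), so concurrent callers share one download instead of each fetching the wasm source. A self-contained sketch of the idea with hypothetical names (`String` payloads instead of `ByteString`):

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// An entry is either the cached value or the fetch currently in flight.
sealed trait Entry
final case class Cached(value: String, at: Long) extends Entry
final case class Fetching(f: Future[String]) extends Entry

final class ScriptCache(fetch: String => Future[String]) {
  private val cache = TrieMap.empty[String, Entry]

  def get(key: String): Future[String] = cache.get(key) match {
    case Some(Cached(v, _)) => Future.successful(v)
    case Some(Fetching(f))  => f // piggyback on the fetch already in flight
    case None =>
      val f = fetch(key).map { v =>
        cache.put(key, Cached(v, System.currentTimeMillis())) // promote once done
        v
      }
      // note: real code should use putIfAbsent to close the check-then-act race
      cache.put(key, Fetching(f))
      f
  }
}
```

The timestamp kept alongside the bytes is what lets `needsUpdate`-style checks decide whether a cached script is stale.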
- private[wasm] val _script_cache: TrieMap[String, CacheableWasmScript] = new TrieMap[String, CacheableWasmScript]() - private[wasm] val pluginCache = new TrieMap[String, WasmContextSlot]() - private[wasm] val queues = new TrieMap[String, (DateTime, SourceQueueWithComplete[WasmAction])]() - private[wasm] val instancesCounter = new AtomicInteger(0) - - def scriptCache(implicit env: Env): TrieMap[String, CacheableWasmScript] = _script_cache - - def convertJsonCookies(wasmResponse: JsValue): Option[Seq[WSCookie]] = - wasmResponse - .select("cookies") - .asOpt[Seq[JsObject]] - .map { arr => - arr.map { c => - DefaultWSCookie( - name = c.select("name").asString, - value = c.select("value").asString, - maxAge = c.select("maxAge").asOpt[Long], - path = c.select("path").asOpt[String], - domain = c.select("domain").asOpt[String], - secure = c.select("secure").asOpt[Boolean].getOrElse(false), - httpOnly = c.select("httpOnly").asOpt[Boolean].getOrElse(false) - ) - } - } - - def convertJsonPlayCookies(wasmResponse: JsValue): Option[Seq[Cookie]] = - wasmResponse - .select("cookies") - .asOpt[Seq[JsObject]] - .map { arr => - arr.map { c => - Cookie( - name = c.select("name").asString, - value = c.select("value").asString, - maxAge = c.select("maxAge").asOpt[Int], - path = c.select("path").asOpt[String].getOrElse("/"), - domain = c.select("domain").asOpt[String], - secure = c.select("secure").asOpt[Boolean].getOrElse(false), - httpOnly = c.select("httpOnly").asOpt[Boolean].getOrElse(false), - sameSite = c.select("domain").asOpt[String].flatMap(Cookie.SameSite.parse) - ) - } - } - - private[wasm] def getInvocationQueueFor(id: String, instance: Int)(implicit - env: Env - ): SourceQueueWithComplete[WasmAction] = { - val key = s"$id-$instance" - queues.getOrUpdate(key) { - val stream = Source - .queue[WasmAction](env.wasmQueueBufferSize, OverflowStrategy.dropHead) - .mapAsync(1) { action => - Future.apply { - action match { - case WasmAction.WasmInvocation(invoke, promise) => - try { 
- val res = invoke() - promise.trySuccess(res) - } catch { - case e: Throwable => promise.tryFailure(e) - } - case WasmAction.WasmOpaInvocation(invoke, promise) => - try { - val res = invoke() - promise.trySuccess(res) - } catch { - case e: Throwable => promise.tryFailure(e) - } - case WasmAction.WasmUpdate(update) => - try { - update() - } catch { - case e: Throwable => e.printStackTrace() - } - } - }(executor) - } - (DateTime.now(), stream.toMat(Sink.ignore)(Keep.both).run()(env.otoroshiMaterializer)._1) - } - }._2 - - private[wasm] def internalCreateManifest(config: WasmConfig, wasm: ByteString, env: Env) = - env.metrics.withTimer("otoroshi.wasm.core.create-plugin.manifest") { - val resolver = new WasmSourceResolver() - val source = resolver.resolve("wasm", wasm.toByteBuffer.array()) - new Manifest( - Seq[org.extism.sdk.wasm.WasmSource](source).asJava, - new MemoryOptions(config.memoryPages), - config.config.asJava, - config.allowedHosts.asJava, - config.allowedPaths.asJava - ) - } - - private[wasm] def actuallyCreatePlugin( - instance: Int, - wasm: ByteString, - config: WasmConfig, - pluginId: String, - attrsOpt: Option[TypedMap], - addHostFunctions: Seq[HostFunction[_ <: HostUserData]] - )(implicit env: Env, ec: ExecutionContext): WasmContextSlot = - env.metrics.withTimer("otoroshi.wasm.core.act-create-plugin") { - if (WasmUtils.logger.isDebugEnabled) - WasmUtils.logger.debug(s"creating wasm plugin instance for ${pluginId}") - val manifest = internalCreateManifest(config, wasm, env) - val context = env.metrics.withTimer("otoroshi.wasm.core.create-plugin.context")(new Context()) - val functions: Array[HostFunction[_ <: HostUserData]] = - HostFunctions.getFunctions(config, pluginId, attrsOpt) ++ addHostFunctions - val plugin = env.metrics.withTimer("otoroshi.wasm.core.create-plugin.plugin") { - context.newPlugin( - manifest, - config.wasi, - functions, - LinearMemories.getMemories(config) - ) - } - new WasmContextSlot( - pluginId, - instance, - context, - 
plugin, - config, - wasm, - functions = functions, - closed = new AtomicBoolean(false), - updating = new AtomicBoolean(false), - instanceId = IdGenerator.uuid - ) - } - - private def callWasm( - wasm: ByteString, - config: WasmConfig, - defaultFunctionName: String, - input: Option[JsValue], - parameters: Option[Parameters], - resultSize: Option[Int], - pluginId: String, - attrsOpt: Option[TypedMap], - ctx: Option[VmData], - addHostFunctions: Seq[HostFunction[_ <: HostUserData]] - )(implicit env: Env, ec: ExecutionContext): Future[Either[JsValue, (String, ResultsWrapper)]] = - env.metrics.withTimerAsync("otoroshi.wasm.core.call-wasm") { - - WasmUtils.debugLog.debug("callWasm") - - val functionName = config.functionName.filter(_.nonEmpty).getOrElse(defaultFunctionName) - val instance = instancesCounter.incrementAndGet() % config.instances - - def createPlugin(): WasmContextSlot = { - if (config.lifetime == WasmVmLifetime.Forever) { - pluginCache - .getOrUpdate(s"$pluginId-$instance") { - actuallyCreatePlugin(instance, wasm, config, pluginId, None, addHostFunctions) - } - .seffectOn(_.updateIfNeeded(pluginId, config, wasm, None, addHostFunctions)) - } else { - actuallyCreatePlugin(instance, wasm, config, pluginId, attrsOpt, addHostFunctions) - } - } - - attrsOpt match { - case None => { - val slot = createPlugin() - if (config.opa) { - slot.callOpa(input.get.stringify).map { output => - slot.close(config.lifetime) - output.map(str => (str, ResultsWrapper(new Results(0)))) - } - } else { - slot.call(functionName, input.map(_.stringify), parameters, resultSize, ctx).map { output => - slot.close(config.lifetime) - output - } - } - } - case Some(attrs) => { - val context = attrs.get(otoroshi.next.plugins.Keys.WasmContextKey) match { - case None => { - val context = new WasmContext() - attrs.put(otoroshi.next.plugins.Keys.WasmContextKey -> context) - context - } - case Some(context) => context - } - val slot = context.get(pluginId) match { - case None => { - val plugin = 
createPlugin() - if (config.lifetime == WasmVmLifetime.Invocation) context.put(pluginId, plugin) - plugin - } - case Some(plugin) => plugin - } - if (config.opa) { - slot.callOpa(input.get.stringify).map { output => - slot.close(config.lifetime) - output.map(str => (str, ResultsWrapper(new Results(0)))) - } - } else { - slot.call(functionName, input.map(_.stringify), parameters, resultSize, ctx).map { output => - slot.close(config.lifetime) - output - } - } - } - } - } - - def execute( - config: WasmConfig, - defaultFunctionName: String, - input: JsValue, - attrs: Option[TypedMap], - ctx: Option[VmData] - )(implicit env: Env): Future[Either[JsValue, String]] = { - rawExecute(config, defaultFunctionName, input.some, None, None, attrs, ctx, Seq.empty).map(r => r.map(_._1)) - } - - def rawExecute( - _config: WasmConfig, - defaultFunctionName: String, - input: Option[JsValue], - parameters: Option[Parameters], - resultSize: Option[Int], - attrs: Option[TypedMap], - ctx: Option[VmData], - addHostFunctions: Seq[HostFunction[_ <: HostUserData]] - )(implicit env: Env): Future[Either[JsValue, (String, ResultsWrapper)]] = - env.metrics.withTimerAsync("otoroshi.wasm.core.raw-execute") { - val config = if (_config.opa) _config.copy(lifetime = WasmVmLifetime.Invocation) else _config - WasmUtils.debugLog.debug("execute") - val pluginId = config.source.kind match { - case WasmSourceKind.Local => { - env.proxyState.wasmPlugin(config.source.path) match { - case None => config.source.cacheKey - case Some(plugin) => plugin.config.source.cacheKey - } - } - case _ => config.source.cacheKey - } - scriptCache.get(pluginId) match { - case Some(CacheableWasmScript.FetchingWasmScript(fu)) => - fu.flatMap { _ => - rawExecute(config, defaultFunctionName, input, parameters, resultSize, attrs, ctx, addHostFunctions) - } - case Some(CacheableWasmScript.CachedWasmScript(script, _)) => { - env.metrics.withTimerAsync("otoroshi.wasm.core.get-config")(config.source.getConfig()).flatMap { - case None 
=> - WasmUtils.callWasm( - script, - config, - defaultFunctionName, - input, - parameters, - resultSize, - pluginId, - attrs, - ctx, - addHostFunctions - ) - case Some(finalConfig) => - val functionName = config.functionName.filter(_.nonEmpty).orElse(finalConfig.functionName) - WasmUtils.callWasm( - script, - finalConfig.copy(functionName = functionName), - defaultFunctionName, - input, - parameters, - resultSize, - pluginId, - attrs, - ctx, - addHostFunctions - ) - } - } - case None if config.source.kind == WasmSourceKind.Unknown => Left(Json.obj("error" -> "missing source")).future - case _ => - env.metrics.withTimerAsync("otoroshi.wasm.core.get-wasm")(config.source.getWasm()).flatMap { - case Left(err) => err.left.vfuture - case Right(wasm) => { - env.metrics.withTimerAsync("otoroshi.wasm.core.get-config")(config.source.getConfig()).flatMap { - case None => - WasmUtils.callWasm( - wasm, - config, - defaultFunctionName, - input, - parameters, - resultSize, - pluginId, - attrs, - ctx, - addHostFunctions - ) - case Some(finalConfig) => - val functionName = config.functionName.filter(_.nonEmpty).orElse(finalConfig.functionName) - WasmUtils.callWasm( - wasm, - finalConfig.copy(functionName = functionName), - defaultFunctionName, - input, - parameters, - resultSize, - pluginId, - attrs, - ctx, - addHostFunctions - ) - } - } - } - } - } -} diff --git a/otoroshi/build.sbt b/otoroshi/build.sbt index c335ae127f..ab6b171e1b 100644 --- a/otoroshi/build.sbt +++ b/otoroshi/build.sbt @@ -164,6 +164,7 @@ libraryDependencies ++= Seq( "org.sangria-graphql" %% "sangria" % "3.4.0", "org.bigtesting" % "routd" % "1.0.7", "com.nixxcode.jvmbrotli" % "jvmbrotli" % "0.2.0", + "io.azam.ulidj" % "ulidj" % "1.0.4", // using a custom one right now as current build is broken // "org.extism.sdk" % "extism" % "0.3.2", if (scalaLangVersion.startsWith("2.12")) { diff --git a/otoroshi/javascript/src/forms/ng_plugins/WasmPlugin.js b/otoroshi/javascript/src/forms/ng_plugins/WasmPlugin.js index 
6243715672..a8c4d2a3ae 100644 --- a/otoroshi/javascript/src/forms/ng_plugins/WasmPlugin.js +++ b/otoroshi/javascript/src/forms/ng_plugins/WasmPlugin.js @@ -47,7 +47,7 @@ const schema = { props: { defaultValue: 4, subTitle: - 'Configures memory for the Wasm runtime. Memory is described in units of pages (64KB) and represent contiguous chunks of addressable memory', + 'Configures memory for the Wasm runtime. Memory is described in units of pages (64KB) and represents contiguous chunks of addressable memory', }, }, functionName: { @@ -208,6 +208,60 @@ const schema = { }, }, }, + killOptions: { + label: 'wasm vm kill options', + type: 'form', + collapsable: true, + collapsed: false, + flow: [ + 'immortal', + 'max_calls', + 'max_memory_usage', + 'max_avg_call_duration', + 'max_unused_duration', + ], + schema: { + immortal: { + type: 'bool', + label: 'Immortal', + props: { + help: 'The vm instances cannot be killed', + }, + }, + max_calls: { + type: 'number', + label: 'Max calls', + suffix: 'calls', + props: { + help: 'The maximum number of calls before killing a wasm vm (the pool will reinstantiate a new one)', + }, + }, + max_memory_usage: { + type: 'number', + label: 'Max memory usage', + suffix: '%', + props: { + help: 'The maximum memory usage allowed before killing the wasm vm (the pool will reinstantiate a new one)', + }, + }, + max_avg_call_duration: { + type: 'number', + label: 'Max avg call duration', + suffix: 'ms.', + props: { + help: 'The maximum time allowed for a vm call before killing the wasm vm (the pool will reinstantiate a new one)', + }, + }, + max_unused_duration: { + type: 'number', + label: 'Max unused duration', + suffix: 'ms.', + props: { + help: 'The maximum time otoroshi waits before killing a wasm vm that is not called anymore (the pool will reinstantiate a new one)', + }, + } + }, + }, }; export default { @@ -219,8 +273,9 @@ export default { 'source', 'functionName', v.source.kind.toLowerCase() !== 'local' && 'wasi', - v.source.kind.toLowerCase() !== 
'local' && 'lifetime', + // v.source.kind.toLowerCase() !== 'local' && 'lifetime', v.source.kind.toLowerCase() !== 'local' && 'authorizations', + v.source.kind.toLowerCase() !== 'local' && 'killOptions', v.source.kind.toLowerCase() !== 'local' && { type: 'group', name: 'Advanced settings', diff --git a/otoroshi/javascript/src/pages/WasmPluginsPage.js b/otoroshi/javascript/src/pages/WasmPluginsPage.js index 2c2442c5d0..ff0a05554b 100644 --- a/otoroshi/javascript/src/pages/WasmPluginsPage.js +++ b/otoroshi/javascript/src/pages/WasmPluginsPage.js @@ -209,8 +209,14 @@ export class WasmPluginsPage extends Component { 'config.functionName', value.config.source.kind.toLowerCase() !== 'local' && 'config.instances', value.config.source.kind.toLowerCase() !== 'local' && 'config.config', - value.config.source.kind.toLowerCase() !== 'local' && 'config.lifetime', + //value.config.source.kind.toLowerCase() !== 'local' && 'config.lifetime', value.config.source.kind.toLowerCase() !== 'local' && 'config.opa', + value.config.source.kind.toLowerCase() !== 'local' && '<< item.description }, ]; - formFlow = ['_loc', 'id', 'name', 'description', 'tags', 'metadata', 'inspect_body', 'config']; + formFlow = ['_loc', 'id', 'name', 'description', 'tags', 'metadata', 'pool_capacity', 'inspect_body', 'config']; componentDidMount() { this.props.setTitle(`All Coraza WAF configs.`); diff --git a/otoroshi/test/Suites.scala b/otoroshi/test/Suites.scala index 65edbe0e6b..41e45d0a93 100644 --- a/otoroshi/test/Suites.scala +++ b/otoroshi/test/Suites.scala @@ -81,14 +81,12 @@ object OtoroshiTests { val suites = Seq( new BasicSpec, new AdminApiSpec(name, config), - new ProgrammaticApiSpec(name, config), new CircuitBreakerSpec(name, config), new AlertAndAnalyticsSpec(name, config), // new AnalyticsSpec(name, config), new ApiKeysSpec(name, config), new CanarySpec(name, config), new QuotasSpec(name, config), - new SidecarSpec(name, config), new JWTVerificationSpec(name, config), new 
JWTVerificationRefSpec(name, config), new SnowMonkeySpec(name, config), @@ -172,3 +170,11 @@ class ConfigCleanerTests extends Suites( new ConfigurationCleanupSpec() ) + +class CircuitBreakerTests extends Suites( + new CircuitBreakerSpec("InMemory", Configurations.InMemoryConfiguration) +) + +class AnalyticsTests extends Suites( + new AlertAndAnalyticsSpec("InMemory", Configurations.InMemoryConfiguration) +) \ No newline at end of file diff --git a/otoroshi/test/functional/AlertAndAnalyticsSpec.scala b/otoroshi/test/functional/AlertAndAnalyticsSpec.scala index 8e58b306f2..87b468d5b9 100644 --- a/otoroshi/test/functional/AlertAndAnalyticsSpec.scala +++ b/otoroshi/test/functional/AlertAndAnalyticsSpec.scala @@ -114,7 +114,7 @@ class AlertAndAnalyticsSpec(name: String, configurationSpec: => Configuration) e config <- getOtoroshiConfig() } yield config).futureValue - awaitF(6.seconds).futureValue + awaitF(12.seconds).futureValue getOtoroshiConfig().futureValue getOtoroshiApiKeys().futureValue @@ -123,7 +123,7 @@ class AlertAndAnalyticsSpec(name: String, configurationSpec: => Configuration) e createOtoroshiApiKey(apiKey).futureValue deleteOtoroshiApiKey(apiKey).futureValue - await(2.seconds) + await(12.seconds) println(counter.get()) counter.get() >= 16 mustBe true diff --git a/otoroshi/test/functional/CircuitBreakerSpec.scala b/otoroshi/test/functional/CircuitBreakerSpec.scala index dc5cbe5866..a39b5be949 100644 --- a/otoroshi/test/functional/CircuitBreakerSpec.scala +++ b/otoroshi/test/functional/CircuitBreakerSpec.scala @@ -1,19 +1,19 @@ package functional import java.util.concurrent.atomic.AtomicInteger - import akka.actor.ActorSystem import com.typesafe.config.ConfigFactory import otoroshi.models.{ClientConfig, ServiceDescriptor, Target} import org.scalatest.concurrent.IntegrationPatience import org.scalatestplus.play.PlaySpec +import otoroshi.utils.syntax.implicits.BetterSyntax import play.api.Configuration import scala.concurrent.duration._ class 
CircuitBreakerSpec(name: String, configurationSpec: => Configuration) extends OtoroshiSpec { - lazy val serviceHost = "cb.oto.tools" + //lazy val serviceHost = "cb.oto.tools" implicit val system = ActorSystem("otoroshi-test") override def getTestConfiguration(configuration: Configuration) = @@ -28,96 +28,96 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte s"[$name] Otoroshi Circuit Breaker" should { - val callCounter1 = new AtomicInteger(0) - val basicTestExpectedBody = """{"message":"hello world"}""" - val basicTestServer1 = TargetService( - Some(serviceHost), - "/api", - "application/json", - { _ => - callCounter1.incrementAndGet() - basicTestExpectedBody - } - ).await() - - val callCounter2 = new AtomicInteger(0) - val basicTestServer2 = TargetService( - Some(serviceHost), - "/api", - "application/json", - { _ => - callCounter2.incrementAndGet() - basicTestExpectedBody - } - ).await() - - val callCounter3 = new AtomicInteger(0) - val basicTestServer3 = TargetService( - Some(serviceHost), - "/api", - "application/json", - { _ => - awaitF(2.seconds).futureValue - callCounter3.incrementAndGet() - basicTestExpectedBody - } - ).await() - "warm up" in { startOtoroshi() getOtoroshiServices().futureValue // WARM UP } - "Open if too many failures" in { + "Retry on failures" in { + + val callCounter1 = new AtomicInteger(0) + val basicTestExpectedBody = """{"message":"hello world"}""" + val basicTestServer1 = TargetService( + "cbr.oto.tools".option, + "/api", + "application/json", + { _ => + callCounter1.incrementAndGet() + basicTestExpectedBody + } + ).await() + + val callCounter2 = new AtomicInteger(0) + val basicTestServer2 = TargetService( + "cbr.oto.tools".option, + "/api", + "application/json", + { _ => + callCounter2.incrementAndGet() + basicTestExpectedBody + } + ).await() + val fakePort = TargetService.freePort - val service = ServiceDescriptor( - id = "cb-test", - name = "cb-test", + val service = ServiceDescriptor( + id = 
"cbr-test", + name = "cbr-test", env = "prod", - subdomain = "cb", + subdomain = "cbr", domain = "oto.tools", targets = Seq( Target( host = s"127.0.0.1:$fakePort", scheme = "http" + ), + Target( + host = s"127.0.0.1:${basicTestServer1.port}", + scheme = "http" + ), + Target( + host = s"127.0.0.1:${basicTestServer2.port}", + scheme = "http" ) ), forceHttps = false, enforceSecureCommunication = false, publicPatterns = Seq("/.*"), clientConfig = ClientConfig( + retries = 2, maxErrors = 3, - sampleInterval = 500 + sampleInterval = 500, + connectionTimeout = 500 ) ) createOtoroshiService(service).futureValue def callServer() = { ws.url(s"http://127.0.0.1:$port/api") - .withHttpHeaders( - "Host" -> "cb.oto.tools" - ) - .get() - .futureValue + .withHttpHeaders( + "Host" -> "cbr.oto.tools" + ) + .get() + .futureValue } val basicTestResponse1 = callServer() - basicTestResponse1.status mustBe 502 - basicTestResponse1.body.contains("the connection to backend service was refused") mustBe true + basicTestResponse1.status mustBe 200 + callCounter1.get() mustBe 1 - callServer() - callServer() - callServer() + callServer().status mustBe 200 + callServer().status mustBe 200 + callServer().status mustBe 200 - val basicTestResponse2 = callServer() - basicTestResponse2.status mustBe 503 - basicTestResponse2.body.contains("the backend service seems a little bit overwhelmed") mustBe true + callCounter1.get() mustBe 2 + callCounter2.get() mustBe 2 deleteOtoroshiService(service).futureValue + basicTestServer1.stop() + basicTestServer2.stop() } - "Open if too many failures and close back" in { + "Open if too many failures" in { val fakePort = TargetService.freePort val service = ServiceDescriptor( id = "cb-test", @@ -151,6 +151,7 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte } val basicTestResponse1 = callServer() + basicTestResponse1.status mustBe 502 basicTestResponse1.body.contains("the connection to backend service was refused") mustBe true @@ 
-162,16 +163,10 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte basicTestResponse2.status mustBe 503 basicTestResponse2.body.contains("the backend service seems a little bit overwhelmed") mustBe true - awaitF(1.seconds).futureValue - - val basicTestResponse3 = callServer() - basicTestResponse3.status mustBe 502 - basicTestResponse3.body.contains("the connection to backend service was refused") mustBe true - deleteOtoroshiService(service).futureValue } - "Retry on failures" in { + "Open if too many failures and close back" in { val fakePort = TargetService.freePort val service = ServiceDescriptor( id = "cb-test", @@ -183,21 +178,12 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte Target( host = s"127.0.0.1:$fakePort", scheme = "http" - ), - Target( - host = s"127.0.0.1:${basicTestServer1.port}", - scheme = "http" - ), - Target( - host = s"127.0.0.1:${basicTestServer2.port}", - scheme = "http" ) ), forceHttps = false, enforceSecureCommunication = false, publicPatterns = Seq("/.*"), clientConfig = ClientConfig( - retries = 2, maxErrors = 3, sampleInterval = 500 ) @@ -214,26 +200,45 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte } val basicTestResponse1 = callServer() + basicTestResponse1.status mustBe 502 + basicTestResponse1.body.contains("the connection to backend service was refused") mustBe true - basicTestResponse1.status mustBe 200 - callCounter1.get() mustBe 1 + callServer() + callServer() + callServer() - callServer().status mustBe 200 - callServer().status mustBe 200 - callServer().status mustBe 200 + val basicTestResponse2 = callServer() + basicTestResponse2.status mustBe 503 + basicTestResponse2.body.contains("the backend service seems a little bit overwhelmed") mustBe true - callCounter1.get() mustBe 2 - callCounter2.get() mustBe 2 + awaitF(1.seconds).futureValue + + val basicTestResponse3 = callServer() + basicTestResponse3.status mustBe 502 + 
basicTestResponse3.body.contains("the connection to backend service was refused") mustBe true deleteOtoroshiService(service).futureValue } "Timeout on long calls" in { + val basicTestExpectedBody = """{"message":"hello world"}""" + val callCounter3 = new AtomicInteger(0) + val basicTestServer3 = TargetService( + "cbt.oto.tools".option, + "/api", + "application/json", + { _ => + awaitF(2.seconds).futureValue + callCounter3.incrementAndGet() + basicTestExpectedBody + } + ).await() + val service = ServiceDescriptor( - id = "cb-test", - name = "cb-test", + id = "cbt-test", + name = "cbt-test", env = "prod", - subdomain = "cb", + subdomain = "cbt", domain = "oto.tools", targets = Seq( Target( @@ -253,7 +258,7 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte def callServer() = { ws.url(s"http://127.0.0.1:$port/api") .withHttpHeaders( - "Host" -> "cb.oto.tools" + "Host" -> "cbt.oto.tools" ) .get() .futureValue @@ -266,14 +271,28 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte ) mustBe true deleteOtoroshiService(service).futureValue + basicTestServer3.stop() } "Timeout on long calls with retries" in { + val basicTestExpectedBody = """{"message":"hello world"}""" + val callCounter3 = new AtomicInteger(0) + val basicTestServer3 = TargetService( + "cbtr.oto.tools".option, + "/api", + "application/json", + { _ => + awaitF(2.seconds).futureValue + callCounter3.incrementAndGet() + basicTestExpectedBody + } + ).await() + val service = ServiceDescriptor( - id = "cb-test", - name = "cb-test", + id = "cbtr-test", + name = "cbtr-test", env = "prod", - subdomain = "cb", + subdomain = "cbtr", domain = "oto.tools", targets = Seq( Target( @@ -287,7 +306,7 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte clientConfig = ClientConfig( retries = 3, callTimeout = 800, - globalTimeout = 2000 + globalTimeout = 500 ) ) createOtoroshiService(service).futureValue @@ -295,7 +314,7 @@ class 
CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte def callServer() = { ws.url(s"http://127.0.0.1:$port/api") .withHttpHeaders( - "Host" -> "cb.oto.tools" + "Host" -> "cbtr.oto.tools" ) .get() .futureValue @@ -309,12 +328,11 @@ class CircuitBreakerSpec(name: String, configurationSpec: => Configuration) exte ) mustBe true deleteOtoroshiService(service).futureValue + basicTestServer3.stop() + } "stop servers" in { - basicTestServer1.stop() - basicTestServer2.stop() - basicTestServer3.stop() system.terminate() } diff --git a/otoroshi/test/functional/JWTVerificationSpec.scala b/otoroshi/test/functional/JWTVerificationSpec.scala index f1141f7385..55e4338854 100644 --- a/otoroshi/test/functional/JWTVerificationSpec.scala +++ b/otoroshi/test/functional/JWTVerificationSpec.scala @@ -125,6 +125,17 @@ class JWTVerificationSpec(name: String, configurationSpec: => Configuration) ext } ).await() + val jwtVerifier = GlobalJwtVerifier( + id = "verifier1", + name = "verifier1", + desc = "verifier1", + strict = true, + source = InHeader(name = "X-JWT-Token"), + algoSettings = HSAlgoSettings(512, "secret"), + strategy = PassThrough(verificationSettings = VerificationSettings(Map("iss" -> "foo", "bar" -> "yo"))) + ) + createOtoroshiVerifier(jwtVerifier).futureValue + val service = ServiceDescriptor( id = "jwt-test", name = "jwt-test", @@ -140,13 +151,7 @@ class JWTVerificationSpec(name: String, configurationSpec: => Configuration) ext forceHttps = false, enforceSecureCommunication = false, publicPatterns = Seq("/.*"), - jwtVerifier = LocalJwtVerifier( - enabled = true, - strict = true, - source = InHeader(name = "X-JWT-Token"), - algoSettings = HSAlgoSettings(512, "secret"), - strategy = PassThrough(verificationSettings = VerificationSettings(Map("iss" -> "foo", "bar" -> "yo"))) - ) + jwtVerifier = RefJwtVerifier(ids = Seq("verifier1"), enabled = true) ) createOtoroshiService(service).futureValue @@ -227,6 +232,7 @@ class JWTVerificationSpec(name: String, 
configurationSpec: => Configuration) ext body3.contains("error.bad.token") mustBe true deleteOtoroshiService(service).futureValue + deleteOtoroshiVerifier(jwtVerifier).futureValue basicTestServer1.stop() } @@ -272,6 +278,19 @@ class JWTVerificationSpec(name: String, configurationSpec: => Configuration) ext } ).await() + val jwtVerifier = GlobalJwtVerifier( + id = "verifier2", + name = "verifier2", + desc = "verifier2", + strict = true, + source = InHeader(name = "X-JWT-Token"), + algoSettings = HSAlgoSettings(512, "secret"), + strategy = Sign( + verificationSettings = VerificationSettings(Map("iss" -> "foo", "bar" -> "yo")), + algoSettings = HSAlgoSettings(512, key)) + ) + createOtoroshiVerifier(jwtVerifier).futureValue + val service = ServiceDescriptor( id = "jwt-test", name = "jwt-test", @@ -287,15 +306,9 @@ class JWTVerificationSpec(name: String, configurationSpec: => Configuration) ext forceHttps = false, enforceSecureCommunication = false, publicPatterns = Seq("/.*"), - jwtVerifier = LocalJwtVerifier( + jwtVerifier = RefJwtVerifier( + ids= Seq("verifier2"), enabled = true, - strict = true, - source = InHeader(name = "X-JWT-Token"), - algoSettings = HSAlgoSettings(512, "secret"), - strategy = Sign( - verificationSettings = VerificationSettings(Map("iss" -> "foo", "bar" -> "yo")), - algoSettings = HSAlgoSettings(512, key) - ) ) ) @@ -367,6 +380,7 @@ class JWTVerificationSpec(name: String, configurationSpec: => Configuration) ext body3.contains("error.bad.token") mustBe true deleteOtoroshiService(service).futureValue + deleteOtoroshiVerifier(jwtVerifier).futureValue basicTestServer1.stop() } @@ -421,6 +435,40 @@ class JWTVerificationSpec(name: String, configurationSpec: => Configuration) ext } ).await() + val jwtVerifier = GlobalJwtVerifier( + id = "verifier3", + name = "verifier3", + desc = "verifier3", + strict = true, + source = InHeader(name = "X-JWT-Token"), + algoSettings = HSAlgoSettings(512, "secret"), + strategy = Transform( + verificationSettings = 
VerificationSettings(Map("iss" -> "foo", "bar" -> "yo")), + algoSettings = HSAlgoSettings(512, key), + transformSettings = TransformSettings( + location = InHeader("X-Barrr"), + mappingSettings = MappingSettings( + map = Map( + "fakebar" -> "x-bar", + "bar" -> "x-bar", + "superfakebar" -> "x-bar" + ), + values = Json.obj( + "x-yo" -> "foo", + "the-date-1" -> "the-${date}", + "the-date-2" -> "the-${date.format('dd-MM-yyyy')}", + "the-var-1" -> "the-${token.var1}", + "the-var-2" -> "the-${token.var2}", + "the-var-1-2" -> "the-${token.var1}-${token.var2}", + "the-host" -> "${req.host}" + ), + remove = Seq("foo") + ) + ) + ) + ) + createOtoroshiVerifier(jwtVerifier).futureValue + val service = ServiceDescriptor( id = "jwt-test", name = "jwt-test", @@ -436,35 +484,9 @@ class JWTVerificationSpec(name: String, configurationSpec: => Configuration) ext forceHttps = false, enforceSecureCommunication = false, publicPatterns = Seq("/.*"), - jwtVerifier = LocalJwtVerifier( - enabled = true, - strict = true, - source = InHeader(name = "X-JWT-Token"), - algoSettings = HSAlgoSettings(512, "secret"), - strategy = Transform( - verificationSettings = VerificationSettings(Map("iss" -> "foo", "bar" -> "yo")), - algoSettings = HSAlgoSettings(512, key), - transformSettings = TransformSettings( - location = InHeader("X-Barrr"), - mappingSettings = MappingSettings( - map = Map( - "fakebar" -> "x-bar", - "bar" -> "x-bar", - "superfakebar" -> "x-bar" - ), - values = Json.obj( - "x-yo" -> "foo", - "the-date-1" -> "the-${date}", - "the-date-2" -> "the-${date.format('dd-MM-yyyy')}", - "the-var-1" -> "the-${token.var1}", - "the-var-2" -> "the-${token.var2}", - "the-var-1-2" -> "the-${token.var1}-${token.var2}", - "the-host" -> "${req.host}" - ), - remove = Seq("foo") - ) - ) - ) + jwtVerifier = RefJwtVerifier( + ids = Seq("verifier3"), + enabled = true ) ) @@ -538,6 +560,7 @@ class JWTVerificationSpec(name: String, configurationSpec: => Configuration) ext body3.contains("error.bad.token") mustBe 
true deleteOtoroshiService(service).futureValue + deleteOtoroshiVerifier(jwtVerifier).futureValue basicTestServer1.stop() } diff --git a/otoroshi/test/functional/MapFilterSpec.scala b/otoroshi/test/functional/MapFilterSpec.scala index 4f529d1dac..b5abadfe2b 100644 --- a/otoroshi/test/functional/MapFilterSpec.scala +++ b/otoroshi/test/functional/MapFilterSpec.scala @@ -715,7 +715,6 @@ class MapFilterSpec extends WordSpec with MustMatchers with OptionValues { matchExpr ) mustBe false } - "exclude events with root operator" in { val logger = Logger("exclude-test") val config = DataExporterConfig( @@ -784,7 +783,6 @@ class MapFilterSpec extends WordSpec with MustMatchers with OptionValues { logger ) mustBe true } - "exclude events with two exclusions" in { val logger = Logger("exclude-test-2") val config = DataExporterConfig( @@ -853,5 +851,26 @@ class MapFilterSpec extends WordSpec with MustMatchers with OptionValues { logger ) mustBe true } + "works on sub-objects of arrays" in { + println("\n\n\n======================================================================\n\n\n") + val source = Json.obj( + "otoroshiHeadersIn" -> Json.arr( + Json.obj("key" -> "key1", "value" -> "value1"), + Json.obj("key" -> "key2", "value" -> "value2"), + Json.obj("key" -> "key3", "value" -> "value3"), + Json.obj("key" -> "key4", "value" -> "value4"), + ) + ) + val res = otoroshi.utils.Projection.project( + source, + Json.obj( + "otoroshiHeadersIn" -> Json.obj("$path" -> "$.otoroshiHeadersIn.*.key") + ), + identity + ) + println(Json.prettyPrint(source)) + println(Json.prettyPrint(res)) + println("\n\n\n======================================================================\n\n\n") + } } } diff --git a/otoroshi/test/functional/ProgrammaticApiSpec.scala b/otoroshi/test/functional/ProgrammaticApiSpec.scala deleted file mode 100644 index 492c0a3622..0000000000 --- a/otoroshi/test/functional/ProgrammaticApiSpec.scala +++ /dev/null @@ -1,143 +0,0 @@ -package functional - -import 
java.nio.file.Files -import java.util.concurrent.atomic.AtomicInteger - -import akka.actor.ActorSystem -import com.typesafe.config.ConfigFactory -import otoroshi.models.{ServiceDescriptor, Target} -import org.apache.commons.io.FileUtils -import org.scalatest.concurrent.IntegrationPatience -import org.scalatestplus.play.PlaySpec -import otoroshi.api.Otoroshi -import play.api.Configuration -import play.api.libs.json.Json -import play.core.server.ServerConfig - -class ProgrammaticApiSpec(name: String, configurationSpec: => Configuration) extends OtoroshiSpec { - - lazy val serviceHost = "basictest.oto.tools" - lazy val serviceHost2 = "basictest2.oto.tools" - - override def getTestConfiguration(configuration: Configuration) = - Configuration( - ConfigFactory - .parseString(s""" - """.stripMargin) - .resolve() - ).withFallback(configurationSpec).withFallback(configuration) - - s"[$name] Otoroshi Programmatic API" should { - - "just works" in { - - import scala.concurrent.duration._ - - implicit val system = ActorSystem("otoroshi-prog-api-test") - val dir = Files.createTempDirectory("otoroshi-prog-api-test").toFile - - val callCounter = new AtomicInteger(0) - val basicTestExpectedBody = """{"message":"hello world"}""" - val basicTestServer = TargetService( - None, - "/api", - "application/json", - { _ => - callCounter.incrementAndGet() - basicTestExpectedBody - } - ).await() - - val initialDescriptor = ServiceDescriptor( - id = "basic-test", - name = "basic-test", - env = "prod", - subdomain = "basictest", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${basicTestServer.port}", - scheme = "http" - ) - ), - localHost = s"127.0.0.1:${basicTestServer.port}", - forceHttps = false, - enforceSecureCommunication = false, - publicPatterns = Seq("/.*") - ) - val otherDescriptor = ServiceDescriptor( - id = "basic-test-2", - name = "basic-test-2", - env = "prod", - subdomain = "basictest2", - domain = "oto.tools", - targets = Seq( - Target( - host = 
s"127.0.0.1:${basicTestServer.port}", - scheme = "http" - ) - ), - localHost = s"127.0.0.1:${basicTestServer.port}", - forceHttps = false, - enforceSecureCommunication = false, - publicPatterns = Seq("/.*") - ) - - val otoroshi = Otoroshi( - ServerConfig( - address = "0.0.0.0", - port = Some(8888), - rootDir = dir - ) - ).startAndStopOnShutdown() - - implicit val env = otoroshi.env - - awaitF(3.seconds).futureValue - - val services = getOtoroshiServices(Some(8888), otoroshi.ws).futureValue - - services.size mustBe 1 - - // create service using rest api - val (_, status) = createOtoroshiService(initialDescriptor, Some(8888), otoroshi.ws).futureValue - - status mustBe 201 - - { - val basicTestResponse1 = otoroshi.ws - .url(s"http://127.0.0.1:8888/api") - .withHttpHeaders( - "Host" -> serviceHost - ) - .get() - .futureValue - - basicTestResponse1.status mustBe 200 - basicTestResponse1.body mustBe basicTestExpectedBody - callCounter.get() mustBe 1 - } - - otoroshi.dataStores.serviceDescriptorDataStore.set(otherDescriptor).futureValue - - { - val basicTestResponse1 = otoroshi.ws - .url(s"http://127.0.0.1:8888/api") - .withHttpHeaders( - "Host" -> serviceHost2 - ) - .get() - .futureValue - - basicTestResponse1.status mustBe 200 - basicTestResponse1.body mustBe basicTestExpectedBody - callCounter.get() mustBe 2 - } - - basicTestServer.stop() - otoroshi.stop() - system.terminate() - FileUtils.deleteDirectory(dir) - } - } -} diff --git a/otoroshi/test/functional/SidecarSpec.scala b/otoroshi/test/functional/SidecarSpec.scala deleted file mode 100644 index a6c129530a..0000000000 --- a/otoroshi/test/functional/SidecarSpec.scala +++ /dev/null @@ -1,212 +0,0 @@ -package functional - -import java.util.Base64 -import java.util.concurrent.atomic.AtomicInteger - -import akka.actor.ActorSystem -import com.auth0.jwt.JWT -import com.auth0.jwt.algorithms.Algorithm -import com.typesafe.config.ConfigFactory -import otoroshi.models.{ApiKey, ServiceDescriptor, ServiceGroupIdentifier, 
Target} -import org.scalatest.concurrent.IntegrationPatience -import org.scalatestplus.play.PlaySpec -import play.api.Configuration -import play.api.libs.ws.WSResponse - -class SidecarSpec(name: String, configurationSpec: => Configuration) extends OtoroshiSpec { - - lazy val serviceHost = "sidecar.oto.tools" - implicit val system = ActorSystem("otoroshi-test") - lazy val fakePort = TargetService.freePort - - def debugResponse(resp: WSResponse): WSResponse = { - if (resp.status != 200) { - println(resp.status + " => " + resp.body) - } - resp - } - - override def getTestConfiguration(configuration: Configuration) = - Configuration( - ConfigFactory - .parseString(s""" - |{ - | app.sidecar.serviceId = "sidecar-service1-test" - | app.sidecar.target = "http://127.0.0.1:$fakePort" - | app.sidecar.from = "127.0.0.1" - | app.sidecar.strict = false - | app.sidecar.apikey.clientId = "sidecar-apikey-test" - |} - """.stripMargin) - .resolve() - ).withFallback(configurationSpec).withFallback(configuration) - - s"[$name] Otoroshi Sidecar" should { - - val basicTestExpectedBody = """{"message":"hello world"}""" - val basicTestServer1 = TargetService - .withPort( - fakePort, - Some(serviceHost), - "/api", - "application/json", - { _ => - basicTestExpectedBody - } - ) - .await() - - val basicTestExpectedBody2 = """{"message":"bye world"}""" - val basicTestServer2 = TargetService( - Some(serviceHost), - "/api", - "application/json", - { _ => - basicTestExpectedBody2 - } - ).await() - - val basicTestExpectedBody3 = """{"message":"yeah world"}""" - val basicTestServer3 = TargetService( - Some("sidecar2.oto.tools"), - "/api", - "application/json", - { _ => - basicTestExpectedBody3 - } - ).await() - - val service1 = ServiceDescriptor( - id = "sidecar-service1-test", - name = "sidecar-service1-test", - env = "prod", - subdomain = "sidecar", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${basicTestServer2.port}", - scheme = "http" - ) - ), - forceHttps = false, - 
enforceSecureCommunication = false - ) - - val service2 = ServiceDescriptor( - id = "sidecar-service2-test", - name = "sidecar-service2-test", - env = "prod", - subdomain = "sidecar2", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${basicTestServer3.port}", - scheme = "http" - ) - ), - forceHttps = false, - enforceSecureCommunication = false - ) - - val apiKey = ApiKey( - clientId = "sidecar-apikey-test", - clientSecret = "1234", - clientName = "apikey-test", - authorizedEntities = Seq(ServiceGroupIdentifier("default")) - ) - - "warm up" in { - startOtoroshi() - getOtoroshiServices().futureValue // WARM UP - createOtoroshiApiKey(apiKey).futureValue - } - - "Allow access to local service from outside" in { - createOtoroshiService(service1).futureValue - - val resp = ws - .url(s"http://127.0.0.1:$port/api") - .withHttpHeaders( - "Host" -> serviceHost, - "Otoroshi-Client-Id" -> apiKey.clientId, - "Otoroshi-Client-Secret" -> apiKey.clientSecret, - "X-Forwarded-For" -> "99.99.99.99" - ) - .get() - .futureValue - - resp.status mustBe 200 - resp.body mustBe basicTestExpectedBody - - deleteOtoroshiService(service1).futureValue - } - - "Not allow access to local service from outside without apikey" in { - createOtoroshiService(service1).futureValue - - val resp = ws - .url(s"http://127.0.0.1:$port/api") - .withHttpHeaders( - "Host" -> serviceHost, - "X-Forwarded-For" -> "99.99.99.99" - ) - .get() - .futureValue - - resp.status mustBe 400 - resp.body.contains("No ApiKey provided") mustBe true - - deleteOtoroshiService(service1).futureValue - } - - "Allow access to outside service from inside without apikey" in { - createOtoroshiService(service1).futureValue - createOtoroshiService(service2).futureValue - - val resp = ws - .url(s"http://127.0.0.1:$port/api") - .withHttpHeaders( - "Host" -> "sidecar2.oto.tools", - "X-Forwarded-For" -> "127.0.0.1" - ) - .get() - .futureValue - - resp.status mustBe 200 - resp.body mustBe basicTestExpectedBody3 - - 
deleteOtoroshiService(service1).futureValue - deleteOtoroshiService(service2).futureValue - } - - "Not allow access to outside service from outside without apikey" in { - createOtoroshiService(service1).futureValue - createOtoroshiService(service2).futureValue - - val resp = ws - .url(s"http://127.0.0.1:$port/api") - .withHttpHeaders( - "Host" -> "sidecar2.oto.tools", - "X-Forwarded-For" -> "127.0.0.2" - ) - .get() - .futureValue - - resp.status mustBe 502 - resp.body.contains("sidecar.bad.request.origin") mustBe true - - deleteOtoroshiService(service1).futureValue - deleteOtoroshiService(service2).futureValue - } - - "stop servers" in { - basicTestServer1.stop() - basicTestServer2.stop() - system.terminate() - } - - "shutdown" in { - stopAll() - } - } -} diff --git a/otoroshi/test/functional/SnowMonkeySpec.scala b/otoroshi/test/functional/SnowMonkeySpec.scala index 6bc1ed3c48..462ba96f79 100644 --- a/otoroshi/test/functional/SnowMonkeySpec.scala +++ b/otoroshi/test/functional/SnowMonkeySpec.scala @@ -89,7 +89,7 @@ class SnowMonkeySpec(name: String, configurationSpec: => Configuration) extends capture = false, exportReporting = false, frontend = NgFrontend( - domains = Seq(NgDomainAndPath("monkey.oto.tools")), + domains = Seq(NgDomainAndPath(serviceHost)), headers = Map(), query = Map(), methods = Seq(), diff --git a/otoroshi/test/functional/Version1413Spec.scala b/otoroshi/test/functional/Version1413Spec.scala index eb61d88db4..9c97086df7 100644 --- a/otoroshi/test/functional/Version1413Spec.scala +++ b/otoroshi/test/functional/Version1413Spec.scala @@ -1,610 +1,687 @@ package functional -import java.util.concurrent.atomic.AtomicInteger import akka.actor.ActorSystem -import akka.http.scaladsl.util.FastFuture import akka.stream.Materializer import com.auth0.jwt.JWT import com.auth0.jwt.algorithms.Algorithm import com.typesafe.config.ConfigFactory import otoroshi.env.Env import otoroshi.models._ -import org.scalatest.concurrent.IntegrationPatience -import 
org.scalatestplus.play.PlaySpec -import otoroshi.next.plugins.api.{NgPluginCategory, NgPluginVisibility, NgStep} -import otoroshi.script -import otoroshi.script._ +import otoroshi.next.models._ +import otoroshi.next.plugins.api._ +import otoroshi.next.plugins.{ApikeyCalls, NgApikeyCallsConfig, NgApikeyMatcher} import otoroshi.security.IdGenerator import play.api.Configuration -import play.api.libs.json.Json +import play.api.libs.json.{JsObject, Json} import play.api.libs.typedmap.TypedKey import play.api.mvc.{Result, Results} -import scala.concurrent.{ExecutionContext, Future} import scala.util.{Failure, Success, Try} +import java.util.concurrent.atomic.AtomicInteger +import scala.concurrent.ExecutionContext class Version1413Spec(name: String, configurationSpec: => Configuration) extends OtoroshiSpec { - implicit val system = ActorSystem("otoroshi-test") - implicit lazy val env = otoroshiComponents.env + implicit val system = ActorSystem("otoroshi-test") + implicit lazy val env = otoroshiComponents.env - override def getTestConfiguration(configuration: Configuration) = - Configuration( - ConfigFactory - .parseString(s""" - |{ - |} + override def getTestConfiguration(configuration: Configuration) = + Configuration( + ConfigFactory + .parseString( + s""" + |{ + |} """.stripMargin) - .resolve() - ).withFallback(configurationSpec).withFallback(configuration) + .resolve() + ).withFallback(configurationSpec).withFallback(configuration) - s"[$name] Otoroshi service descriptors" should { - - "warm up" in { - startOtoroshi() - getOtoroshiServices().futureValue // WARM UP - } + s"[$name] Otoroshi service descriptors" should { - "support missing header (#364)" in { - - val counterBar = new AtomicInteger(0) - val counterKix = new AtomicInteger(0) - - val (_, port1, _, call1) = testServer( - "missingheaders.oto.tools", - port, - validate = req => { - val header = req.getHeader("foo").get().value() - if (header == "bar") { - counterBar.incrementAndGet() - } - if (header == 
"kix") { - counterKix.incrementAndGet() - } - true + "warm up" in { + startOtoroshi() + getOtoroshiServices().futureValue // WARM UP } - ) - - val service1 = ServiceDescriptor( - id = "missingheaders", - name = "missingheaders", - env = "prod", - subdomain = "missingheaders", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${port1}", - scheme = "http" + + "support missing header (#364)" in { + + val counterBar = new AtomicInteger(0) + val counterKix = new AtomicInteger(0) + + val (_, port1, _, call1) = testServer( + "missingheaders.oto.tools", + port, + validate = req => { + val header = req.getHeader("foo").get().value() + if (header == "bar") { + counterBar.incrementAndGet() + } + if (header == "kix") { + counterKix.incrementAndGet() + } + true + } ) - ), - forceHttps = false, - enforceSecureCommunication = false, - publicPatterns = Seq("/.*"), - missingOnlyHeadersIn = Map( - "foo" -> "kix" - ) - ) - createOtoroshiService(service1).futureValue + val service1 = ServiceDescriptor( + id = "missingheaders", + name = "missingheaders", + env = "prod", + subdomain = "missingheaders", + domain = "oto.tools", + targets = Seq( + Target( + host = s"127.0.0.1:${port1}", + scheme = "http" + ) + ), + forceHttps = false, + enforceSecureCommunication = false, + publicPatterns = Seq("/.*"), + missingOnlyHeadersIn = Map( + "foo" -> "kix" + ) + ) - val resp1 = call1( - Map( - "foo" -> "bar" - ) - ) + createOtoroshiService(service1).futureValue - val resp2 = call1( - Map.empty - ) + val resp1 = call1( + Map( + "foo" -> "bar" + ) + ) - resp1.status mustBe 200 - resp2.status mustBe 200 + val resp2 = call1( + Map.empty + ) - counterBar.get() mustBe 1 - counterKix.get() mustBe 1 + resp1.status mustBe 200 + resp2.status mustBe 200 - deleteOtoroshiService(service1).futureValue + counterBar.get() mustBe 1 + counterKix.get() mustBe 1 - stopServers() - } + deleteOtoroshiService(service1).futureValue - "support override header (#364)" in { - - val counterCanal02 = new 
AtomicInteger(0) - val counterCanalBar = new AtomicInteger(0) - - val (_, port1, _, call) = testServer( - "overrideheader.oto.tools", - port, - validate = req => { - val header = req.getHeader("MAIF_CANAL").get().value() - if (header == "02") { - counterCanal02.incrementAndGet() - } - if (header == "bar") { - counterCanalBar.incrementAndGet() - } - true + stopServers() } - ) - - val service1 = ServiceDescriptor( - id = "overrideheader", - name = "overrideheader", - env = "prod", - subdomain = "overrideheader", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${port1}", - scheme = "http" + + "support override header (#364)" in { + + val counterCanal02 = new AtomicInteger(0) + val counterCanalBar = new AtomicInteger(0) + + val (_, port1, _, call) = testServer( + "overrideheader.oto.tools", + port, + validate = req => { + val header = req.getHeader("MAIF_CANAL").get().value() + if (header == "02") { + counterCanal02.incrementAndGet() + } + if (header == "bar") { + counterCanalBar.incrementAndGet() + } + true + } ) - ), - forceHttps = false, - enforceSecureCommunication = false, - publicPatterns = Seq("/.*"), - additionalHeaders = Map( - "MAIF_CANAL" -> "02" - ) - ) - createOtoroshiService(service1).futureValue + val service1 = ServiceDescriptor( + id = "overrideheader", + name = "overrideheader", + env = "prod", + subdomain = "overrideheader", + domain = "oto.tools", + targets = Seq( + Target( + host = s"127.0.0.1:${port1}", + scheme = "http" + ) + ), + forceHttps = false, + enforceSecureCommunication = false, + publicPatterns = Seq("/.*"), + additionalHeaders = Map( + "MAIF_CANAL" -> "02" + ) + ) - val resp1 = call( - Map( - "MAIF_CANAL" -> "bar" - ) - ) + createOtoroshiService(service1).futureValue - val resp2 = call( - Map.empty - ) + val resp1 = call( + Map( + "MAIF_CANAL" -> "bar" + ) + ) - resp1.status mustBe 200 - resp2.status mustBe 200 + val resp2 = call( + Map.empty + ) - counterCanal02.get() mustBe 2 - counterCanalBar.get() mustBe 0 + 
resp1.status mustBe 200 + resp2.status mustBe 200 - deleteOtoroshiService(service1).futureValue + counterCanal02.get() mustBe 2 + counterCanalBar.get() mustBe 0 - stopServers() - } + deleteOtoroshiService(service1).futureValue - "support override header case insensitive (#364)" in { - - val counterCanal02 = new AtomicInteger(0) - val counterCanalBar = new AtomicInteger(0) - - val (_, port1, _, call) = testServer( - "overrideheader.oto.tools", - port, - validate = req => { - val header = req.getHeader("MAIF_CANAL").get().value() - if (header == "02") { - counterCanal02.incrementAndGet() - } - if (header == "bar") { - counterCanalBar.incrementAndGet() - } - true + stopServers() } - ) - - val service1 = ServiceDescriptor( - id = "overrideheader", - name = "overrideheader", - env = "prod", - subdomain = "overrideheader", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${port1}", - scheme = "http" + + "support override header case insensitive (#364)" in { + + val counterCanal02 = new AtomicInteger(0) + val counterCanalBar = new AtomicInteger(0) + + val (_, port1, _, call) = testServer( + "overrideheader.oto.tools", + port, + validate = req => { + val header = req.getHeader("MAIF_CANAL").get().value() + if (header == "02") { + counterCanal02.incrementAndGet() + } + if (header == "bar") { + counterCanalBar.incrementAndGet() + } + true + } ) - ), - forceHttps = false, - enforceSecureCommunication = false, - publicPatterns = Seq("/.*"), - additionalHeaders = Map( - "MAIF_CANAL" -> "02" - ) - ) - createOtoroshiService(service1).futureValue + val service1 = ServiceDescriptor( + id = "overrideheader", + name = "overrideheader", + env = "prod", + subdomain = "overrideheader", + domain = "oto.tools", + targets = Seq( + Target( + host = s"127.0.0.1:${port1}", + scheme = "http" + ) + ), + forceHttps = false, + enforceSecureCommunication = false, + publicPatterns = Seq("/.*"), + additionalHeaders = Map( + "MAIF_CANAL" -> "02" + ) + ) - val resp1 = call( - 
Map( - "maif_canal" -> "bar" - ) - ) + createOtoroshiService(service1).futureValue - val resp2 = call( - Map.empty - ) + val resp1 = call( + Map( + "maif_canal" -> "bar" + ) + ) - resp1.status mustBe 200 - resp2.status mustBe 200 + val resp2 = call( + Map.empty + ) - counterCanal02.get() mustBe 2 - counterCanalBar.get() mustBe 0 + resp1.status mustBe 200 + resp2.status mustBe 200 - deleteOtoroshiService(service1).futureValue + counterCanal02.get() mustBe 2 + counterCanalBar.get() mustBe 0 - stopServers() - } + deleteOtoroshiService(service1).futureValue - "be able to validate access (#360)" in { - val (_, port1, counter1, call1) = testServer("accessvalidator.oto.tools", port) - val service1 = ServiceDescriptor( - id = "accessvalidator", - name = "accessvalidator", - env = "prod", - subdomain = "accessvalidator", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${port1}", - scheme = "http" - ) - ), - forceHttps = false, - enforceSecureCommunication = false, - accessValidator = AccessValidatorRef( - enabled = true, - refs = Seq( - "cp:otoroshi.plugins.apikeys.HasAllowedApiKeyValidator", - "cp:functional.Validator1" - ), - config = Json.obj( - "tags" -> Json.arr("foo") - ) - ) - ) - val validApiKey = ApiKey( - clientId = IdGenerator.token(16), - clientSecret = IdGenerator.token(64), - clientName = "apikey1", - authorizedEntities = Seq(ServiceGroupIdentifier("default")), - tags = Seq("foo", "bar") - ) - val invalidApiKey = ApiKey( - clientId = IdGenerator.token(16), - clientSecret = IdGenerator.token(64), - clientName = "apikey2", - authorizedEntities = Seq(ServiceGroupIdentifier("default")), - tags = Seq("kix") - ) - - createOtoroshiService(service1).futureValue - createOtoroshiApiKey(validApiKey).futureValue - createOtoroshiApiKey(invalidApiKey).futureValue - - TransformersCounters.counterValidator.get() mustBe 0 - - val resp1 = call1( - Map( - "Otoroshi-Client-Id" -> validApiKey.clientId, - "Otoroshi-Client-Secret" -> validApiKey.clientSecret 
- ) - ) + stopServers() + } - TransformersCounters.counterValidator.get() mustBe 1 + "be able to validate access (#360)" in { + val serviceHost = "accessvalidatorhost.oto.tools" + val (_, port1, counter1, call1) = testServer("foo.oto.tools", TargetService.freePort) + val route = NgRoute( + location = EntityLocation.default, + id = "accessvalidator", + name = "accessvalidator", + description = "accessvalidator", + tags = Seq(), + metadata = Map(), + enabled = true, + debugFlow = false, + capture = false, + exportReporting = false, + frontend = NgFrontend( + domains = Seq(NgDomainAndPath(serviceHost)), + headers = Map(), + query = Map(), + methods = Seq(), + stripPath = true, + exact = false + ), + backend = NgBackend( + targets = Seq( + NgTarget( + hostname = "127.0.0.1", + port = port1, + id = "accessvalidator-target", + tls = false + ) + ), + root = "/", + rewrite = false, + loadBalancing = RoundRobin, + client = NgClientConfig.default + ), + plugins = NgPlugins( + Seq( + NgPluginInstance( + plugin = NgPluginHelper.pluginId[ApikeyCalls], + config = NgPluginInstanceConfig(NgApikeyCallsConfig( + routing = NgApikeyMatcher( + enabled = true, + oneTagIn = Seq("foo") + ) + ).json.as[JsObject]) + ), + NgPluginInstance( + plugin = "cp:functional.Validator1" + ) + ) + ) + ) - val resp2 = call1( - Map( - "Otoroshi-Client-Id" -> invalidApiKey.clientId, - "Otoroshi-Client-Secret" -> invalidApiKey.clientSecret - ) - ) + createOtoroshiRoute(route).futureValue - TransformersCounters.counterValidator.get() mustBe 1 + val validApiKey = ApiKey( + clientId = IdGenerator.token(16), + clientSecret = IdGenerator.token(64), + clientName = "apikey1", + authorizedEntities = Seq(ServiceGroupIdentifier("default")), + tags = Seq("foo", "bar") + ) + val invalidApiKey = ApiKey( + clientId = IdGenerator.token(16), + clientSecret = IdGenerator.token(64), + clientName = "apikey2", + authorizedEntities = Seq(ServiceGroupIdentifier("default")), + tags = Seq("kix") + ) - resp1.status mustBe 200 - 
counter1.get() mustBe 1 + createOtoroshiApiKey(validApiKey).futureValue + createOtoroshiApiKey(invalidApiKey).futureValue - resp2.status mustBe 400 - counter1.get() mustBe 1 + TransformersCounters.counterValidator.get() mustBe 0 - deleteOtoroshiService(service1).futureValue - deleteOtoroshiApiKey(validApiKey).futureValue - deleteOtoroshiApiKey(invalidApiKey).futureValue + val resp1 = ws.url(s"http://127.0.0.1:$port/api") + .withHttpHeaders( + "Otoroshi-Client-Id" -> validApiKey.clientId, + "Otoroshi-Client-Secret" -> validApiKey.clientSecret, + "Host" -> serviceHost + ) + .get() + .futureValue - stopServers() - } + TransformersCounters.counterValidator.get() mustBe 1 - "be able to chain transformers (#366)" in { - val (_, port1, counter1, call1) = testServer("reqtrans.oto.tools", port) - val service1 = ServiceDescriptor( - id = "reqtrans", - name = "reqtrans", - env = "prod", - subdomain = "reqtrans", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${port1}", - scheme = "http" - ) - ), - forceHttps = false, - enforceSecureCommunication = false, - publicPatterns = Seq("/.*"), - transformerRefs = Seq( - "cp:functional.Transformer1", - "cp:functional.Transformer2", - "cp:functional.Transformer3" - ) - ) - createOtoroshiService(service1).futureValue + val resp2 = ws.url(s"http://127.0.0.1:$port/api") + .withHttpHeaders( + "Otoroshi-Client-Id" -> invalidApiKey.clientId, + "Otoroshi-Client-Secret" -> invalidApiKey.clientSecret, + "Host" -> serviceHost + ) + .get() + .futureValue - TransformersCounters.counter.get() mustBe 0 - TransformersCounters.counter3.get() mustBe 0 - TransformersCounters.attrsCounter.get() mustBe 0 - counter1.get() mustBe 0 + TransformersCounters.counterValidator.get() mustBe 1 - val resp1 = call1(Map.empty) + resp1.status mustBe 200 + counter1.get() mustBe 1 - TransformersCounters.counter.get() mustBe 3 - TransformersCounters.counter3.get() mustBe 1 - TransformersCounters.attrsCounter.get() mustBe 2 - counter1.get() mustBe 1 
- resp1.status mustBe 200 + resp2.status mustBe 404 + counter1.get() mustBe 1 - val resp2 = ws - .url(s"http://127.0.0.1:${port}/hello") - .withHttpHeaders("Host" -> "reqtrans.oto.tools") - .get() - .futureValue + deleteOtoroshiRoute(route).futureValue + deleteOtoroshiApiKey(validApiKey).futureValue + deleteOtoroshiApiKey(invalidApiKey).futureValue - TransformersCounters.counter.get() mustBe 7 - TransformersCounters.counter3.get() mustBe 1 - TransformersCounters.attrsCounter.get() mustBe 3 - counter1.get() mustBe 1 - resp2.status mustBe 201 + stopServers() + } - deleteOtoroshiService(service1).futureValue + "be able to chain transformers (#366)" in { + val serviceHost = "reqtrans-frontend.oto.tools" + val (_, port1, counter1, _) = testServer("reqtrans.oto.tools", port) + val route = NgRoute( + location = EntityLocation.default, + id = "reqtrans", + name = "reqtrans", + description = "reqtrans", + tags = Seq(), + metadata = Map(), + enabled = true, + debugFlow = false, + capture = false, + exportReporting = false, + frontend = NgFrontend( + domains = Seq(NgDomainAndPath(serviceHost)), + headers = Map(), + query = Map(), + methods = Seq(), + stripPath = true, + exact = false + ), + backend = NgBackend( + targets = Seq( + NgTarget( + hostname = "127.0.0.1", + port = port1, + id = "reqtrans-target", + tls = false + ) + ), + root = "/", + rewrite = false, + loadBalancing = RoundRobin, + client = NgClientConfig.default + ), + plugins = NgPlugins( + Seq( + NgPluginInstance( + plugin = "cp:functional.Transformer1" + ), + NgPluginInstance( + plugin = "cp:functional.Transformer2" + ), + NgPluginInstance( + plugin = "cp:functional.Transformer3" + ) + ) + ) + ) + createOtoroshiRoute(route).futureValue - stopServers() - } + TransformersCounters.counter.get() mustBe 0 + TransformersCounters.counter3.get() mustBe 0 + TransformersCounters.attrsCounter.get() mustBe 0 + counter1.get() mustBe 0 - "support DefaultToken strategy in JWT Verifiers (#373)" in { + val resp1 = 
ws.url(s"http://127.0.0.1:$port/api") + .withHttpHeaders( + "Host" -> serviceHost + ) + .get() + .futureValue + //val resp1 = call1(Map.empty) + + TransformersCounters.counter.get() mustBe 3 + TransformersCounters.counter3.get() mustBe 1 + TransformersCounters.attrsCounter.get() mustBe 2 + counter1.get() mustBe 1 + resp1.status mustBe 200 + + val resp2 = ws + .url(s"http://127.0.0.1:${port}/hello") + .withHttpHeaders("Host" -> serviceHost) + .get() + .futureValue + + TransformersCounters.counter.get() mustBe 7 + TransformersCounters.counter3.get() mustBe 1 + TransformersCounters.attrsCounter.get() mustBe 3 + counter1.get() mustBe 1 + resp2.status mustBe 201 + + deleteOtoroshiRoute(route).futureValue + + stopServers() + } - val algorithm = Algorithm.HMAC512("secret") + "support DefaultToken strategy in JWT Verifiers (#373)" in { + + val algorithm = Algorithm.HMAC512("secret") + + val jwtVerifier = GlobalJwtVerifier( + id = "jwtVerifier", + name = "jwtVerifier", + desc = "jwtVerifier", + strict = true, + source = InHeader(name = "X-JWT-Token"), + algoSettings = HSAlgoSettings(512, "secret"), + strategy = DefaultToken( + true, + Json.obj( + "user" -> "bobby", + "rights" -> Json.arr( + "admin" + ) + ) + ) + ) - val (_, port1, counter1, call1) = testServer( - "defaulttoken.oto.tools", - port, - validate = req => { - val header = req.getHeader("X-JWT-Token").get().value() - Try(JWT.require(algorithm).build().verify(header)) match { - case Success(_) => true - case Failure(_) => false - } - } - ) - val (_, port2, counter2, call2) = testServer( - "defaulttoken2.oto.tools", - port, - validate = req => { - val maybeHeader = req.getHeader("X-JWT-Token") - if (maybeHeader.isPresent) { - Try(JWT.require(algorithm).build().verify(maybeHeader.get().value())) match { - case Success(_) => true - case Failure(_) => false + val jwtVerifier2 = GlobalJwtVerifier( + id = "jwtVerifier2", + name = "jwtVerifier2", + desc = "jwtVerifier2", + strict = true, + source = InHeader(name = 
"X-JWT-Token"), + algoSettings = HSAlgoSettings(512, "secret"), + strategy = DefaultToken( + false, + Json.obj( + "user" -> "bobby", + "rights" -> Json.arr( + "admin" + ) + ) + ) + ) + + val (_, port1, counter1, call1) = testServer( + "defaulttoken.oto.tools", + port, + validate = req => { + val header = req.getHeader("X-JWT-Token").get().value() + Try(JWT.require(algorithm).build().verify(header)) match { + case Success(_) => true + case Failure(_) => false + } } - } else { - true - } - } - ) - - val service1 = ServiceDescriptor( - id = "defaulttoken", - name = "defaulttoken", - env = "prod", - subdomain = "defaulttoken", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${port1}", - scheme = "http" ) - ), - forceHttps = false, - enforceSecureCommunication = false, - publicPatterns = Seq("/.*"), - jwtVerifier = LocalJwtVerifier( - enabled = true, - strict = true, - source = InHeader(name = "X-JWT-Token"), - algoSettings = HSAlgoSettings(512, "secret"), - strategy = DefaultToken( - true, - Json.obj( - "user" -> "bobby", - "rights" -> Json.arr( - "admin" + val (_, port2, counter2, call2) = testServer( + "defaulttoken2.oto.tools", + port, + validate = req => { + val maybeHeader = req.getHeader("X-JWT-Token") + if (maybeHeader.isPresent) { + Try(JWT.require(algorithm).build().verify(maybeHeader.get().value())) match { + case Success(_) => true + case Failure(_) => false + } + } else { + true + } + } + ) + + val service1 = ServiceDescriptor( + id = "defaulttoken", + name = "defaulttoken", + env = "prod", + subdomain = "defaulttoken", + domain = "oto.tools", + targets = Seq( + Target( + host = s"127.0.0.1:${port1}", + scheme = "http" ) + ), + forceHttps = false, + enforceSecureCommunication = false, + publicPatterns = Seq("/.*"), + jwtVerifier = RefJwtVerifier( + enabled = true, + ids = Seq("jwtVerifier") ) ) - ) - ) - - val service2 = ServiceDescriptor( - id = "defaulttoken2", - name = "defaulttoken2", - env = "prod", - subdomain = 
"defaulttoken2", - domain = "oto.tools", - targets = Seq( - Target( - host = s"127.0.0.1:${port2}", - scheme = "http" - ) - ), - forceHttps = false, - enforceSecureCommunication = false, - publicPatterns = Seq("/.*"), - jwtVerifier = LocalJwtVerifier( - enabled = true, - strict = true, - source = InHeader(name = "X-JWT-Token"), - algoSettings = HSAlgoSettings(512, "secret"), - strategy = DefaultToken( - false, - Json.obj( - "user" -> "bobby", - "rights" -> Json.arr( - "admin" + + val service2 = ServiceDescriptor( + id = "defaulttoken2", + name = "defaulttoken2", + env = "prod", + subdomain = "defaulttoken2", + domain = "oto.tools", + targets = Seq( + Target( + host = s"127.0.0.1:${port2}", + scheme = "http" ) + ), + forceHttps = false, + enforceSecureCommunication = false, + publicPatterns = Seq("/.*"), + jwtVerifier = RefJwtVerifier( + enabled = true, + ids = Seq("jwtVerifier2") ) ) - ) - ) - createOtoroshiService(service1).futureValue - createOtoroshiService(service2).futureValue + createOtoroshiVerifier(jwtVerifier).futureValue + createOtoroshiVerifier(jwtVerifier2).futureValue + createOtoroshiService(service1).futureValue + createOtoroshiService(service2).futureValue - counter1.get() mustBe 0 - counter2.get() mustBe 0 + counter1.get() mustBe 0 + counter2.get() mustBe 0 - val resp1 = call1( - Map.empty - ) + val resp1 = call1( + Map.empty + ) - val resp2 = call1( - Map( - "X-JWT-Token" -> JWT - .create() - .withIssuer("mathieu") - .withClaim("bar", "yo") - .sign(algorithm) - ) - ) - - resp1.status mustBe 200 - resp2.status mustBe 400 - counter1.get() mustBe 1 - - val resp3 = call2( - Map.empty - ) - - val resp4 = call2( - Map( - "X-JWT-Token" -> JWT - .create() - .withIssuer("mathieu") - .withClaim("bar", "yo") - .sign(algorithm) - ) - ) + val resp2 = call1( + Map( + "X-JWT-Token" -> JWT + .create() + .withIssuer("mathieu") + .withClaim("bar", "yo") + .sign(algorithm) + ) + ) - resp3.status mustBe 200 - resp4.status mustBe 200 - counter2.get() mustBe 2 + 
resp1.status mustBe 200 + resp2.status mustBe 400 + counter1.get() mustBe 1 - deleteOtoroshiService(service1).futureValue - deleteOtoroshiService(service2).futureValue + val resp3 = call2( + Map.empty + ) - stopServers() - } + val resp4 = call2( + Map( + "X-JWT-Token" -> JWT + .create() + .withIssuer("mathieu") + .withClaim("bar", "yo") + .sign(algorithm) + ) + ) + + resp3.status mustBe 200 + resp4.status mustBe 200 + counter2.get() mustBe 2 + + deleteOtoroshiService(service1).futureValue + deleteOtoroshiService(service2).futureValue + deleteOtoroshiVerifier(jwtVerifier).futureValue + deleteOtoroshiVerifier(jwtVerifier2).futureValue - "shutdown" in { - stopAll() + stopServers() + } + + "shutdown" in { + stopAll() + } } - } } object TransformersCounters { - val attrsCounter = new AtomicInteger(0) - val counterValidator = new AtomicInteger(0) - val counter = new AtomicInteger(0) - val counter3 = new AtomicInteger(0) + val attrsCounter = new AtomicInteger(0) + val counterValidator = new AtomicInteger(0) + val counter = new AtomicInteger(0) + val counter3 = new AtomicInteger(0) } case class FakeUser(username: String) object Attrs { - val CurrentUserKey = TypedKey[FakeUser]("current-user") + val CurrentUserKey = TypedKey[FakeUser]("current-user") } -class Transformer1 extends RequestTransformer { - - override def visibility: NgPluginVisibility = NgPluginVisibility.NgUserLand - override def categories: Seq[NgPluginCategory] = Seq.empty - override def steps: Seq[NgStep] = Seq.empty - - override def transformRequestWithCtx( - context: TransformerRequestContext - )(implicit env: Env, ec: ExecutionContext, mat: Materializer): Future[Either[Result, script.HttpRequest]] = { - TransformersCounters.counter.incrementAndGet() - context.attrs.put(Attrs.CurrentUserKey -> FakeUser("bobby")) - FastFuture.successful( - Right( - context.otoroshiRequest.copy( - headers = context.otoroshiRequest.headers ++ Map( - "foo" -> "bar" - ) +class Transformer1 extends NgRequestTransformer { + 
override def multiInstance: Boolean = true + override def defaultConfigObject: Option[NgPluginConfig] = None + override def isTransformRequestAsync = false + + override def transformRequestSync( + ctx: NgTransformerRequestContext + )(implicit env: Env, ec: ExecutionContext, mat: Materializer): Either[Result, NgPluginHttpRequest] = { + TransformersCounters.counter.incrementAndGet() + ctx.attrs.put(Attrs.CurrentUserKey -> FakeUser("bobby")) + + Right( + ctx.otoroshiRequest.copy( + headers = ctx.otoroshiRequest.headers ++ Map( + "foo" -> "bar" + ) + ) ) - ) - ) - } + } } -class Transformer2 extends RequestTransformer { - - override def visibility: NgPluginVisibility = NgPluginVisibility.NgUserLand - override def categories: Seq[NgPluginCategory] = Seq.empty - override def steps: Seq[NgStep] = Seq.empty - - override def transformRequestWithCtx( - context: TransformerRequestContext - )(implicit env: Env, ec: ExecutionContext, mat: Materializer): Future[Either[Result, script.HttpRequest]] = { - TransformersCounters.counter.incrementAndGet() - context.attrs.get(Attrs.CurrentUserKey) match { - case Some(FakeUser("bobby")) => TransformersCounters.attrsCounter.incrementAndGet() - case _ => - } - if (context.otoroshiRequest.headers.get("foo").contains("bar")) { - TransformersCounters.counter.incrementAndGet() - } - if (context.otoroshiRequest.path == "/hello") { - TransformersCounters.counter.incrementAndGet() - FastFuture.successful(Left(Results.Created(Json.obj("message" -> "hello world!")))) - } else { - FastFuture.successful(Right(context.otoroshiRequest)) +class Transformer2 extends NgRequestTransformer { + override def multiInstance: Boolean = true + override def defaultConfigObject: Option[NgPluginConfig] = None + override def isTransformRequestAsync = false + override def transformRequestSync( + ctx: NgTransformerRequestContext + )(implicit env: Env, ec: ExecutionContext, mat: Materializer): Either[Result, NgPluginHttpRequest] = { + 
TransformersCounters.counter.incrementAndGet() + ctx.attrs.get(Attrs.CurrentUserKey) match { + case Some(FakeUser("bobby")) => TransformersCounters.attrsCounter.incrementAndGet() + case _ => + } + if (ctx.otoroshiRequest.headers.get("foo").contains("bar")) { + TransformersCounters.counter.incrementAndGet() + } + if (ctx.otoroshiRequest.path == "/hello") { + TransformersCounters.counter.incrementAndGet() + Left(Results.Created(Json.obj("message" -> "hello world!"))) + } else { + Right(ctx.otoroshiRequest) + } } - } } -class Transformer3 extends RequestTransformer { - - override def visibility: NgPluginVisibility = NgPluginVisibility.NgUserLand - override def categories: Seq[NgPluginCategory] = Seq.empty - override def steps: Seq[NgStep] = Seq.empty - - override def transformRequestWithCtx( - context: TransformerRequestContext - )(implicit env: Env, ec: ExecutionContext, mat: Materializer): Future[Either[Result, script.HttpRequest]] = { - TransformersCounters.counter3.incrementAndGet() - context.attrs.get(Attrs.CurrentUserKey) match { - case Some(FakeUser("bobby")) => TransformersCounters.attrsCounter.incrementAndGet() - case _ => +class Transformer3 extends NgRequestTransformer { + override def multiInstance: Boolean = true + override def defaultConfigObject: Option[NgPluginConfig] = None + override def isTransformRequestAsync = false + + override def transformRequestSync( + ctx: NgTransformerRequestContext + )(implicit env: Env, ec: ExecutionContext, mat: Materializer): Either[Result, NgPluginHttpRequest] = { + TransformersCounters.counter3.incrementAndGet() + ctx.attrs.get(Attrs.CurrentUserKey) match { + case Some(FakeUser("bobby")) => TransformersCounters.attrsCounter.incrementAndGet() + case _ => + } + Right(ctx.otoroshiRequest) } - FastFuture.successful(Right(context.otoroshiRequest)) - } } -class Validator1 extends AccessValidator { +class Validator1 extends NgAccessValidator { + override def isAccessAsync = false + + override def accessSync(ctx: 
NgAccessContext)(implicit env: Env, ec: ExecutionContext): NgAccess = { + TransformersCounters.counterValidator.incrementAndGet() + NgAccess.NgAllowed + } - override def visibility: NgPluginVisibility = NgPluginVisibility.NgUserLand - override def categories: Seq[NgPluginCategory] = Seq.empty - override def steps: Seq[NgStep] = Seq.empty + override def multiInstance: Boolean = true - override def canAccess(context: AccessContext)(implicit env: Env, ec: ExecutionContext): Future[Boolean] = { - TransformersCounters.counterValidator.incrementAndGet() - FastFuture.successful(true) - } + override def defaultConfigObject: Option[NgPluginConfig] = None } diff --git a/otoroshi/test/functional/Version150Spec.scala b/otoroshi/test/functional/Version150Spec.scala index 924340e626..8e905ddd25 100644 --- a/otoroshi/test/functional/Version150Spec.scala +++ b/otoroshi/test/functional/Version150Spec.scala @@ -209,7 +209,7 @@ class AuthModuleConfigApiSpec(name: String, configurationSpec: => Configuration) } } - override def singleEntity(): AuthModuleConfig = env.datastores.authConfigsDataStore.template("basic".some, env) + override def singleEntity(): AuthModuleConfig = env.datastores.authConfigsDataStore.template(None, env) override def entityName: String = "AuthModuleConfig" override def route(): String = "/api/auths" override def readEntityFromJson(json: JsValue): AuthModuleConfig = AuthModuleConfig._fmt(env).reads(json).get @@ -362,6 +362,7 @@ class CertificateApiSpec(name: String, configurationSpec: => Configuration) exte } } + override def queryParams(): Seq[(String, String)] = Seq(("enrich", "false")) override def singleEntity(): Cert = Await.result(env.datastores.certificatesDataStore.template(ec, env), 10.seconds) override def entityName: String = "Cert" override def route(): String = "/api/certificates" @@ -518,7 +519,7 @@ class ApikeyServiceApiSpec(name: String, configurationSpec: => Configuration) .initiateNewApiKey("admin-api-group", env) .copy(authorizedEntities = 
Seq(ServiceDescriptorIdentifier("admin-api-service"))) override def entityName: String = "ApiKey" - override def route(): String = "/api/services/admin-api-service/apikeys" + override def route(): String = "/api/routes/admin-api-service/apikeys" override def readEntityFromJson(json: JsValue): ApiKey = ApiKey._fmt.reads(json).get override def writeEntityToJson(entity: ApiKey): JsValue = ApiKey._fmt.writes(entity) override def updateEntity(entity: ApiKey): ApiKey = entity.copy(clientName = entity.clientName + " - updated") diff --git a/otoroshi/test/functional/utils.scala b/otoroshi/test/functional/utils.scala index e202707e2c..a2248564e1 100644 --- a/otoroshi/test/functional/utils.scala +++ b/otoroshi/test/functional/utils.scala @@ -1346,7 +1346,7 @@ trait OtoroshiSpec extends WordSpec with MustMatchers with OptionValues with Sca .map { resp => (resp.json, resp.status) } - .andWait(1000.millis) + .andWait(2000.millis) } def createOtoroshiApiKey( @@ -1367,6 +1367,24 @@ trait OtoroshiSpec extends WordSpec with MustMatchers with OptionValues with Sca .andWait(2000.millis) } + def deleteOtoroshiVerifier( + verifier: GlobalJwtVerifier, + customPort: Option[Int] = None, + ws: WSClient = wsClient + ): Future[(JsValue, Int)] = { + ws.url(s"http://localhost:${customPort.getOrElse(port)}/api/verifiers/${verifier.id}") + .withHttpHeaders( + "Host" -> "otoroshi-api.oto.tools", + "Content-Type" -> "application/json" + ) + .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) + .delete() + .map { resp => + (resp.json, resp.status) + } + .andWait(1000.millis) + } + def deleteOtoroshiApiKey( apiKey: ApiKey, customPort: Option[Int] = None, @@ -1426,7 +1444,7 @@ trait OtoroshiSpec extends WordSpec with MustMatchers with OptionValues with Sca .map { resp => (resp.json, resp.status) } - .andWait(1000.millis) + .andWait(2000.millis) } } @@ -1905,6 +1923,7 @@ trait ApiTester[Entity] { def ws: WSClient def env: Env def port: Int + def queryParams: Seq[(String, 
String)] = Seq() def testingBulk: Boolean = true @@ -1965,6 +1984,7 @@ trait ApiTester[Entity] { val path = route() ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withHttpHeaders("Content-Type" -> "application/json") .withFollowRedirects(false) @@ -1994,6 +2014,7 @@ trait ApiTester[Entity] { val path = route() + "/" + extractId(entity) ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withHttpHeaders("Content-Type" -> "application/json") .withFollowRedirects(false) @@ -2024,6 +2045,7 @@ trait ApiTester[Entity] { val path = route() + "/" + extractId(entity) ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withHttpHeaders("Content-Type" -> "application/json") .withFollowRedirects(false) @@ -2052,6 +2074,7 @@ trait ApiTester[Entity] { val path = route() + "/" + extractId(entity) ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withFollowRedirects(false) .withMethod("DELETE") @@ -2077,6 +2100,7 @@ trait ApiTester[Entity] { val path = route() ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withFollowRedirects(false) .withMethod("GET") @@ -2098,6 +2122,7 @@ trait ApiTester[Entity] { val path = route() + "/" + extractId(entity) ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withFollowRedirects(false) 
.withMethod("GET") @@ -2119,6 +2144,7 @@ trait ApiTester[Entity] { val path = route() + "/_bulk" ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withHttpHeaders("Content-Type" -> "application/x-ndjson") .withFollowRedirects(false) @@ -2148,6 +2174,7 @@ trait ApiTester[Entity] { val path = route() + "/_bulk" ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withHttpHeaders("Content-Type" -> "application/x-ndjson") .withFollowRedirects(false) @@ -2192,6 +2219,7 @@ trait ApiTester[Entity] { val path = route() + "/_bulk" ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withHttpHeaders("Content-Type" -> "application/x-ndjson") .withFollowRedirects(false) @@ -2224,6 +2252,7 @@ trait ApiTester[Entity] { val path = route() + "/_bulk" ws .url(s"http://otoroshi-api.oto.tools:$port$path") + .withQueryStringParameters(queryParams:_*) .withAuth("admin-api-apikey-id", "admin-api-apikey-secret", WSAuthScheme.BASIC) .withFollowRedirects(false) .withMethod("DELETE")